YUM Repository Limit

Alessio Ababilov's Blog

Consider that we develop a project and have a huge repo of different RPMs. We want to release new versions and publish them into new repos, but usually only a few packages are upgraded. Nobody wants to copy all the unchanged packages to a new repo. Instead, we could keep only the changed packages in the new repo and refer to both repos in a .repo file: yum will choose the latest version of each package.

[my-project-1.0]
name=My Project 1.0
baseurl=http://myserver.org/repos/1.0
enabled=1

[my-project-1.1]
name=My Project 1.1
baseurl=http://myserver.org/repos/1.1
enabled=1

Looks nice, but how many repositories can yum support? Will it crash or become extremely slow if there are hundreds of repositories with different packages? Let's check.

We will create 200 repositories with different versions of packages many-repos-a, many-repos-b, and many-repos-c. Repo #(2 * i) will contain packages many-repos-a-#i and many-repos-a-#(2 * i) and repo #(2 * i +…


Out of the box and into the cloud – Altai Private Cloud For DevOps takes you there

To our blog community

We are very excited to announce the launch of the Altai Private Cloud For DevOps solution.

Altai combines the best technologies from the OpenStack community with extensions developed by Grid Dynamics—enhancing its stability and scalability, optimizing the developer experience, and shortening time-to-market for new applications.

New key features include:

  • installation in under 15 minutes
  • VLAN-based mode for network isolation between projects
  • self-service dashboard for administrators, development managers, and engineers
  • built-in accounting and billing system, with reports available via dashboard
  • built-in DNS based on PowerDNS
  • a new method of handling project-private and global images, with global images managed by administrators

At Grid Dynamics we are committed to building optimized solutions for eCommerce businesses — solutions that are not just geared toward production deployments, but are also ideal for development and QA operations. We are proud to offer faster, better solutions to match and resolve our customers’ toughest challenges.

Grid Dynamics has been using OpenStack inside the company since the Diablo release, adding new extensions and building new services around it, such as our accounting system, which we have already open-sourced.

Now, we would like to bring you our latest offering — the Altai Private Cloud For Developers.

Try the new cloud for yourself! Visit us at

http://www.griddynamics.com/solutions/altai-private-cloud-for-developers/

and learn more about this exciting new cloud distribution.

LVM disks support in OpenStack Nova Folsom

Recently, we implemented LVM disk support (based on our work for Diablo) and successfully delivered it upstream.

The new LVM image backend can be enabled by setting the libvirt_images_type flag to lvm.

You must also specify the volume group name for VM disks by setting the libvirt_images_volume_group option (for example, libvirt_images_volume_group=NovaVG).

The backend supports ordinary logical volumes with full space allocation. However, it is also possible to use sparse logical volumes (created with the --virtualsize option). In this case, you need to set libvirt_sparse_logical_volumes=True in nova.conf. When this mode is enabled, nova will log a warning whenever the potential size of a sparse logical volume being allocated exceeds the size of the volume group that holds the VM disks.
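
Putting it together, a minimal nova.conf sketch for this setup would look like the following (NovaVG is just the example value from above; the sparse option is optional):

libvirt_images_type=lvm
libvirt_images_volume_group=NovaVG
# optional: enable sparse (thinly allocated) logical volumes
libvirt_sparse_logical_volumes=True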

For more information on this functionality, please see the blueprint.

You can also review the code changes here.

RHEL and CentOS RPM packages for the OpenStack Essex (2012.1) release are out

We are happy to inform you that we have prepared Essex RPM packages for RHEL and CentOS. We have also tested the packages for compatibility with Scientific Linux. For CentOS you should also use the EPEL repository.

RHEL yum repo: http://yum.griddynamics.net/yum/essex/

CentOS yum repo: http://yum.griddynamics.net/yum/essex-centos/
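
As a quick sketch, a .repo file for the RHEL repository could look like this (the repo id, name, and gpgcheck setting are illustrative choices, not prescribed by the packages):

[openstack-essex]
name=OpenStack Essex (Grid Dynamics)
baseurl=http://yum.griddynamics.net/yum/essex/
enabled=1
gpgcheck=0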

Essex Packages:

  • openstack-nova-essex
  • openstack-glance-essex
  • openstack-keystone-essex
  • openstack-swift-essex
  • openstack-quantum-essex
  • python-quantumclient
  • python-novaclient-essex
Setup instructions for Essex (for testing, not for production):

http://openstack.griddynamics.com/setup_single_essex.html.

We look forward to your questions and comments on our mailing list: openstack@griddynamics.com.

Changes to the handling of local storage for the libvirt driver

Currently, the libvirt and Xen virtualisation drivers for OpenStack Nova handle local storage differently. The amount of local space reserved for an instance is determined by the instance type (i.e. flavor) and is known as local_gb; here is where the difference comes in.

The Libvirt backend:

  • Downloads the image from glance
  • Tries to resize this image to 10GB (adjustable via the --minimum_root_size flag)
  • Attaches a second disk (known as disk.local) with the size of local_gb to the instance

The Xen driver:

  • Downloads the image from glance
  • Creates a vdi from this image
  • Resizes the vdi to the size of local_gb

We decided to add an optional ability for the libvirt backend to work the same way the Xen backend does:

To enable this strategy, add the --disable_disk_local=true flag to your nova.conf. The drawback of this strategy is that you can no longer use the m1.tiny flavor, because its local_gb is set to zero. All other flavors work properly.

Add Support for Local Volumes in OpenStack

Local volumes are functionality similar to regular nova volumes, except that volume data is always stored on a local disk. This helps when you need a resizable disk for a VM but do not want to attach a volume over the network.

This functionality was implemented as a Nova API extension (gd-local-volumes). You can manage local volumes from the command line with the nova2ools-local-volumes command. Local volumes are stored in the same format as VM disks (see Using LVM as Disk Storage for OpenStack). You can mix nova-volumes and local volumes for the same instance.

To create a local volume using nova2ools, just use the “nova2ools-local-volumes create” command:

$ nova2ools-local-volumes create --vm <Id or name of VM> --size <Size> --device <Path to device inside Guest OS (example: /dev/vdb)>

This command will create a local volume for the given VM with the specified size. There are some caveats, too. Due to libvirt behavior (https://bugzilla.redhat.com/show_bug.cgi?id=693372), the device name in the guest OS can differ from what you specified in the --device option. For instance, device names may simply be assigned in lexicographical order (vda, vdb, vdc, and so on). Another caveat is that the device may not be visible in the guest OS until you reboot the VM. Each local volume is identified by its VM id and device name.
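
As a quick sanity check (assuming a virtio guest, where disks appear as /dev/vd*), you can list the block devices inside the guest after a reboot to see which name the volume actually received:

$ ls -l /dev/vd*
$ cat /proc/partitions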

You can create a new local volume from an existing local volume snapshot using the --snapshot option like this:

$ nova2ools-local-volumes create --vm <Id or name of VM> --snapshot <Id of local volume snapshot> --size <Size> --device <Path to device inside Guest OS (example: /dev/vdb)>

You can omit the --size option when creating a local volume from a snapshot; the local volume will then have the snapshot's actual size. If you specify the --size option, the local volume will be resized to that size.

NO UNDERLYING FILE SYSTEM RESIZE WILL BE PERFORMED, so be careful with that!

To see the list of all created local volumes, simply run:

$ nova2ools-local-volumes list

After you create a local volume for a particular VM, you can perform these operations on it:

  • Take a snapshot of the local volume
  • Resize the local volume
  • Delete the local volume

To create a snapshot, use:

$ nova2ools-local-volumes snapshot --id <Id of local volume> --name <Name of snapshot>

You can see your snapshot in the list provided by the

$ nova2ools-images list

command.

A resize of a local volume (with no underlying filesystem resize) can be performed with:

$ nova2ools-local-volumes resize --id <Volume Id> --size <New size>

Finally, you can delete local volumes that are no longer needed:

$ nova2ools-local-volumes delete --id <Volume Id>
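
Putting the whole lifecycle together, a typical session might look like this (the VM name, volume id, sizes, and snapshot name are made up for illustration):

$ nova2ools-local-volumes create --vm my-vm --size 10 --device /dev/vdb
$ nova2ools-local-volumes list
$ nova2ools-local-volumes snapshot --id 1 --name my-vdb-snap
$ nova2ools-local-volumes resize --id 1 --size 20
$ nova2ools-local-volumes delete --id 1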

In conclusion, the essential facts about using local volumes:

  • They are allocated on the same host where the VM runs
  • They are identified by the id of the VM and the device name
  • They use the same storage type as VM disks (raw, qcow2, or LVM)
  • Snapshot support depends on the backend: snapshots are allowed for qcow2 and LVM (on a suspended instance) but cannot be made on raw
  • Quota applies to local volumes too, just like to ordinary volumes
  • They are linked to a particular VM forever and can only be deleted, not detached and attached again
  • They are attached to the VM just like usual VM disks, without any iSCSI
  • A resize of a local volume does not resize the underlying filesystem
  • They will be deleted if you delete the VM that uses them

Finally, some notes if you are using local volumes with the LVM storage type. As mentioned in Using LVM as Disk Storage for OpenStack, LVM snapshots can be taken either on a running instance with the force_snapshot flag specified or on a suspended instance. The same applies when you snapshot a local volume with the LVM backend. However, we noticed that the suspend/resume Nova API calls had been disabled for some reason, so we turned them back on, and you can use suspend/resume to take snapshots of LVM local volumes.

Hopefully, you will find this feature useful when you need to allocate just a local disk for a particular VM, not a volume somewhere in the cluster.

Sources are available on GitHub:

https://github.com/griddynamics/nova/commit/f354279158ca702baf9226e19c3584de58541ccc

REST API Documentation:

http://www.griddynamics.com/openstack/docs/local-volumes-api.html

Image registering in nova2ools

You (as a cloud client) couldn't register your own image using python-novaclient as you could with euca2ools. But euca2ools is not a native client for OpenStack, so we decided to add this ability to our own OpenStack client, nova2ools.

Now you can register images for your own needs as a cloud client (not as a cloud admin; admins can register images using `nova-manage`). There are three options:

  1. You register a stand-alone image, i.e. an image that does not depend on any other images
  2. You register the ramdisk, kernel, and rootfs images separately (three commands): you register the ramdisk and kernel images first, then register the rootfs depending on those two images
  3. You register the ramdisk, kernel, and rootfs simultaneously (one command): this registers the images in a single command. The result is the same as in the second option, but it is more convenient

To register an image, run the following command:

$ nova2ools-images register --path=<path to image on the local machine> --name=<image name> [--kernel=<kernel id> --ramdisk=<ramdisk id> --public=F]

Here you can optionally specify the kernel and ramdisk you want your image to depend on. The “public” flag determines whether the image will be public (accessible by clients of any other project) or private (accessible only within your project).

To register the images separately, run these commands:

$ nova2ools-images register-ramdisk --path=<ramdisk path> --name=<image name>  —  registering the ramdisk image

$ nova2ools-images register-kernel --path=<kernel path> --name=<image name>  —  registering the kernel image

$ nova2ools-images register --path=<rootfs path> --name=<image name> --kernel=<kernel id> --ramdisk=<ramdisk id>  —  registering the final image

You can also do the same things in one command:

$ nova2ools-images register-all --image=<rootfs path> --kernel=<kernel path> --ramdisk=<ramdisk path> --name=<image name>  —  this is a good choice if you have all three images at hand at registration time

There are also some special things to keep in mind in order to work with images properly:

  1. If your cloud uses Keystone as the authentication service, you have to set the USE_KEYSTONE=true environment variable on the client machine
  2. If Keystone is not used, you should manually set the GLANCE_URL environment variable
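
For example, a complete registration session with Keystone might look like this (the file names and image name are made up for illustration):

$ export USE_KEYSTONE=true
$ nova2ools-images register-all --image=rootfs.img --kernel=vmlinuz --ramdisk=initrd.img --name=my-linux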

DNS management system for OpenStack

The `nova-dns` package was developed to solve two tasks:

  • map instances’ hostnames to IP addresses in DNS
  • provide a REST API to manage DNS

To solve the first task, the service monitors the message bus (RabbitMQ). For every started instance the service adds a DNS record; for every terminated one, it removes the record. The DNS name is chosen in the form hostname.tenant_name.root_zone. If a zone for tenant_name does not exist yet, it is created and populated automatically.
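
For example, with an (illustrative) root zone of example.com, an instance with hostname web01 in tenant dev would be published as web01.dev.example.com.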

To solve the second task, the service starts a REST API server on the specified ip/port. The REST API supports authentication against Keystone and utilizes Keystone’s RBAC.

PowerDNS is used as the DNS backend, but other backends (for example, BIND) can be added later.

Next, we are going to add support for PTR zones and the ability to create a personal zone for every started instance.

Links: