A new version of Altai Private Cloud, 1.0.1, is ready to use. This is a bugfix release, and everybody is recommended to use it instead of 1.0.0. Read our upgrade guide to do this.
Consider that we develop a project and have a huge repo of different RPMs. We want to release new versions and publish them into new repos, but we know that usually only a few packages are upgraded. Nobody wants to copy all the unchanged packages to a new repo. However, we could keep only the changed packages in the new repo and refer to both repos in the .repo file: yum will choose the latest packages.
[my-project-1.0]
name=My Project 1.0
baseurl=http://myserver.org/repos/1.0
enabled=1

[my-project-1.1]
name=My Project 1.1
baseurl=http://myserver.org/repos/1.1
enabled=1
Looks nice, but how many repositories can yum support? Will it crash or become extremely slow if there are hundreds of repositories with different packages? Let us check.
We will create 200 repositories with different versions of packages many-repos-a and many-repos-b: repo #(2 * i) will contain packages many-repos-a-#(2 * i) and repo #(2 * i +…
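As a rough sketch of how such a test setup could be generated (the directory layout is our illustrative choice; in the real experiment each directory would be populated with RPMs and indexed with createrepo, which is omitted here):

```shell
#!/bin/sh
# Create 200 empty repository directories for the experiment.
# In a real run, each directory would hold its RPM packages and
# would then be indexed with `createrepo <dir>` before being
# listed as a separate [section] in a .repo file.
mkdir -p repos
i=1
while [ "$i" -le 200 ]; do
  mkdir -p "repos/many-repos-$i"
  i=$((i + 1))
done
ls repos | wc -l
```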
To our blog community
We are very excited to announce the launch of the Altai Private Cloud For DevOps solution.
Altai combines the best technologies from the OpenStack community with extensions developed by Grid Dynamics—enhancing its stability and scalability, optimizing the developer experience, and shortening time-to-market for new applications.
New key features include:
At Grid Dynamics we are committed to building optimized solutions for eCommerce businesses — solutions that are not just geared toward production deployments, but are also ideal for development and QA operations. We are proud to offer faster, better solutions to match and resolve our customers’ toughest challenges.
Grid Dynamics has been using OpenStack inside the company since the Diablo release — adding new extensions and building new services around them such as our accounting system, which we have already opensourced.
Now, we would like to bring you our latest offering — the Altai Private Cloud For Developers.
Try the new cloud for yourself! Visit us at
and learn more about this exciting new cloud distribution.
Recently, we implemented LVM disk support (based on our work for Diablo) and successfully delivered it upstream.
The new LVM image backend can be enabled by setting the libvirt_images_type flag to the value lvm.
You must also specify the volume group name for VM disks, which is done by setting the libvirt_images_volume_group option (for example, libvirt_images_volume_group=NovaVG).
The backend supports usual logical volumes with full space allocation. However, it is possible to use sparse logical volumes (which are created with the --virtualsize option). In this case, you need to specify libvirt_sparse_logical_volumes=True in nova.conf. When this mode is enabled, Nova will write warning messages to the log on any attempt to allocate a sparse logical volume whose possible size exceeds the size of the volume group used to hold VM disks.
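Taken together, enabling the LVM backend in nova.conf comes down to these options (NovaVG is the example volume group name from above; the sparse flag is optional):

```ini
# Use LVM logical volumes as the image backend
libvirt_images_type=lvm
# Volume group that will hold VM disks
libvirt_images_volume_group=NovaVG
# Optional: use sparse (thinly allocated) logical volumes
libvirt_sparse_logical_volumes=True
```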
For more information on this functionality, please see the blueprint.
You can also review the code changes here.
We are happy to inform you that we have prepared Essex RPM packages for RHEL and CentOS. We also tested the packages for compatibility with Scientific Linux. For CentOS you should also use the EPEL repository.
RHEL yum repo: http://yum.griddynamics.net/yum/essex/
CentOS yum repo: http://yum.griddynamics.net/yum/essex-centos/
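As a sketch, a .repo file for the CentOS repository could look like this (the repo id, name, and gpgcheck value are our illustrative choices, not something shipped with the packages):

```ini
[griddynamics-essex]
name=OpenStack Essex packages from Grid Dynamics
baseurl=http://yum.griddynamics.net/yum/essex-centos/
enabled=1
gpgcheck=0
```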
We are waiting for your questions and comments on our mailing list: email@example.com.
Currently, the Libvirt and Xen virtualisation drivers for OpenStack Nova handle local storage differently. The amount of local space reserved for an instance is determined by the instance type (i.e. flavor) and is known as local_gb; here is where the difference lies.
The Libvirt backend: attaches a second disk (disk.local) with the size of local_gb to the instance.
The Xen driver:
We decided to add to the Libvirt backend an optional ability to work the same way the Xen backend does:
To enable this strategy, add the --disable_disk_local=true flag to your nova.conf. The drawback of this strategy is that you cannot use the m1.tiny flavor anymore, because it has local_gb set to zero. All other flavors work properly.
Local volumes are functionality similar to regular Nova volumes, but the volume data is always stored on a local disk. This can help when you need a resizable disk for a VM but do not want to attach a volume over the network.
This functionality was implemented as a Nova API extension (gd-local-volumes). You can manage local volumes from the command line with the nova2ools-local-volumes command. Local volumes are stored in the same format as VM disks (see Using LVM as Disk Storage for OpenStack). You can mix nova-volumes and local volumes on the same instance.
To create a local volume using nova2ools, just use the “nova2ools-local-volumes create” command:
$ nova2ools-local-volumes create --vm <Id or name of VM> --size <Size> --device <Path to device inside Guest OS (example: /dev/vdb)>
This command will create a local volume of the specified size for the given VM. There are some caveats, too. Due to libvirt behavior (https://bugzilla.redhat.com/show_bug.cgi?id=693372), the device name in the guest OS can differ from what you specified in the --device option. For instance, device names can simply be chosen in lexicographical order (vda, vdb, vdc, and so on). Another caveat is that the device may not be visible in the guest OS until you reboot the VM. Each local volume is identified by its VM id and device name.
You can create a new local volume from an existing local volume snapshot using the --snapshot option like this:
$ nova2ools-local-volumes create --vm <Id or name of VM> --snapshot <Id of local volume snapshot> --size <Size> --device <Path to device inside Guest OS (example: /dev/vdb)>
You can omit the --size option when creating a local volume from a snapshot; the local volume will then have the actual snapshot size. If you specify the --size option, the local volume will be resized to this size.
NO UNDERLYING FILE SYSTEM RESIZE WILL BE PERFORMED, so be careful with that!
To see the list of all created local volumes, simply run:
$ nova2ools-local-volumes list
After you create a local volume for a particular VM, you can perform these operations on it:
To create a snapshot, use:
$ nova2ools-local-volumes snapshot --id <Id of local volume> --name <Name of snapshot>
You can see your snapshot in the list provided by:
$ nova2ools-images list
A local volume can be resized (with no underlying filesystem resize) with the command:
$ nova2ools-local-volumes resize --id <Volume Id> --size <New size>
Finally, you can delete local volumes that are no longer needed:
$ nova2ools-local-volumes delete --id <Volume Id>
In conclusion, the essential facts about using local volumes:
Finally, some notes if you are using local volumes with the LVM storage type. As mentioned in Using LVM as Disk Storage for OpenStack, LVM snapshots can be taken either on a running instance with the force_snapshot flag specified or on a suspended instance. The same applies when you try to snapshot a local volume with the LVM backend. However, we noticed that the suspend/resume Nova API calls had been disabled for some reason, so we turned them back on; you can use suspend/resume to take snapshots of LVM local volumes.
Hopefully, you will find this feature useful when you need to allocate just a local disk for a particular VM, not a volume somewhere else in the cluster.
Sources are available on GitHub:
REST API Documentation:
You (as a cloud client) could not register your own image using python-novaclient as you could with euca2ools. But euca2ools is not a native client for OpenStack, so we decided to add this ability to our own OpenStack client, nova2ools.
Now you can register images for your own needs as a cloud client (not as a cloud admin; in that case you could register images using `nova-manage`). There are three options:
To register an image, perform the following command:
$ nova2ools-images register --path=<path to image on the local machine> --name=<image name> [--kernel=<kernel id> --ramdisk=<ramdisk id> --public=F]
Here you can optionally define the kernel and ramdisk you want your image to depend on. The “public” flag determines whether the image will be public (accessible by clients of any other project) or private (accessible only within your project).
To register the images separately, you can perform these commands:
$ nova2ools-images register-ramdisk --path=<ramdisk path> --name=<image name> (registers a ramdisk image)
$ nova2ools-images register-kernel --path=<kernel path> --name=<image name> (registers a kernel image)
$ nova2ools-images register --path=<rootfs path> --name=<image name> --kernel=<kernel id> --ramdisk=<ramdisk id> (registers the final image)
You can also do the same in one command:
$ nova2ools-images register-all --image=<rootfs path> --kernel=<kernel path> --ramdisk=<ramdisk path> --name=<image name>
This is a good choice if you have all three images at registration time.
There are also some special things you should keep in mind to work with images properly:
The `nova-dns` package was developed to solve two tasks:
To solve the first task, the service monitors the message bus (RabbitMQ). For every started instance the service adds a DNS record; for every terminated one, it removes the record. The DNS name is chosen in the form hostname.tenant_name.root_zone. If a zone for tenant_name does not exist yet, it is created and populated automatically.
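The naming scheme can be illustrated with a quick sketch (the hostname, tenant, and root zone below are made-up example values, not anything shipped with nova-dns):

```shell
#!/bin/sh
# Build the DNS name exactly as described: hostname.tenant_name.root_zone
hostname=web01
tenant=devteam
root_zone=cloud.example.org
echo "${hostname}.${tenant}.${root_zone}"
```

So an instance named web01 in tenant devteam under root zone cloud.example.org would get the record web01.devteam.cloud.example.org.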
To solve the second task, the service starts a REST API server on a specified IP/port. The REST API supports authentication against Keystone and utilizes Keystone's RBAC.
PowerDNS is used as the DNS backend, but other backends (for example, for BIND) can be added later.
Next, we are going to add support for PTR zones and the ability to create a personal zone for every started instance.