Altai Private Cloud For Developers

Altai v1.1.1 is out

Hello everybody!

A new version of Altai Private Cloud for Developers, 1.1.1, is ready to use. In this release, we support reliable updates from any previous Altai version.

This release is recommended for everyone as a replacement for 1.1.0. The update procedure from any previous release is safe and automated – just follow our upgrade guide.

Other important changes:

  • guess the root partition of virtual machine images when setting login credentials;
  • add a script for LDAP configuration after Altai installation;
  • support instance live migration;
  • fix a VNC security problem that occurred for deleted instances;
  • stronger validation of Altai configuration parameters to detect installation/update problems earlier;
  • bugfixes in the UI;
  • correctly add NS server records on zone creation;
  • purge information about deleted instances from the database.

Altai v1.1.0 with LDAP/AD support

Hello, everybody!

We are glad to announce a new version of Altai Private Cloud for Developers.

This update introduces out-of-the-box LDAP/AD support in Altai Cloud. It should help manage cloud users and authorization in companies where LDAP or AD is used.

Preconditions for a Common OpenStack Client Library

OpenStack client packages have a long history. It began in November 2009, when the Rackspace Cloud Servers package was started. It provided a Python API (the cloudservers module) and a command-line script (cloudservers). Initially, the script was just a stub, but it grew into a useful CLI utility able to launch, stop, and resize virtual machines.

The cloudservers package introduced a library architecture that is still in use today. All its entities can be split into five groups.

  1. Resources, e.g., a flavor, a server, or an image. Technically, a resource is a Python object whose class is a descendant of the Resource class.
  2. Managers provide operations on resources, for example, “list all flavors” or “delete an image”. So, we have a flavor manager, an image manager, and so on. As you may guess, manager classes are descendants of the Manager class.
  3. HTTP client provides a convenient interface for the managers, sending HTTP requests to the server. The HTTP client is also responsible for the authentication process, which changed a lot after the Keystone service was introduced, so newer HTTP clients are a bit more complicated.
  4. Exceptions are normal Python exceptions raised by the HTTP client for HTTP error codes. This is a more or less rich hierarchy, with exceptions such as Unauthorized, BadRequest, or NotFound.
  5. Client (not to be confused with the HTTP client!) puts the HTTP client and various managers together (using class composition: the HTTP client and the managers are members of the client). As a user, you create a client and can immediately perform any API call (a fuller sketch of all five groups follows this snippet):
    # this is the client
    client = Client(USERNAME, PASSWORD, PROJECT_ID, AUTH_URL)
    # client.flavors is a manager
    all_flavors = client.flavors.list()
    # and all_flavors is a list of resources
    print all_flavors
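
To make the five groups concrete, here is a minimal, hypothetical sketch of how such a library fits together; all class and method names below are illustrative, not the real cloudservers/novaclient code:

class HttpClient(object):
    """Hypothetical stand-in: it would authenticate and issue JSON-over-HTTP requests."""
    def __init__(self, username, password, project_id, auth_url):
        self.credentials = (username, password, project_id, auth_url)

    def get(self, url):
        # a real client would send an authenticated HTTP GET and return parsed JSON
        raise NotImplementedError

class Resource(object):
    """Base resource: its attributes mirror the dict returned by the server."""
    def __init__(self, manager, info):
        self.manager = manager
        for key, value in info.items():
            setattr(self, key, value)

class Manager(object):
    """Base manager: operations on one resource type."""
    resource_class = Resource

    def __init__(self, api):
        self.api = api  # the client that owns the HTTP client

    def _list(self, url):
        return [self.resource_class(self, item) for item in self.api.http_client.get(url)]

class Flavor(Resource):
    pass

class FlavorManager(Manager):
    resource_class = Flavor

    def list(self):
        return self._list("/flavors/detail")

class Client(object):
    """Composition: the HTTP client and the managers are members of the client."""
    def __init__(self, username, password, project_id, auth_url):
        self.http_client = HttpClient(username, password, project_id, auth_url)
        self.flavors = FlavorManager(self)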

The oldest of the currently alive clients, novaclient, was born in January 2011 as a fork of the Rackspace Cloud Servers package. Since that time, the cloudservers library and CLI script have been renamed to novaclient, and they support the ever-growing Nova API, but the two main functions (a Python API and a command-line client) remain unchanged.

About a year later, a new OpenStack client project called keystoneclient was started. It was flesh of novaclient’s flesh, with almost the same architecture except for one small difference: the client was a child class of the HTTP client, thus using inheritance instead of composition. And, of course, keystoneclient has its own managers and resources (tenant, user, etc.).
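
The difference is easy to show in a small hypothetical sketch (reusing the illustrative HttpClient class from the earlier snippet; these are not the real client classes):

# composition (novaclient style): the HTTP client is a member of the client
class ComposedClient(object):
    def __init__(self, http_client):
        self.http_client = http_client

# inheritance (early keystoneclient style): the client *is* an HTTP client
class InheritingClient(HttpClient):
    def list_tenants(self):
        return self.get("/tenants")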

A lot of the code required by the new client package had already been written in novaclient (the base Resource, Manager, and HTTP client classes). But this code was copied into keystoneclient, not imported. On the one hand, it made these packages independent: you don’t have to install novaclient if you want to use keystoneclient. On the other hand, the histories of the duplicated classes diverged, and they gained features available in one package and absent in the other.

glanceclient used the same copying approach with the same benefits and pitfalls. However, quantumclient and swiftclient are completely different and I won’t discuss them here.

So, what do we have now?

  1. The Keystone server provides tokens with a limited time to live, so it’s natural to receive an “Unauthorized” error after a series of successful calls. The Nova and Keystone clients handle this situation correctly: they make one call to obtain a fresh token and repeat the failed query (see the sketch after this list). glanceclient just raises an exception.
  2. The Keystone server supports two ways of authenticating to a tenant: with a user name and password, or with an unscoped token. In response to successful authentication, it returns a scoped token and a catalog of all OpenStack service endpoints (Nova, Glance, Keystone, Swift, etc.). keystoneclient supports both ways, while novaclient handles only authentication with a user name and password. glanceclient is even less helpful: it requires a scoped token and a Glance server endpoint, knows nothing about the clever Keystone service, and leaves the dirty job to you. By the way, glanceclient’s shell uses keystoneclient to issue this initial call to Keystone.
  3. All client constructors use different parameters. For example, the thing that is called password in keystoneclient is api_key in novaclient for historical reasons: it was called apikey (without the underscore!) in cloudservers three years ago.
  4. The clients have not only different constructors but also divergent behavior: keystoneclient authenticates immediately when you create the client object, while novaclient does it lazily during the first API call.
  5. You often want to make calls to several services. A dashboard or a common command-line tool typically requests the tenant list from Keystone and the image list from Glance, and sends a “launch an instance” command to Nova. With the current clients, it’s difficult to share the same token and service endpoint catalog. A simple way would be to share a common HTTP client object, but that’s impossible because of the incompatible architectures of the different client packages.
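
To illustrate the first item, here is a hedged sketch of the retry pattern; the request and authenticate methods are assumptions about the HTTP client interface, not real novaclient code:

class Unauthorized(Exception):
    """What the HTTP client raises on an HTTP 401 response."""

def request_with_reauth(http_client, method, url, **kwargs):
    try:
        return http_client.request(method, url, **kwargs)
    except Unauthorized:
        http_client.authenticate()  # one call to obtain a fresh token
        return http_client.request(method, url, **kwargs)  # repeat the failed query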

To solve these problems, we could move the common code into a separate library imported by all three clients. The common library would contain:

  • the base Resource class;
  • the base Manager class;
  • a rich exception hierarchy;
  • a feature-rich HTTP client that supports all ways of authentication, handles expired-token faults, and saves the whole service catalog returned by Keystone;
  • a base client class that contains an instance of the HTTP client as a member: this way, several clients (e.g., a client for Nova and a client for Keystone) can share the same HTTP client.

I developed a sample implementation of this library and called it python-openstackclient-base. The library is used in Altai Private Cloud, a project of Grid Dynamics. python-openstackclient-base is easy to use:

from openstackclient_base.client import HttpClient
http_client = HttpClient(username="...", password="...", tenant_name="...", auth_uri="...")

# Nova Compute API client
from openstackclient_base.nova.client import ComputeClient
# create a client instance and use its servers manager
print ComputeClient(http_client).servers.list()

# Identity (Keystone) Public API client
from openstackclient_base.keystone.client import IdentityPublicClient
# use the same HTTP client as above
print IdentityPublicClient(http_client).tenants.list()

Now I’m going to submit this library to the oslo-incubator project and use it in all three clients. When oslo-incubator matures, its code will be imported by OpenStack projects directly, as I want; for now, it will just be copied literally into the other projects. However, even that is quite satisfactory, since it achieves all the goals mentioned above.

Altai v1.0.2 is out

Hello everybody!

A new version of Altai Private Cloud for Developers, 1.0.2, is ready to use. In this release, we reviewed and cleaned up third-party packages and made bugfixes, primarily to the user interface.

This release is recommended for everyone as a replacement for 1.0.1. The update procedure is safe and automated – just follow our upgrade guide.

What’s New in Altai 1.0.2 from Maintainer’s Point of View

A new version of Altai Private Cloud for Developers 1.0.2 was released.

The new release is devoted to cleaning up package dependencies. Also, a bunch of bugfixes were made, primarily to the user interface. Let’s see what’s new in Altai 1.0.2 from a maintainer’s point of view.

In previous releases, we had this motto: “Take basic CentOS/RHEL, take our source RPMs, and you will be able to build the whole of Altai and install it.”
Altai RPMs (both source and binary) were grouped into two repositories: “main” and “deps”. “deps” contained packages rebuilt from third-party source RPMs without changes. All other packages went to “main”, including customized third-party software (like nginx with the upload module) and Altai’s own packages such as the Focus web UI.
Since we built both the “main” and “deps” packages ourselves, we signed them with the Grid Dynamics signature.

This model had one pitfall: we had to maintain plenty of well-known packages that were not included in basic CentOS/RHEL, such as RabbitMQ or Erlang. That made our repositories truly tremendous: 500 MiB in total, 100 MiB for “main” and 400 MiB for “deps”! Imagine how wasteful it is to add these tons of unchanged third-party packages to every release. That’s why we tried the following solution in the previous release (1.0.1): chain the repositories, so that almost all unchanged packages are downloaded from the 1.0.0 release and the 1.0.1 repository contains only the packages to upgrade. As was shown in this article, YUM can handle thousands of repositories simultaneously without performance problems. So, the repository chain approach significantly saves space for newer releases, but it leads to some maintenance problems.

For example, imagine that one package should be downgraded in the next release. We can call yum downgrade package-name in the Altai installer, but how can we guarantee that this package will not later be accidentally updated by the user during a yum update?
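
One possible guard (just a sketch, assuming the optional yum-plugin-versionlock package is installed; this is not what the Altai installer does) would be to pin the downgraded package:

yum downgrade package-name
yum versionlock add package-name    # stop 'yum update' from bumping it back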

A more complex problem is that it’s difficult to determine the list of all packages that belong to a given release if they are spread across lots of repositories. Moreover, building a new release repository as the next link in the repo chain is a nontrivial task.

Fortunately, if you decide to use EPEL packages, you can say farewell to all these obstacles. First, the repository becomes significantly smaller simply because you no longer have to rebuild heaps of packages: now we have only 160 MiB of binary packages. Second, with a small repository you don’t need a cunning repository chain – everything becomes transparent and easy to support.

It’s worth saying that using EPEL packages isn’t as simple as it seems. Some important Python libraries are installed in such locations that you would have to patch your programs or they wouldn’t find their dependencies. We decided to reject those libraries and package them ourselves. Luckily, most of the EPEL packages could be used in Altai without complications.

Having reviewed all the Altai packages, we chose a new repository layout. Let’s briefly describe it; a hypothetical yum configuration reflecting the layout is sketched after the list.

  • centos6: these packages are maintained and developed by the Grid Dynamics team. This group contains customized OpenStack and a lot of home-grown packages signed by the Altai team. The sources of these packages are available on GitHub.
  • deps: these packages are not a subject of Grid Dynamics development. This category includes the following subdirectories.
    1. centos6-updates – necessary update packages for CentOS 6, signed by CentOS.
    2. epel – necessary packages from the EPEL repositories, signed by EPEL.
    3. misc – packages built and signed by the Altai team.
    4. misc-srpms – source RPMs for misc, also signed by the Altai team.
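
For illustration only, a client-side yum configuration for this layout might look like the snippet below; the repository IDs, names, and base URLs are invented, not the real Altai addresses:

[altai-centos6]
name=Altai - Grid Dynamics packages
baseurl=http://repos.example.com/altai/1.0.2/centos6/
gpgcheck=1

[altai-deps-epel]
name=Altai - dependencies from EPEL
baseurl=http://repos.example.com/altai/1.0.2/deps/epel/
gpgcheck=1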

As you can see, we still provide the sources of all the packages we’ve built, as is appropriate for an open source project.

As mentioned above, we keep the Altai sources in git. There are two steps between a git repository and a binary RPM. First, a source RPM is built from the git repo. Second, a binary RPM is built from the source one.

Each step is a nontrivial operation. A source RPM must contain all the information required to build the package, including the source tarball, the spec file, and possibly patches to apply to the unpacked tarball before the build. The ALT Linux team even developed a powerful toolkit called GEAR (Get Every Archive from git package Repository). GEAR contains tens of individual CLI programs for different purposes, including composing a source RPM from a git repository and importing a tarball into git. We used GEAR in previous releases, but the only feature we needed was the git-to-source-RPM conversion. Moreover, almost every conversion was trivial, because we develop our software keeping in mind that it will be packaged into RPMs. GEAR, in its turn, makes it possible to maintain third-party software that is under active or slow development and needs to be patched before packaging.

Naturally, the multifunctional GEAR led to boilerplate configuration files. That’s why we simplified the git-to-source-RPM conversion: in our case it can be done with a small and clear script. And there is no need to write a GEAR rules file – it’s sufficient to just place a spec file in the git repository.
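
The whole step boils down to something like the sketch below; the package name, version, and paths are invented for illustration:

# export the tree as the source tarball that rpmbuild expects
git archive --format=tar --prefix=focus-1.0.2/ HEAD | gzip > ~/rpmbuild/SOURCES/focus-1.0.2.tar.gz
# build the source RPM from the spec file stored in the git repository
rpmbuild -bs focus.spec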

Frankly speaking, the second step (source-to-binary RPM conversion) is trickier than the first, but fortunately there is a ready solution: the mock tool used in Fedora and EPEL. mock prepares a clean and safe chroot environment for the build. We already used mock for previous releases, and we haven’t ceased to take advantage of it.
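
A typical invocation (the chroot configuration and SRPM name here are illustrative) looks like this:

# rebuild the source RPM in a clean CentOS 6 + EPEL chroot
mock -r epel-6-x86_64 --rebuild ~/rpmbuild/SRPMS/focus-1.0.2-1.el6.src.rpm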

So, Altai 1.0.2 is easier to develop, maintain, and support, and at the same time more foolproof.

Out of the box and into the cloud – Altai Private Cloud For DevOps takes you there

To our blog community

We are very excited to announce the launch of the Altai Private Cloud For DevOps solution.

Altai combines the best technologies from the OpenStack community with extensions developed by Grid Dynamics—enhancing its stability and scalability, optimizing the developer experience, and shortening time-to-market for new applications.

New key features include:

  • installation in under 15 minutes
  • VLAN-based mode for network isolation between projects
  • self-service dashboard for administrators, development managers, and engineers
  • built-in accounting and billing system, with reports available via dashboard
  • built-in DNS based on PowerDNS
  • a new method of handling project-private and global images, with global images managed by administrators

At Grid Dynamics we are committed to building optimized solutions for eCommerce businesses — solutions that are not just geared toward production deployments, but are also ideal for development and QA operations. We are proud to offer faster, better solutions to match and resolve our customers’ toughest challenges.

Grid Dynamics has been using OpenStack internally since the Diablo release, adding new extensions and building new services around them, such as our accounting system, which we have already open-sourced.

Now, we would like to bring you our latest offering — the Altai Private Cloud For Developers.

Try the new cloud for yourself! Visit us at

http://www.griddynamics.com/solutions/altai-private-cloud-for-developers/

and learn more about this exciting new cloud distribution.