Category Archives: Development

Development Posts

Preconditions for Common OpenStack Client Library

OpenStack client packages have a long history. It began in November 2009, when the Rackspace Cloud Servers package was started. It provided a Python API (the cloudservers module) and a command-line script (cloudservers). Initially, the script was just a stub, but it became a useful CLI utility able to launch, stop, and resize virtual machines.

The cloudservers package introduced a library architecture that is still in use today. All its entities can be split into five groups (a minimal sketch of the underlying Resource/Manager pattern follows the list).

  1. Resources, e.g., a flavor, a server, or an image. Technically, a resource is a Python object whose class is a descendant of the Resource class.
  2. Managers provide operations on resources, for example, “list all flavors” or “delete an image”. So we have a flavor manager, an image manager, and so on. As you may guess, manager classes are descendants of the Manager class.
  3. The HTTP client provides a convenient interface for managers to send HTTP requests to the server. The HTTP client is also responsible for the authentication process, which changed a lot after the Keystone service was introduced, so the newer HTTP clients are a bit more complicated.
  4. Exceptions are normal Python exceptions raised by the HTTP client for HTTP error codes. This is a more or less rich hierarchy with exceptions such as Unauthorized, BadRequest, or NotFound.
  5. The Client (not to be confused with the HTTP client!) puts the HTTP client and various managers together (using class composition: the HTTP client and managers are members of a client). As a user, you create a client and can immediately perform any API calls:
    # this is the client
    client = Client(USERNAME, PASSWORD, PROJECT_ID, AUTH_URL)
    # client.flavors is a manager
    all_flavors = client.flavors.list()
    # and all_flavors is a list of resources
    print all_flavors                                         
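
For illustration, here is a minimal sketch of the Resource/Manager pattern described above; the class bodies, the http_client attribute, and the /flavors/detail path are simplified assumptions, not the actual novaclient code.

    # A simplified sketch of the Resource/Manager pattern (illustrative only).

    class Resource(object):
        """A server-side entity; its attributes come from the API response."""

        def __init__(self, manager, info):
            self.manager = manager
            self._info = info
            for key, value in info.items():
                setattr(self, key, value)


    class Manager(object):
        """Performs operations on one resource type via the HTTP client."""

        resource_class = Resource

        def __init__(self, api):
            self.api = api  # the client that owns the shared HTTP client

        def _list(self, url, response_key):
            # Assumes the HTTP client returns the parsed JSON body.
            body = self.api.http_client.get(url)
            return [self.resource_class(self, item) for item in body[response_key]]


    class Flavor(Resource):
        pass


    class FlavorManager(Manager):
        resource_class = Flavor

        def list(self):
            return self._list("/flavors/detail", "flavors")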
    

The oldest of the currently alive clients (novaclient) was born in January 2011 as a fork of the Rackspace Cloud Servers package. Since then, the cloudservers library and CLI script have been renamed to novaclient. They support the Nova API, which has kept growing all the time, but the two main functions (a Python API and a command-line client) remain unchanged.

About a year later, a new OpenStack client project called keystoneclient was started. It was flesh of novaclient’s flesh, with almost the same architecture and one small difference: the client was a child class of the HTTP client, thus using inheritance rather than composition. And, of course, keystoneclient has its own managers and resources (tenant, user, etc.).

A lot of the code required in the new client package was already written in novaclient (the base Resource, Manager, and HTTP client). But this code was copied into keystoneclient, not imported. On the one hand, this made the packages independent: you don’t have to install novaclient if you would like to use keystoneclient. On the other hand, the duplicated classes diverged and gained features available in one package and absent in the other.

glanceclient took the same copying approach with the same benefits and pitfalls. However, quantumclient and swiftclient are completely different, and I won’t discuss them here.

So, what do we have now?

  1. The Keystone server provides tokens with a limited time to live, so it’s natural to get an “Unauthorized” error after a series of successful calls. The Nova and Keystone clients handle this situation correctly: they make one call to obtain a fresh token and retry the failed request (a sketch of this retry follows the list). glanceclient just raises an exception.
  2. The Keystone server supports two ways of authenticating to a tenant: with a user name and password, or with an unscoped token. As a response to successful authentication, it returns a scoped token and a catalog of all OpenStack service endpoints (nova, glance, keystone, swift, etc.). keystoneclient supports both ways of authentication, while novaclient handles only authentication with a user name and password. glanceclient is even less capable: it requires a scoped token and a Glance server endpoint. It knows nothing about the clever Keystone service, and you have to do the dirty job yourself. By the way, glanceclient’s shell uses keystoneclient to issue this initial call to Keystone.
  3. All client constructors use different parameters. For example, the thing that is called password in keystoneclient is an api_key in novaclient for historical reasons: it was called apikey (without an underscore!) in cloudservers three years ago.
  4. The clients have not only different constructors but also different behavior: keystoneclient authenticates immediately when you create the client object, while novaclient does it lazily during the first API call.
  5. Often you would like to make calls to different services. A dashboard or a common command-line tool usually requests the tenant list from Keystone, the image list from Glance, and sends a “launch an instance” command to Nova. With the current clients, it’s difficult to share the same token and service endpoint catalog. A simple way would be to use a common HTTP client object, but that’s impossible because of the incompatible architectures of the different client packages.
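
For illustration, the retry in point 1 amounts to roughly the following; the request() and authenticate() method names are assumptions made for this sketch, not the actual client code.

    class Unauthorized(Exception):
        """Raised by the HTTP client for an HTTP 401 response."""


    def call_api(http_client, method, url, **kwargs):
        """Perform a request, refreshing an expired token exactly once."""
        try:
            return http_client.request(method, url, **kwargs)
        except Unauthorized:
            # The token has probably expired: obtain a fresh one and
            # retry the failed request, as novaclient and keystoneclient do.
            http_client.authenticate()
            return http_client.request(method, url, **kwargs)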

To solve these problems, we could move the common code to a separate library that would be imported by all three clients. The common library would contain:

  • the base Resource class;
  • the base Manager class;
  • a rich exception hierarchy;
  • a feature-rich HTTP client that supports all ways of authentication, handles expired-token faults, and saves the whole service catalog returned by Keystone;
  • a base client class that contains an instance of the HTTP client as a member: this way, several clients (e.g., a client for Nova and a client for Keystone) can share the same HTTP client (a structural sketch follows this list).
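
A structural sketch of such a base client, with hypothetical stub managers (this is not the actual python-openstackclient-base code):

    class BaseClient(object):
        """Sketch: a service client owns the shared HTTP client by composition."""

        def __init__(self, http_client):
            # The token and the service catalog live in http_client, so every
            # client built around the same instance reuses the same session.
            self.http_client = http_client


    class Manager(object):
        """Stub manager; a real one issues requests via api.http_client."""

        def __init__(self, api):
            self.api = api


    class ComputeClient(BaseClient):
        def __init__(self, http_client):
            super(ComputeClient, self).__init__(http_client)
            self.servers = Manager(self)


    class IdentityPublicClient(BaseClient):
        def __init__(self, http_client):
            super(IdentityPublicClient, self).__init__(http_client)
            self.tenants = Manager(self)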

I developed a sample implementation of this library and called it python-openstackclient-base. The library is used in Altai Private Cloud, a project of Grid Dynamics. python-openstackclient-base is easy to use:

from openstackclient_base.client import HttpClient
http_client = HttpClient(username="...", password="...", tenant_name="...", auth_uri="...")

# Nova Compute API client
from openstackclient_base.nova.client import ComputeClient
# create a client class and use servers manager
print ComputeClient(http_client).servers.list()   

# Identity (Keystone) Public API client
from openstackclient_base.keystone.client import IdentityPublicClient 
# use the same HTTP client as above
print IdentityPublicClient(http_client).tenants.list()

Now I’m going to contribute this library to the oslo-incubator project and use it in all three clients. When oslo-incubator is mature, it will be imported by OpenStack projects as I want; for now, its code will just be copied literally into the other projects. However, that is also quite satisfactory, since it achieves all the goals mentioned above.


fping Support in OpenStack

OpenStack is very good at launching virtual machines – that’s its purpose, isn’t it? But usually you want to monitor the state of your machines somehow, and there are several reasonable ways to do so.

  1. You can test the daemons running on the machine, e.g., check open ports or poll known services. Of course, this approach means that you know exactly what services should be running – and this is the most precise way to test system health.
  2. You can ask the hypervisor if the machine is OK. That’s a very rough check, since the hypervisor will likely report that the VM is active even while its operating system kernel is having problems.
  3. A compromise may be pinging the machine. It’s a general solution, since most VMs respond to ping normally. Sure, a VM can ignore ping, or its daemons can have problems while the host still responds to ping, but this solution is far easier to implement than checking each machine according to an individual plan.

Let’s concentrate on the last two approaches. I would like to launch a machine and check it.

[root@a001 ~]# nova image-list
+--------------------------------------+--------------+--------+--------+
| ID                                   | Name         | Status | Server |
+--------------------------------------+--------------+--------+--------+
| 960dc70a-3e0e-496a-b8da-0e9cd91d3a44 | selenium-img | ACTIVE |        |
+--------------------------------------+--------------+--------+--------+
[root@a001 ~]# nova boot --flavor m1.small --image 960dc70a-3e0e-496a-b8da-0e9cd91d3a44 selenium-0
...
[root@a001 ~]# nova list
+--------------------------------------+-------------------+--------+-------------------------+
| ID                                   | Name              | Status | Networks                |
+--------------------------------------+-------------------+--------+-------------------------+
| a9060a07-d32a-4dcf-8387-1c7d69f897dc | selenium-0        | ACTIVE | selenium-net=10.109.0.4 |
+--------------------------------------+-------------------+--------+-------------------------+
[root@a001 ~]# fping 10.109.0.4
10.109.0.4 is unreachable

As you can see, the VM status is reported as active, but the machine has not really booted. Even worse, consider a damaged image (I use a text file for this purpose):

[root@a001 ~]# glance index 
ID                                   Name                           Disk Format          Container Format     Size          
------------------------------------ ------------------------------ -------------------- -------------------- --------------
7d8007fe-a63c-4d02-8edf-a6cc19fa1d73 text                           qcow2                ovf                           17043
[root@a001 ~]# nova boot --flavor m1.small --image 7d8007fe-a63c-4d02-8edf-a6cc19fa1d73 text-0
[root@a001 ~]# nova list
+--------------------------------------+-------------------+--------+-------------------------+
| ID                                   | Name              | Status | Networks                |
+--------------------------------------+-------------------+--------+-------------------------+
| a9060a07-d32a-4dcf-8387-1c7d69f897dc | selenium-0        | ACTIVE | selenium-net=10.109.0.4 |
| 461e73e4-7f88-4c8f-bb1f-49df9ec18d84 | text-0            | ACTIVE | selenium-net=10.109.0.5 |
+--------------------------------------+-------------------+--------+-------------------------+

Nova bravely reports that the new instance is active, but it obviously is not functioning: a text file is not a disk image with an operating system. And fping reveals that the VM is ill:

[root@a001 ~]# fping 10.109.0.5
10.109.0.5 is unreachable

We can extend the Nova API by adding this fping feature: Nova will run fping for the requested instances and report which ones seem to be truly alive. I have developed this extension, and it was accepted into Grizzly on November 16, 2012 (https://github.com/openstack/nova/commit/a220aa15b056914df1b9debc95322d01a0e408e8).

The fping API is simple and straightforward: we can check either all instances or a single one. In fact, we have two API calls.

  1. GET /os-fping/<uuid> – check a single instance.
  2. GET /os-fping?[all_tenants=1]&[include=uuid[,uuid...]][&exclude=...] – check all VMs in the current project. If all_tenants is requested, data for all projects is returned (by default, this option is allowed only to admins). include and exclude are parameters specifying VM masks; they are mutually exclusive, and exclude is ignored if include is specified. If the list of all VMs is VM_all and an include list is given, only VM_all ∩ VM_to_include (set intersection) will be tested – thus we can check several instances in a single API call. If an exclude list is provided, VM_all − VM_to_exclude (set difference) will be polled – thus we can skip testing instances that are not supposed to respond to ping (a small set-based illustration follows the list).
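
The include/exclude filtering boils down to plain set operations; here is a small illustration in Python with made-up UUIDs.

    # Illustration of the os-fping include/exclude semantics with Python sets.
    vm_all = set(["uuid-a", "uuid-b", "uuid-c", "uuid-d"])  # all VMs visible to the caller

    include = set(["uuid-a", "uuid-c"])
    print vm_all & include   # intersection: only uuid-a and uuid-c are polled

    exclude = set(["uuid-d"])
    print vm_all - exclude   # difference: every VM except uuid-d is polled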

fping increases the I/O load on the nova-api node, so, by default, the fping API is limited to 12 calls per hour (regardless of whether a single instance or several instances are polled).

I have added nova fping support to python-novaclient (https://github.com/openstack/python-novaclient/commit/ff69e4d3830f463afa48ca432600224f29a2c238), making it easy to write a daemon in Python that periodically checks instance states and sends notifications about detected problems. This daemon is available in Grid Dynamics Altai Private Cloud For Developers and is called instance-notifier (https://github.com/altai/instance-notifier). The daemon is installed and configured by the Altai installer automatically. Although Altai 1.0.2 runs Essex, not Grizzly, I have added nova-fping as an additional extension package.
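
A minimal sketch of such a periodic checker, written against the fping binding added to python-novaclient; the credentials are placeholders, and a real daemon would send a proper notification instead of printing.

    import time

    from novaclient.v1_1 import Client


    def check_instances(client):
        """Report instances that do not respond to fping."""
        for status in client.fping.list():
            if not status.alive:
                # A real daemon would send an e-mail or an AMQP message here.
                print ("instance %s in project %s does not respond to ping"
                       % (status.id, status.project_id))


    if __name__ == "__main__":
        cl = Client("admin", "topsecret", "systenant",
                    "http://localhost:5000/v2.0")
        while True:
            check_instances(cl)
            time.sleep(300)  # the API is rate-limited, so poll sparingly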

Let’s see how to use fping from the client side. We have three instances: selenium-0 (shut off), selenium-1 (up and running), and text-0 (invalid image). Nova reports that they are all active:

[root@a001 /]# nova list
+--------------------------------------+-------------------+--------+-------------------------+
| ID                                   | Name              | Status | Networks                |
+--------------------------------------+-------------------+--------+-------------------------+
| a9060a07-d32a-4dcf-8387-1c7d69f897dc | selenium-0        | ACTIVE | selenium-net=10.109.0.4 |
| 20325b87-6858-49df-ab30-795a189dd2ac | selenium-1        | ACTIVE | selenium-net=10.109.0.3 |
| 461e73e4-7f88-4c8f-bb1f-49df9ec18d84 | text-0            | ACTIVE | selenium-net=10.109.0.5 |
+--------------------------------------+-------------------+--------+-------------------------+

Check them with nova fping!

[root@a001 /]# python
Python 2.6.6 (r266:84292, Jun 18 2012, 14:18:47) 
[GCC 4.4.6 20110731 (Red Hat 4.4.6-3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from novaclient.v1_1 import Client
>>> cl = Client("admin", "topsecret", "systenant", "http://localhost:5000/v2.0")
>>> ping_list = cl.fping.list()
>>> ping_list
[<Fping: 461e73e4-7f88-4c8f-bb1f-49df9ec18d84>, <Fping: a9060a07-d32a-4dcf-8387-1c7d69f897dc>, <Fping: 20325b87-6858-49df-ab30-795a189dd2ac>]
>>> import json
>>> print json.dumps([p._info for p in ping_list], indent=4)
[
    {
        "project_id": "4fd17bd4ac834dcf8ba1236368f79986", 
        "id": "461e73e4-7f88-4c8f-bb1f-49df9ec18d84", 
        "alive": false
    }, 
    {
        "project_id": "4fd17bd4ac834dcf8ba1236368f79986", 
        "id": "a9060a07-d32a-4dcf-8387-1c7d69f897dc", 
        "alive": false
    }, 
    {
        "project_id": "4fd17bd4ac834dcf8ba1236368f79986", 
        "id": "20325b87-6858-49df-ab30-795a189dd2ac", 
        "alive": true
    }
]

As expected, nova fping reported that only selenium-1 (id=20325b87-6858-49df-ab30-795a189dd2ac) is really alive.

So, fping in Nova is a fast and quite reliable way to check instance health. Like a phonendoscope, it cannot provide complete information, but if a patient isn’t breathing, he’s likely to be dead.


What’s New in Altai 1.0.2 from Maintainer’s Point of View

A new version of Altai Private Cloud for Developers, 1.0.2, has been released.

The new release is devoted to cleaning up package dependencies. Also, a bunch of bugfixes was made, primarily to the user interface. Let’s see what’s new in Altai 1.0.2 from a maintainer’s point of view.

In previous releases, we had this motto: “Take basic CentOS/RHEL, take our source RPMs, and you will be able to build the whole of Altai and install it.”
Altai RPMs (both source and binary) were grouped into two repositories: “main” and “deps”. “deps” contained packages rebuilt from their third-party source RPMs without changes. All other packages went to “main”, including customized third-party software (like nginx with the upload module) and Altai’s own packages like the Focus web UI.
Since we built both “main” and “deps” packages, we signed them with the Grid Dynamics signature.

This model had one pitfall: we had to maintain plenty of well-known packages that are not included in basic CentOS/RHEL, such as RabbitMQ or Erlang. That made our repositories really huge: 500 MiB in total, 100 MiB for “main” and 400 MiB for “deps”! Imagine how wasteful it is to add these tons of unchanged third-party packages to every release. That’s why we tried the following solution in the previous release (1.0.1): include a chain of repositories, so that almost all unchanged packages are downloaded from the 1.0.0 release and the 1.0.1 repository contains only packages to upgrade. As was shown in this article, YUM can handle thousands of repositories simultaneously without performance problems. So, the repository chain approach significantly saves space for newer releases, but it leads to some maintenance problems.

For example, imagine that one package needs to be downgraded in the next release. We can call yum downgrade package-name in the Altai installer, but how can we guarantee that this package will not be accidentally updated later by the user during a yum update?

A more complex problem is that it’s difficult to determine the list of all packages that belong to a given release if they are spread between lots of repositories. Moreover, building a new release repository as the next link in the repo chain is a nontrivial task.

Fortunately, if you decide to use EPEL packages, you can say farewell to all these obstacles. First, the repository becomes significantly smaller simply because you no longer have to rebuild heaps of packages; now we have only 160 MiB of binary packages. Second, with a small repository you don’t need a cunning repository chain – everything becomes transparent and easy to support.

It’s worth saying that using EPEL packages isn’t as simple as it seems. Some important Python libraries are installed in such places that you would have to patch your programs, or they wouldn’t find their dependencies. We decided to reject these libraries and package them ourselves. Luckily, most of the EPEL packages could be used in Altai without complications.

While reviewing all the Altai packages, we chose another repository layout. Let’s briefly describe it.

  • centos6: these packages are maintained and developed by the Grid Dynamics team. This group contains customized OpenStack and a lot of home-grown packages signed by the Altai team. The sources of these packages are available on GitHub.
  • deps: these packages are not developed by Grid Dynamics. This category includes the following subdirectories.
    1. centos6-updates – necessary update packages for CentOS 6, signed by CentOS.
    2. epel – necessary packages from the EPEL repositories, signed by EPEL.
    3. misc – packages built and signed by the Altai team.
    4. misc-srpms – source RPMs for misc, signed by the Altai team.

As you can see, we still provide the sources of all packages we’ve built, as is appropriate for an open source project.

As mentioned above, we keep the Altai sources in git. There are two steps between a git repository and a binary RPM. First, a source RPM is built from the git repo. Second, a binary RPM is built from the source one.

Each step is a non-trivial operation. A source RPM must contain all the information required to build the package, including the source tarball, the spec file, and possibly patches that should be applied to the unpacked tarball before the build. The ALT Linux team even developed a powerful toolkit called GEAR (Get Every Archive from git package Repository). GEAR contains dozens of individual CLI programs for different purposes, including composing a source RPM from a git repository and importing a tarball into git. We used GEAR in previous releases, but the only feature we needed was git-to-source-RPM conversion. Moreover, almost every conversion was trivial, because we develop our software keeping in mind that it will be packaged as RPMs. GEAR, in its turn, makes it possible to maintain third-party software that is under active or slow development and needs to be patched before packaging.

Obviously, the multifunctional GEAR led to boilerplate configuration files. That’s why we simplified the git-to-source-RPM conversion: in our case it can be done with a small and clear script. And there is no need to write a GEAR rules file: it’s sufficient to just place a spec file in the git repository.

Frankly speaking, the second step (source-to-binary RPM conversion) is trickier than the first, but, fortunately, there is a ready solution – the mock tool used in Fedora and EPEL. mock prepares a clean and safe chroot environment for the build operation. We already used mock for previous releases, and we continue to take advantage of it.

So, Altai 1.0.2 is easier to develop, maintain, and support, and at the same time more foolproof.

Hungry Process Breaks Your “while read” bash Cycle


I am working on a build system that makes it easy to control several connected git repositories forming one project. This system is written in bash and uses lots of rarely used git and bash features.

I often have to iterate over a table generated by git. For example, to see the changes between a commit and its parent, I run:

$ git diff-tree --no-commit-id -r 9b8b0f6150790d2a757cd2091ef91d3ebe9ce317 -- repos
:160000 160000 236fc8025f106375944457007f5a7a803297e683 f5ede37ddbf9eccd55012f1ddda3ae37259ca800 M	repos/altai/altai-installer
:160000 160000 2706a907bf2d136dd1f737e6c6cb4ca8e420329c 10a1bf5d8f716f30af089f1558eefbdeb07f9b3b M	repos/altai/nova-networks-ext
:160000 160000 7aaafb9f29b60ef0a4cf938b653de23354308be2 ad3725e92b08ca40cf65fb9ed604ae3285fee271 M	repos/altai/python-openstackclient-base
:160000 160000 748da9c4c1d058f96dd40ba328fd100719f768f7 eb568c5ffb4543b676208c96de7af2c62e455329 M	repos/openstack/glance

This output is easily parsed with bash’s while read:

$ git diff-tree --no-commit-id -r 9b8b0f6150790d2a757cd2091ef91d3ebe9ce317 -- repos | while read mode1 mode2 hash1 hash2 ignored path; do if [ "$mode2" == 160000 ]; then echo $path; fi; done
repos/altai/altai-installer
repos/altai/nova-networks-ext
repos/altai/python-openstackclient-base
repos/openstack/glance

The read command gets a line from stdin and sets the variables one by one. We…


OpenStack EPEL: the Dependency Purgatory


When you develop a software system, you can choose any solution between two extreme approaches.

  1. Build and maintain all your dependencies.
  2. Rely on external repositories and build only your specific packages.

Having chosen the first approach, you can be sure that your users will use your great, tuned packages of carefully chosen versions that are doomed to work properly. And when you see a problem in a dependency, you can freely patch it and… congratulations, now you are the happy maintainer of a zoo of numerous packages containing software written in several languages!

The second way is clear: you build a dozen of your own packages and publish a relatively small repository. And when your third-party dependencies become unavailable, it’s the user’s problem.

Developing Altai, we started from the first solution: a user installs basic RHEL/CentOS and just adds our repository. Nowadays, we are moving to the second…


Billing plugin for Horizon

Now nova-billing has a Django-based web interface – horizon-billing (https://github.com/griddynamics/horizon-billing).

horizon-billing is packaged as an RPM of the same name (repository for RHEL: http://yum.griddynamics.net/yum/diablo/, for CentOS: http://yum.griddynamics.net/yum/diablo-centos/).

To enable it in the dashboard, install the horizon_billing package and turn it on in /etc/openstack-dashboard/local/local_settings.py (a sketch of the corresponding lines follows the list):

  • add ‘horizon_billing’ to the INSTALLED_APPS tuple;
  • add ‘billing’ to the ‘dashboards’ key in HORIZON_CONFIG.
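
The corresponding lines could look roughly like this (a sketch that assumes INSTALLED_APPS and HORIZON_CONFIG are already defined earlier in the file):

    # Fragment of /etc/openstack-dashboard/local/local_settings.py; this sketch
    # assumes INSTALLED_APPS and HORIZON_CONFIG are already defined above.

    INSTALLED_APPS += ('horizon_billing',)

    HORIZON_CONFIG['dashboards'] += ('billing',)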

After installation, a new “Billing” panel is added after “Project” and “Admin”.

Grid Dynamics’ GitHub repos are renamed

In order to unify our repo names with the openstack GitHub organization, the “osc-robot-” prefix has been removed.

So, the following repos have been renamed:

  • osc-robot-keystone;
  • osc-robot-nova;
  • osc-robot-openstackx;
  • osc-robot-glance;
  • osc-robot-swift;
  • osc-robot-openstack-compute;
  • osc-robot-noVNC;
  • osc-robot-python-novaclient.

Also, we have added a python-keystoneclient repo (forked from openstack/python-keystoneclient) that contains patches for Red Hat packaging.

Testing framework

Today we would like to share one of our recent subprojects with you: our testing framework. We used to automate our tests for OpenStack with Lettuce. However, we found that we needed more features to build a robust framework with it, so we created the Bunch tool. The main reason for Bunch was the desire to write more flexible and powerful tests with Lettuce. The key points to improve were:

  • Avoid hardcoded values in test scenarios. Lettuce offers only one way to write data-driven scenarios: scenario outlines. This means that all data is stored in the script itself, and external data sources can only be supported by your own test code. We fill that gap by introducing Jinja2 templates and YAML configuration files (see the sketch after this list). Feature templates stay readable but gain flexibility. The regular Lettuce scripts are generated on every test execution, thus preserving the BDD style. Stories remain comprehensible to an end user.
  • Write test fixtures explicitly. It should be possible to write setup and teardown BDD stories. It is also important to have behavior specifications for installation procedures.
  • Share and re-use test fixtures, and specify that a test depends on specific fixtures. Tests should be self-sufficient and should not rely on the state created by other tests. However, it is often a huge overhead when each test performs its own setup, which may always be the same. It is necessary to provide a concept of “dependencies” between tests and test fixtures.
  • Tests are environment-agnostic, but test fixtures are environment-dependent. We enable writing environment-agnostic tests by moving all environment-specific actions into test fixtures. Thus, test fixtures may have multiple versions aimed at different environments.
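
As an illustration of the first point, rendering a Lettuce feature from a Jinja2 template and a YAML config could look like the following sketch; the file names and keys are made up, and this is not Bunch’s actual implementation.

    import jinja2
    import yaml

    # config.yaml might contain, for example:
    #   image_name: selenium-img
    #   flavor: m1.small
    with open("config.yaml") as f:
        config = yaml.safe_load(f)

    # boot_instance.feature.jinja2 is a feature template where hardcoded
    # values are replaced with placeholders, e.g.:
    #   When I boot an instance from "{{ image_name }}" with flavor "{{ flavor }}"
    with open("boot_instance.feature.jinja2") as f:
        template = jinja2.Template(f.read())

    # The rendered result is a regular Lettuce .feature file.
    with open("boot_instance.feature", "w") as f:
        f.write(template.render(**config))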

So we gathered these requirements and planned the following features for implementation:

  1. Support for scenario templates. Scenarios become parameterizable and do not lose the BDD spirit. Foreign Lettuce scenarios can be adopted by replacing hardcoded values with template variables. (YAML and Jinja2 were used for that.)
  2. Explicit separation of test fixtures from tests (setup, teardown, and test scripts). A test scenario should not contain any actions that are specific to a platform/configuration and should only perform actions on the product under test.
  3. Dependency specification for setup/teardown fixtures and sharing fixtures among tests. Most tests rely on the same state, while others may have specific prerequisites.
  4. Parallel test execution. This is often required for long-running end-to-end scenarios. That feature introduces a requirement: tests that are planned to be executed in parallel should be independent of each other and should not use the same resources.

Points 1-3 are implemented now, which should be enough to start using the tool. However, there is plenty of stuff planned on the roadmap.

We have also shared our OpenStack test suite, which runs within our CI workflow, so you can have a look at how Bunch works. Links to the tool, docs, and tests are below:

Just install Bunch, check out the tests, adjust config.yaml, and execute:
bunch -e clean openstack-core-test/smoketests/basic/ <resultdir>

Both Bunch and the tests are under active development, so there are not many tests in the repo yet. This is just the beginning. Stay tuned.

Grid Dynamics OpenStack development activities

Hello everybody.

We want to share our roadmap with the community. In the near future, Grid Dynamics is going to focus on developing new components and services. All services will be open-sourced.

Here is what we are going to present soon:

  • We are now developing a resource accounting system for OpenStack (nova-billing).
  • We are developing a DNS service, so every VM in a cloud will be resolvable by name, and tenant administrators will own a subdomain and be able to add custom DNS records or subdomains.
  • We have created a separate set of utilities called nova2ools and are going to present them soon. They offer utilities to work with Nova as well as with our services like billing or DNS.
  • We are going to start working on hardware provisioning for OpenStack. Sometimes it is very important to be able to work on real hardware, and the ability to get hardware in an OpenStack way through a REST API should be very useful.

To reflect these activities, we have created a site with documentation and additional information: http://www.griddynamics.com/openstack.

How we build packages in Grid Dynamics (using Gear + Mock)

As we stated before, we are building OpenStack RPM packages directly from Git repositories. All bug fixes and patches specific to our Red Hat OpenStack distribution are committed into the corresponding Git repos. The build process is fully automated: you just need to set up a build system like we did.

Some details about our build system. All packages are built in a chroot populated by the Mock tool. It allows us to build packages in an isolated environment with a minimal set of packages installed and only the right repositories set up for dependency resolution. For RHEL, that is only the repo from the distribution ISO and the Grid Dynamics repo itself, so to bootstrap we had to build many dependencies from source.

That is not the whole story. Having ready RPM specs and sources was not enough for good automated builds. It is also necessary to have a uniform mechanism for gathering everything into a tarball: sources, patches, specs, scripts. We chose Gear as the tool for this purpose. It automates source code checkouts from Git and packs everything according to “gear-rules”, which should be added to the Git repo. We also improved Gear by adding integration with Mock, so we can produce chroot-built SRPMs directly from Git. So, basically, the commands to build a Gear repo are the following:

  1. In the Git repo dir, execute: gear-mock --root=<mock-chroot-config>
    (gear-mock is a command introduced in our fork of Gear)
  2. Take the SRPM and build it via Mock: mock --rebuild <mock-chroot-config> <SRPM>

Summarizing the benefits of using Mock and Gear:

  1. It is possible to build for different platforms (e.g., RedHat, CentOS, SL, Fedora).
  2. Built packages are independent of the current host configuration.
  3. Convenient source code updates (code is stored in Git, not tarballs).
  4. No more manual patch creation (Gear is able to generate .patch files from commits and put them into the SRPM automatically).
  5. A uniform way to manage projects’ source code.

RPM packages for Mock and Gear are also available in our YUM repository. You can try it yourself.