Posted by: John Bresnahan | May 5, 2016

Terraform Resource

Terraform is a powerful multi-cloud tool for laying out and managing infrastructure in a cloud.  Not only does it have many built-in resources, but it also has a framework for creating new ones.  This post will explain how that is done by taking an extremely simplified look at the load balancer resource for Azure ARM.


Terraform resources roughly map to services offered by infrastructure clouds.  For example the following are resources:

  1. virtual machine
  2. load balancer
  3. subnet
  4. public IP address
  5. firewall

Resources are written in Go, so readers should have a general understanding of it.  Those who do not know Go should not be deterred, though: prior to learning how to make terraform resources I did not know Go either, so there is no sophisticated use of the language here.  However, an understanding of what terraform does and how to create configuration files for it will be needed to follow this post.

Creating a Resource

These are the basic steps we will follow to create a resource in this post:

  1. Define the Configuration File
  2. Map the configuration to a Go map[string]*Schema
  3. Stub CRUD functions
  4. Tie CRUD functions and schema together into a structure and register the structure with terraform
  5. Implement the logic in the CRUD functions

Define the Configuration File

The first thing to do is to decide what the schema should look like for your new resource.  In our example we want to create a resource that will have the following attributes:

  1. id: The handle that ARM assigns to this load balancer.
  2. name: The name that a user assigns to the load balancer.
  3. type: The resource type.  This is just a string that is defined by Azure, don’t get tripped up here.
  4. resource_group_name: The name of the resource group which will contain this load balancer.
  5. location: The name of the cloud region where this load balancer will run.

The terraform configuration for this will look like this:

resource "azurerm_load_balancer" "blog" {
    name = "bloglb"
    location = "West US"
    resource_group_name = "${}"
    type = "Microsoft.Network/loadBalancers"
}

Note that id is not set.  This is because it is an exported attribute that is computed by the provider and intended to be consumed by other resources which will interact with the load balancer.

Represent Configuration In Data Structures

We must now write Go code that will represent the terraform configuration.  This is done by creating a map[string]*schema.Schema in which each key matches a key in the configuration file and each value defines that attribute's type and behavior.  Here is what it looks like for our load balancer:

       "id": &schema.Schema{
              Type:     schema.TypeString,
              Computed: true,
       },

       "name": &schema.Schema{
              Type:     schema.TypeString,
              Required: true,
              ForceNew: true,
       },

       "type": &schema.Schema{
              Type:     schema.TypeString,
              Required: true,
       },

       "resource_group_name": &schema.Schema{
              Type:     schema.TypeString,
              Required: true,
       },

       "location": &schema.Schema{
              Type:      schema.TypeString,
              Required:  true,
              StateFunc: azureRMNormalizeLocation,
       },
Stub Out The CRUD Functions

The next step is to implement the functions defined in schema.Resource.  The functions needed are the CRUD (Create, Read, Update, Delete) functions for your resource, and they have obvious semantics.  For now we will just stub them out:

func resourceArmLoadBalancerCreate(d *schema.ResourceData, meta interface{}) error {
       return nil
}

func resourceArmLoadBalancerRead(d *schema.ResourceData, meta interface{}) error {
       return nil
}

func resourceArmLoadBalancerUpdate(d *schema.ResourceData, meta interface{}) error {
       return nil
}

func resourceArmLoadBalancerDelete(d *schema.ResourceData, meta interface{}) error {
       return nil
}

Tying It Into Terraform

We now need to wrap everything up in a schema.Resource object and write a function which will create and return this object like so:

func resourceArmBasicLoadBalancer() *schema.Resource {
       return &schema.Resource{
              Create: resourceArmLoadBalancerCreate,
              Read:   resourceArmLoadBalancerRead,
              Update: resourceArmLoadBalancerUpdate,
              Delete: resourceArmLoadBalancerDelete,

              Schema: <above defined schema>,
       }
}

Once this is done, the new resource is assigned a name and registered with the azurerm provider by adding the following line to the provider's resource map:

"azurerm_load_balancer":          resourceArmBasicLoadBalancer(),

Implementing The CRUD Functions

This is where the rubber meets the road.  The basic job here is to pull the needed information out of the configuration schema, use that information to interact with the cloud, then parse any returned information and map it back into data structures that terraform can understand.

Let’s start by looking at the Create function.  This function needs to pull out the input attributes, use those attributes to issue a create command to ARM, and then parse the response to that ARM command to determine the error, if there was one, or the ID of the resource if there was not.  The ID must be handed back to terraform for future use.

Attributes about the resource are pulled out of the function parameter d *schema.ResourceData.  Here is how the information that we need is accessed:

typ := d.Get("type").(string)
name := d.Get("name").(string)
location := d.Get("location").(string)
resourceGroup := d.Get("resource_group_name").(string)

We will also need the azure-sdk-for-go client object, which terraform hands to each CRUD function through the meta parameter.  The following code will get that object:

lbClient := meta.(*ArmClient).loadBalancerClient

Now the job is to massage the data extracted from the schema.ResourceData into data structures that the azure-sdk can understand and then call the CreateOrUpdate method.  The reply to that method is either an error or a response containing the ID of the newly created load balancer.  If it was successful, the ID must be assigned to the terraform schema.ResourceData structure with the following call:

d.SetId(*resp.ID)

The full create method is shown below:

func resourceArmLoadBalancerCreate(d *schema.ResourceData, meta interface{}) error {
       lbClient := meta.(*ArmClient).loadBalancerClient

       // first; fetch a bunch of fields:
       typ := d.Get("type").(string)
       name := d.Get("name").(string)
       location := d.Get("location").(string)
       resourceGroup := d.Get("resource_group_name").(string)

       loadBalancer := network.LoadBalancer{
              Name:       &name,
              Type:       &typ,
              Location:   &location,
              Properties: &network.LoadBalancerPropertiesFormat{},
       }

       resp, err := lbClient.CreateOrUpdate(resourceGroup, name, loadBalancer)
       if err != nil {
              log.Printf("[resourceArmSimpleLb] ERROR LB got status %s", err.Error())
              return fmt.Errorf("Error issuing Azure ARM creation request for load balancer '%s': %s", name, err)
       }

       // hand the ID back to terraform for future use
       d.SetId(*resp.ID)

       return nil
}

The Read Operation

The read operation’s job is to query the resource for its current attributes and then return those attributes to terraform in data structures that terraform can process.  In the create function we saw the ID returned to terraform via the special method SetId.  The read function will need to set all the generic attributes as well.

func resourceArmLoadBalancerRead(d *schema.ResourceData, meta interface{}) error {
       lbClient := meta.(*ArmClient).loadBalancerClient

       name := d.Get("name").(string)
       resGrp := d.Get("resource_group_name").(string)

       loadBalancer, err := lbClient.Get(resGrp, name, "")
       if err != nil {
              return fmt.Errorf("Error reading the state of the load balancer off Azure: %s", err)
       }

       d.Set("location", loadBalancer.Location)
       d.Set("type", loadBalancer.Type)

       return nil
}

Just as in the create function, the read function acquires the azure-sdk-for-go client object from meta.  It also pulls the name of the load balancer and the resource group name out of the resource data.  This information ultimately comes from the configuration file.  The load balancer is then looked up.  If an error occurs it is returned.  If not, the mutable attributes are set back into the resource data.

Note: type and location are not actually mutable, so resetting them is not strictly needed for this resource, but this shows how it would be done for resources that need it.


The general flow is to pull out the attributes from the terraform structures, convert them to the data structures needed by whatever is being used to communicate with the cloud in question (in our example case this is azure-sdk-for-go), and finally to convert the response back into data structures that terraform can understand.

The complete source code for this resource can be found here.  A working configuration file can be found here.


Posted by: John Bresnahan | September 25, 2015

DCM Agent Scrubber

In my last post I talked about the need to sanitize a virtual machine (VM) instance before creating an image (especially a publicly available image) from it. The Dell Cloud Manager (DCM) agent team decided this was an important task and that it should be automated. To help do this we created the tool dcm-agent-scrubber (the scrubber).  This CLI helps the owner of a VM instance remove dangerous files so they are not shared on a child image. It can search for and remove RSA private keys, history files, system logs, cloud-init caches, and various other things. It also creates a recovery file, an important detail discussed below. The source code for the scrubber can be found here.


When running the scrubber you can elect to create a recovery file.  This is very useful if, after you create a child image from your parent VM instance, you plan on using that parent instance again.  The scrubber may delete some files needed for that parent instance to run properly.  Thus, once the child image has been created and the parent instance returns to processing, errors may occur in the absence of those files.  Because of this, before the scrubber deletes files it places them into a tarball.  Once it has finished removing files, it encrypts the tarball using your public key[1].  While it is still best practice to remove that recovery tarball before the child image is created, if it is not removed it is still safe, because only someone with access to the matching private key can read the information inside it.  Once the child image is created, the recovery tarball can be decrypted with the owner’s matching private key and untarred in the root directory of the parent instance, thereby reinstating everything that was removed.

Automation from DCM

Encrypting this recovery file was particularly important to our use case.  DCM controls VMs running in IaaS clouds and can run an agent inside those VMs for additional control [2].  When a DCM customer wishes to create an image of their instance, we would like to make this as safe and convenient as possible, and thus we would like to help them sanitize the image.  We can do this by sending a command to the agent telling it to run the scrubber on the customer’s image.  However, one of the things that the scrubber removes is a secret that the local agent needs to authenticate with DCM.  Therefore, once scrubbed, DCM central command may no longer have that link to the agent.  The scrubber can create the recovery file, but the agent may lose contact with DCM before the recovery file can be safely pulled off the parent VM.  This is why the asymmetric encryption is critical for us.  We can create the recovery file and be assured that even if it is burned onto the child image, no secrets have been leaked.


  1. We do not actually encrypt the entire tarball with a public key.  Instead we create a symmetric key that is used to encrypt the tarball, and then encrypt that symmetric key with an RSA public key.  This is a common practice for large data streams.
  2. The exact architecture of DCM and the DCM agent will be discussed in a later post.
Posted by: John Bresnahan | September 2, 2015

Scrubbing Parent Images

This post is a reminder to scrub your images before making child images.  It is not sufficient to think about doing this only when making public images.  Today I ran into a situation where my child image did not properly boot due to stale data that was injected into the parent image at boot via cloud-init.

Configuring The Parent Instance For Child Image Creation

My goal was to launch a base VM, install custom software into it, and then create a child image from that VM.  I could then launch that child image and know the software I needed would be installed and running in it.

The easiest way for me to do this was to write a small bash script that installed the software and started the services which I needed.  I then launched a new VM with that script as user data.  From there cloud-init ran it and my software was installed.  Then I created a snapshot of that image for later use.  It was a good plan…

The Problem

To make a long story short, I was using Ubuntu 10.04, which has cloud-init 0.5.10.  In its default configuration the user data script was stored to the image in such a way that it would automatically run every time the VM booted.  Thus every time my child image was booted, the bash script I wrote to configure it strictly for its initial creation was run.  Sadly my script was not idempotent: when run a second time it removed software and configuration files, but not until after the services I wanted to run had started.  Therefore I received some initial false positives that the run was successful, only to later have everything fail.  This was all due to the fact that I did not properly sanitize my image before saving it.  Don’t live like me!

When making a snapshot of an image that was launched with user data and cloud-init please clean up the cloud-init logs and user data cache.  I did not and I lost almost an entire day trying to debug my image!
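The cleanup recommended above can be sketched in a few lines of Python.  This is only an illustration: the paths below are common cloud-init artifact locations, not an exhaustive list for every version, and the root parameter exists just to make the function easy to exercise safely:

import shutil
from pathlib import Path

# Typical cloud-init artifact locations, relative to the filesystem root.
# The exact set varies by cloud-init version; treat these as examples.
CLOUD_INIT_ARTIFACTS = [
    "var/lib/cloud/instance",
    "var/lib/cloud/instances",
    "var/lib/cloud/data",
    "var/log/cloud-init.log",
]

def scrub_cloud_init(root="/"):
    """Remove cached user data and logs so a child image boots clean."""
    removed = []
    for rel in CLOUD_INIT_ARTIFACTS:
        path = Path(root) / rel
        if path.is_dir():
            shutil.rmtree(path)       # whole cache directory
            removed.append(str(path))
        elif path.exists():
            path.unlink()             # single log file
            removed.append(str(path))
    return removed

Running this (or the equivalent rm commands) on the parent instance just before snapshotting prevents the stale user-data script from re-running in the child.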

Posted by: John Bresnahan | March 7, 2014

OpenStack and Docker


I recently tried to set up OpenStack with docker as the hypervisor on a single node and I ran into mountains of trouble.  I tried with DevStack and entirely failed using both the master branch and stable/havana.  After much work I was able to launch a container, but the network was not right.  Ultimately I found a path that worked.  This post explains how I did it.

Create the base image

CentOS 6.5

The first step is to have a VM that can support this.  Because I was using RDO this needed to be a Red Hat derivative.  I originally chose a stock vagrant CentOS 6.5 VM.  I got everything set up and then ran out of disk space (many bad words were said).  Thus I used packer and the templates here to create a CentOS VM with 40GB of disk space.  I had to change the “disk_size” value under “builders” to something larger than 40000. Then I ran the build.

packer build template.json

When this completed I had a centos-6.5 vagrant box ready to boot.


I wanted to manage this VM with Vagrant and because OpenStack is fairly intolerant to HOST_IP changes I had to inject in an interface with a static IP address.  Below is the Vagrant file I used:

Vagrant.configure("2") do |config| = "centos-6.5-base" :private_network, ip: ""

  ES_OS_MEM = 3072
  ES_OS_CPUS = 1
  config.vm.hostname = "rdodocker"
  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--memory", ES_OS_MEM]
    vb.customize ["modifyvm", :id, "--cpus", ES_OS_CPUS]
  end
end

After running vagrant up to boot this VM I got the following error:

The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
/sbin/ifdown eth1 2> /dev/null
Stdout from the command:

Stderr from the command:

Thankfully this was easily solved.  I sshed into the VM with vagrant ssh, and then ran the following:

cat <<EOM | sudo tee /etc/sysconfig/network-scripts/ifcfg-eth1 >/dev/null

After that I exited from the ssh session and repackaged the VM with vagrant package --output.  I added the new box to vagrant and altered my Vagrantfile to boot it.  I now had a base image on which I could install OpenStack.


Through much trial and error I came to the conclusion that I needed the icehouse development release of RDO.  Unfortunately this alone was not enough to properly handle docker.  I also had to install nova from the master branch into a python virtualenv and reconfigure the box to use that nova code.  This section has the specifics of what I did.

RDO Install

I followed the instructions for installing RDO that are here, only instead of running packstack --allinone I used a custom answer file.  I generated a template answer file with the command packstack --gen-answer-file=~/answers.txt.  Then I opened that file and substituted every IP address with the IP address that vagrant was injecting into my VM.  I also set the following:


This is very important.  The docker driver does not work with neutron (I learned this the hard way).  I then installed RDO with the command packstack --answerfile answers.txt.


Once RDO was installed and working (without docker) I set up the VM such that nova would use docker.  The instructions here are basically what I followed.  Here is my command set:

sudo yum -y install docker-io
sudo service docker start
sudo chkconfig docker on
sudo yum -y install docker-registry
sudo usermod -G docker nova
sudo service redis start
sudo chkconfig redis on
sudo service docker-registry start
sudo chkconfig docker-registry on

I edited the file /etc/sysconfig/docker-registry and added the following:

export SETTINGS_FLAVOR=openstack
export REGISTRY_PORT=5042 
. /root/keystonerc_admin 

Note that some of the values in that file were already set.  I removed those entries.

OpenStack for Docker Configuration

I changed this entry in  /etc/nova/nova.conf

compute_driver = docker.DockerDriver

and set this value (uncommenting it) in /etc/glance/glance-api.conf:

container_formats = ami,ari,aki,bare,ovf,docker

Hand Rolled Nova

Unfortunately docker does not work with the current release of RDO icehouse.  Therefore I had to get the latest code from the master branch of nova.  Further I had to install it.  To be safe I put it in its own python virtualenv.  In order to install nova like this a lot of dependencies must be installed.  Here is the yum command I used to install what I needed (and in some cases just wanted).

yum update
yum install -y telnet git libxslt-devel libffi-devel python-virtualenv mysql-devel

Then I installed nova and its package specific dependencies into a virtualenv that I created.  The command sequence is below:

git clone
virtualenv --no-site-packages /usr/local/OPENSTACKVE
source /usr/local/OPENSTACKVE/bin/activate
pip install -r requirements.txt
pip install qpid-python
pip install mysql-python
python install

At this point I had an updated version of the nova software, but it was running against an old version of the database.  Thus I had to run:

nova-manage db sync

The final step was to change all of the nova startup scripts to point to the code in the virtualenv instead of the code installed to the system.  I did this by opening up every file at /etc/init.d/openstack-nova-* and changing the exec="/usr/bin/nova-$suffix" line to exec="/usr/local/OPENSTACKVE/bin/nova-$suffix".  I then rebooted the VM and I was FINALLY all set to launch docker containers that I could ssh into!

Posted by: John Bresnahan | August 26, 2013

HTTP GET-outta here!

In the upcoming Havana release of OpenStack, virtual machine images can be downloaded much more efficiently.  This post will explain how to configure Glance and Nova so that VM images can be directly copied via the file system instead of routing all of the data over HTTP.

Historically How An Image Becomes a Law

Often times the architecture of an OpenStack deployment looks like the following:


In the architecture above, Glance and Nova-compute are both backed by the same file system.  Glance stores VM images available for boot on the same file system onto which Nova-compute downloads those images for use as an instance store.  Even though the file system is shared, in previous releases of OpenStack a lot of unnecessary work had to be done to retrieve an image.  The following steps were needed:

  1. Glance opened the file on disk and read it into user space memory.
  2. Glance marshaled the data in the image into the HTTP protocol.
  3. The data was sent through the TCP stack.
  4. The data was received through the TCP stack.
  5. Nova compute marshaled the HTTP data into memory buffers.
  6. Nova compute sent the data buffers back to the file system.

That is a lot of unneeded memory copies and processing.  If HTTPS is used, the process is even more laborious.

Direct Copy

In the Havana release a couple of patches have made it possible for the file to be accessed directly, skipping all of the HTTP protocol processing.  The specifics of these patches can be found in the following links, but will otherwise not be discussed in this post.

Setting Up Glance

In order to make this process work the first thing that must be done is to describe the Glance file system store in a JSON document.  There are two mandatory pieces of information that must be determined.

  1. The mount point.  This is simply the point at which the associated file system is mounted.  The command line tool df can help you determine this.
  2. A unique ID.  This ID is what Glance and Nova use to determine that they are talking about the same file system.  It can be anything you like; the only requirement is that it must match what is given to Nova (described later).  You can use uuidgen to generate this value.

Once you have this information create a file that looks like the following:
{
    "id": "b9d69795-5951-4cb0-bb5c-29491e1e2daf",
    "mountpoint": "/"
}
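If you generate this file often, a small script helps.  The helper below is my own illustration, not part of Glance; the function name and output path are arbitrary:

import json
import uuid

def write_store_metadata(path, mountpoint, store_id=None):
    """Write the JSON document that Glance's filesystem_store_metadata_file
    option points at: a unique id shared with Nova plus the mount point."""
    meta = {
        # str(uuid.uuid4()) plays the role of the uuidgen command
        "id": store_id or str(uuid.uuid4()),
        "mountpoint": mountpoint,
    }
    with open(path, "w") as f:
        json.dump(meta, f, indent=4)
    return meta

Point filesystem_store_metadata_file at whatever path you pass in, and reuse the same id on the Nova side.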

Now edit your glance-api.conf file and make the following changes:
show_multiple_locations = True
filesystem_store_metadata_file = <path to the JSON file created above>

This tells Glance to expose direct URLs to clients and to associate the metadata described in your JSON file with all URLs that come from the file system store.

Note that this metadata will only apply to new images.  Anything that was in the store prior to this configuration change will not have the associated metadata.

Setting Up Nova Compute

Now we must configure Nova Compute to make use of the new information that Glance is exposing.  Edit nova.conf and add the following values:




This tells Nova to use direct URLs if they have the file:// scheme and are advertised with the id b9d69795-5951-4cb0-bb5c-29491e1e2daf.  It also sets where Nova has this file system mounted.  This may be different from where Glance has it mounted; if so, the correct path will be calculated.  For example, if Glance has the file system mounted at /mnt/gluster and Nova has it mounted at /opt/gluster, the paths with which each accesses a given file will differ.  However, because Glance tells Nova where it has the file system mounted, Nova can compute the correct path.
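The path arithmetic described here can be sketched in a few lines.  This is an illustration of the idea, not Nova's actual implementation:

from urllib.parse import urlparse
import os.path

def translate_direct_url(direct_url, glance_mountpoint, nova_mountpoint):
    """Rebase a file:// URL advertised by Glance onto the path where
    Nova has the same shared file system mounted."""
    path = urlparse(direct_url).path
    # The advertised path must live under Glance's mount point.
    rel = os.path.relpath(path, glance_mountpoint)
    if rel.startswith(".."):
        raise ValueError("%s is not under %s" % (direct_url, glance_mountpoint))
    return os.path.join(nova_mountpoint, rel)

With the gluster example above, a URL of file:///mnt/gluster/images/abc and mount points /mnt/gluster and /opt/gluster yields /opt/gluster/images/abc.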

Verify the Configuration

To verify that things are setup correctly do the following:

Add a new image to Glance

$ wget
$ glance image-create --file fedora-19.x86_64.qcow2 --name fedora-19.x86_64.qcow2 --disk-format qcow2 --container-format bare --progress
[=============================>] 100%
| Property         | Value                                |
| checksum         | 9ff360edd3b3f1fc035205f63a58ec3e     |
| container_format | bare                                 |
| created_at       | 2013-08-26T20:23:12                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 461bf150-4d41-47be-967f-3b4dbafd7fa5 |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | fedora-19.x86_64.qcow2               |
| owner            | 96bd6038d1e4404e83ad12108cad7029     |
| protected        | False                                |
| size             | 237371392                            |
| status           | active                               |
| updated_at       | 2013-08-26T20:23:16                  |

Verify that Glance is exposing the direct URL information

$ glance --os-image-api-version 2 image-show 461bf150-4d41-47be-967f-3b4dbafd7fa5

| id | 461bf150-4d41-47be-967f-3b4dbafd7fa5 |
| locations | [{u'url': u'file:///opt/stack/data/glance/images/461bf150-4d41-47be-967f-3b4dbafd7fa5', u'metadata': {u'mountpoint': u'/', u'id': u'b9d69795-5951-4cb0-bb5c-29491e1e2daf'}}] |

Make sure that the metadata above matches what you put in the JSON file.
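This comparison is easy to automate.  The helper below is hypothetical (the name is mine), but it shows the check: every advertised location's metadata should equal the document in your configured JSON file:

import json

def metadata_matches(locations, metadata_path):
    """Check that every location Glance advertises carries the same
    filesystem-store metadata we wrote into the JSON config file."""
    with open(metadata_path) as f:
        expected = json.load(f)
    return bool(locations) and all(
        loc.get("metadata") == expected for loc in locations
    )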

Boot the image

$ nova boot --image 461bf150-4d41-47be-967f-3b4dbafd7fa5 --flavor 2 testcopy
| Property                             | Value                                |
| OS-EXT-STS:task_state                | scheduling                           |
| image                                | fedora-19.x86_64.qcow2               |
| OS-EXT-STS:vm_state                  | building                             |

Verify the copy in logs

The image should successfully boot.  However, we not only want to know that it booted, we also want to know that it propagated via a direct file copy.  The easiest way to verify this is by looking at nova compute’s logs.  Look for the string Successfully transferred using file.  Lines like the following should be found:

2013-08-26 19:39:38.170 INFO [-] Copied /opt/stack/data/glance/images/70746c77-5625-41ff-a3f8-a3dfb35d33e5 using < object at 0x416f410>
2013-08-26 19:39:38.172 INFO nova.image.glance [req-20b0ce76-1f70-482b-a72b-382621e9c8f9 admin admin] Successfully transferred using file

If these lines are found, then congratulations, you just made your system more efficient.

Posted by: John Bresnahan | August 14, 2013

Preparing Fedora 19 for OpenStack Glance Development

For development of Glance I have recently been using the publicly available Fedora 19 VM which RDO made available here.  From time to time I find that I need to boot the VM clean (e.g. verifying that my environment is not influencing recent changes).  In this brief post I will describe how to prepare that clean VM instance with all the dependencies needed for Glance development.

Installing the Dependencies

Run the following to install all the base deps:

sudo yum update -y
sudo yum install git vim gcc postgresql-devel mariadb-devel python-virtualenv libffi-devel libxslt-devel

Install Glance Into A Virtualenv

Now run the following to create a python virtual environment and install glance and its deps into it.

git clone git://
virtualenv --no-site-packages VE
source VE/bin/activate
cd glance/
pip install -r requirements.txt
pip install -r test-requirements.txt
python develop
./ -N

At this point you have a configured python virtualenv with Glance installed into it, and you are ready to start developing Glance!

Posted by: John Bresnahan | August 7, 2013

Download Modules In Nova

A patch has recently been accepted into Nova that allows images to be downloaded in user-defined ways.  In the past images could only be fetched via the Glance REST API, but now it is possible to fetch images via customized protocols (think bittorrent and direct access to swift).

Loading Modules

In nova.conf there is the option allowed_direct_url_schemes.  This is a list of strings that tells nova which download modules should be imported at load time.

Download modules are loaded via stevedore.  Each download module must be registered with Python as an entry point.  The namespace must be and the name must match one of the strings in the allowed_direct_url_schemes list.

When Nova-compute starts it will walk the list of values in allowed_direct_url_schemes, and for each one it will ask stevedore to return the module associated with that name under the namespace.  Nova will then make a call into that module (<module>.get_schemes()) to get the list of URL schemes for which the module can be used.  The module will then be associated with each of the strings returned by get_schemes() in a look-up table.

Later, when Nova attempts to download an image it will ask Glance for a list of direct URLs where that image can be found.  It will then walk that list to see if any of the URL schemes can be found in the download module look-up table.  If so it will be used for that download.
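The loading and look-up flow described above can be modeled in a few self-contained lines.  FileModule here stands in for a real module that stevedore would load from an entry point; the function names are mine, not Nova's:

from urllib.parse import urlparse

class FileModule:
    """Stand-in for a download module; real ones are discovered
    through stevedore entry points rather than defined inline."""
    @staticmethod
    def get_schemes():
        return ["file", "filesystem"]

def build_lookup_table(modules):
    """Map every scheme a module advertises to that module."""
    table = {}
    for mod in modules:
        for scheme in mod.get_schemes():
            table[scheme] = mod
    return table

def pick_module(table, direct_urls):
    """Walk Glance's direct URLs and return the first (url, module)
    pair whose scheme has a registered download module."""
    for url in direct_urls:
        mod = table.get(urlparse(url).scheme)
        if mod is not None:
            return url, mod
    return None

Given a table built from FileModule, a list like ["http://...", "file:///..."] resolves to the file:// URL and the file module.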

A File Example

For example, here is the setup.cfg for the file download module:

[entry_points] =
    file =

This sets up the module as an entry point.

When allowed_direct_url_schemes = file, Nova will ask stevedore for the file entry point in the namespace.  Stevedore will return the corresponding module, and Nova will then call its get_schemes() function.  This call will return the list of strings file and filesystem.  Two entries will then be added to the download module look-up table:

download_modules['file'] = <module>
download_modules['filesystem'] = <module>

When booting, if Glance returns a direct URL of file:///var/lib/glance/images/fedora19.qcow2 Nova will look up file in the download_modules table and thereby get the module for use in the download (I feel like I have said download too many times.  download).

The Interface

Download modules are python modules that must have the following interface functions:

def get_download_handler(**kwargs):
    returns a download handler object
def get_schemes():
    returns a list of strings representing schemes

The meat of the work is done by the download handler object returned from get_download_handler().  This object only needs to have a single method:

download(self, url_parts, dst_path, metadata, **kwargs):

It is called with the URL to download (already parsed with urlparse), the path to the local destination file, and metadata describing the transfer (this last item will be described in a later post).  When it returns, Nova will assume that the data has been downloaded to dst_path.  If anything goes wrong, one of the following errors should be raised:


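To make the interface concrete, here is a toy handler for file:// URLs.  The class name and the direct use of shutil are my own; the real Nova file module is more involved:

import shutil

class FileTransfer:
    """A minimal download handler for file:// URLs, showing only the
    shape of the single download() method described above."""

    def download(self, url_parts, dst_path, metadata, **kwargs):
        # url_parts is an already-parsed URL (a urllib.parse result);
        # copy the source file to the destination Nova expects.
        shutil.copyfile(url_parts.path, dst_path)

def get_download_handler(**kwargs):
    return FileTransfer()

def get_schemes():
    return ["file", "filesystem"]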
Posted by: John Bresnahan | July 29, 2013

RDO on Fedora 19

A few months back I installed RDO on Fedora 18 and it just worked!  Unfortunately I ran into a few blockers when trying to install it on Fedora 19.  In this post I will describe my experience setting up RDO on Fedora 19 and how I made it work.


Here is the set of commands, in order, that I used to install RDO on Fedora 19.  An explanation of why each was done can be found in the later sections of this post.

  1. yum update -y
  2. yum install -y
  3. yum install -y python-django14
  4. yum install -y ruby
  5. yum install -y openstack-packstack
  6. Fix nova_compute.pp
    1. Open /usr/lib/python2.7/site-packages/packstack/puppet/templates/nova_compute.pp
    2. Delete lines 40 – 45 (6 lines total)
  7. adduser jbresnah
  8. passwd jbresnah
  9. usermod -G wheel jbresnah
  10. su - jbresnah
  11. packstack --allinone --os-quantum-install=n
  12. sed -i 's/cluster-mechanism/ha-mechanism/' /etc/qpidd.conf
  13. reboot


RDO is Red Hat’s community distribution of OpenStack.  The quick start for installing it can be found here.  The documentation there will be the most authoritative source, as it will be kept up to date and this blog post will likely not.  That said, these are the steps I had to do in order to get the present-day version of Fedora 19 to work with the current packstack.

Update the system, install initial dependencies and create a user

  1. yum update -y
  2. yum install -y

Note that step #2 includes the correct path to the Havana release, which is different from the RDO instructions, which point at the Grizzly release.


Bug 1: puppet/util/command_line

Before continuing with the installation instructions, note that there are a couple of bugs to work around. The first manifests itself with the following error message:

2013-07-28 07:09:29::ERROR::ospluginutils::144::root:: LoadError: no such file to load -- puppet/util/command_line
require at org/jruby/
require at /usr/share/rubygems/rubygems/core_ext/kernel_require.rb:51
(root) at /usr/bin/puppet:3

Details on this bug can be found here: 961915 and 986154.

The workaround is easy: simply install ruby with the following command:

yum install -y ruby

Bug 2: kvm.modules

The second bug results in this error:
ERROR : Error during puppet run : Error: /bin/sh: /etc/sysconfig/modules/kvm.modules: No such file or directory

Details on it can be found here.  To work around it do the following as root:

  1. Open /usr/lib/python2.7/site-packages/packstack/puppet/templates/nova_compute.pp
  2. Delete lines 40 – 45 (6 lines total)
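The two manual steps above can also be scripted. A minimal sketch (the line range 40–45 comes from the instructions above; the existence check makes it a safe no-op on machines without packstack, and a backup is kept so the edit can be undone):

```shell
# Work around the kvm.modules bug by deleting lines 40-45 of
# nova_compute.pp. The file path is taken from the steps above.
PP=${PP:-/usr/lib/python2.7/site-packages/packstack/puppet/templates/nova_compute.pp}
if [ -f "$PP" ]; then
    cp "$PP" "$PP.orig"      # keep a backup copy
    sed -i '40,45d' "$PP"    # delete lines 40 through 45 in place
fi
```

sed's `40,45d` address range deletes exactly the six lines the workaround calls for.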

Bug 3: django14

Another bug detailed here requires that you explicitly install django14.
yum install python-django14

Bug 4: qpidd

The file /etc/qpidd.conf has an invalid value in it.  To correct this, run the following:

sed -i 's/cluster-mechanism/ha-mechanism/' /etc/qpidd.conf
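If you want the fix to be verifiable, the same substitution can be wrapped in a quick check; a sketch (guarded so it is a no-op when the file is absent):

```shell
# Apply the qpidd.conf fix only if the bad option is present, then
# confirm that the substitution took effect.
CONF=${CONF:-/etc/qpidd.conf}
if [ -f "$CONF" ] && grep -q 'cluster-mechanism' "$CONF"; then
    sed -i 's/cluster-mechanism/ha-mechanism/' "$CONF"
    grep -q 'ha-mechanism' "$CONF" && echo "qpidd.conf updated"
fi
```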

Install OpenStack with packstack

At this point we are ready to install with packstack.  You need to run packstack as a non-root user:

su - jbresnah
packstack --allinone --os-quantum-install=n

Once this completes OpenStack should be installed and all set.  From there proceed with the instructions in the RDO Quick Start starting here.

Note: Running RDO in a Fedora 19 VM on Fedora 19

I tried this out in a VM and had some trouble.  My host OS is Fedora 19 and the VM in which I was installing RDO was also Fedora 19.  The problem is that on both the host and the guest the IP address for the network bridge was the same.  This ultimately caused a failure in packstack:

ERROR : Error during puppet run : Error: /usr/sbin/tuned-adm profile virtual-host returned 2 instead of one of [0]

The solution is to edit the file /etc/libvirt/qemu/networks/default.xml (on the guest VM) and change the IP address and DHCP range to a different subnet.  I used the following:

<bridge name="virbr0" />
<mac address='52:54:00:28:8E:A9'/>
<ip address="" netmask="">
<range start="" end="" />
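If you would rather script the edit than make it by hand, something like the following moves the guest's default libvirt network onto another subnet. Here 192.168.122 is libvirt's usual default subnet and 192.168.123 is an arbitrary replacement; adjust both to match your setup:

```shell
# Shift the guest's libvirt default network to a different subnet so it
# no longer collides with the host's bridge. Guarded so the sketch is a
# no-op on machines without libvirt.
XML=${XML:-/etc/libvirt/qemu/networks/default.xml}
if [ -f "$XML" ]; then
    sed -i 's/192\.168\.122\./192.168.123./g' "$XML"
fi
# After editing, restart the network (or just reboot, as in step 13):
#   virsh net-destroy default && virsh net-start default
```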

Then re-run packstack with the answer file that was left in the user's home directory:

packstack --answer-file=packstack-answers-20130729-111751.txt

Posted by: John Bresnahan | July 27, 2013

Hacking with devstack in a VM

When I first started working on OpenStack I spent too much time trying to find a good development environment. I even started a series of blog posts (that I never finished) about how I rolled my own development environments. I had messed around with devstack a bit, but I had a hard time configuring and using it in a way that was comfortable for me. I have since rectified that problem, and I now have a methodology for using devstack that works very well and saves me a lot of time. I describe it in detail in this post.

Run devstack in a VM

There may be a temptation to run devstack locally (easier access to local files, IDE, debugger, etc.); resist this temptation.  Put devstack in a VM.

Here are the steps I recommend for your devstack VM:

  1. Create a Fedora 18 qcow2 base VM
  2. Create a child VM from that base
  3. Run devstack in the child VM

In this way you always have a fresh, untouched Fedora 18 VM (the base image) around for creating new child images, which can be used for additional disposable devstack VMs or anything else.

Create the base VM

The following commands will create a Fedora 18 base VM.

qemu-img create -f qcow2 -o preallocation=metadata devstack_base.qcow2 12G
virt-install --connect qemu:///system -n devstack_install -r 1024 --disk path=`pwd`/devstack_base.qcow2 -c `pwd`/Fedora-18-x86_64-Live-Desktop.iso --vnc --accelerate --hvm

At this point a window will open that looks like this:


Simply follow the installation wizard just as you would when installing Fedora on bare metal.  When partitioning the disks, make sure that the swap space is not bigger than 1GB.  Once the installation is complete, run the following commands:

sudo virsh -c qemu:///system undefine devstack_install
sudo chown <username>:<username> devstack_base.qcow2

Import into virt-manager

From here on I recommend using the GUI program virt-manager.  It is certainly possible to do everything from the command line but virt-manager will make it a lot easier.

Find the Create a new virtual machine button (shown below) and click it:

This will open a wizard that will walk you through the process.  In the first dialog, click on Import existing disk image as shown below.


Once this is complete run your base VM.  At this point you will need to complete the Fedora installation wizard.

Before you can ssh into this VM you need to determine its IP address and enable sshd in it.  To do this log in via the display and get a shell.  Run the following commands as root:

sudo systemctl enable sshd.service
sudo systemctl start sshd.service

You can determine the IP address by running ifconfig. In the sample session below note that the VM's IP address is


ssh into the VM and install commonly used software with the following commands:

ssh root@
yum update -y
yum install -y python-netaddr git python-devel python-virtualenv telnet
yum groupinstall 'Development Tools'

You now have a usable base VM. Shut it down.

Create the Child VM

The VM we created above will serve as a solid clean base upon which you can create disposable devstack VMs quickly.  In this way you will know that your environment is always clean.

Create the child VM from the base:

qemu-img create -b devstack_base.qcow2 -f qcow2  devstack.qcow2

Again import this child into virt-manager.  Configure it with at least 2GB of RAM. When you get to the final screen you have to take additional steps to make sure that virt-manager knows this is a qcow2 image.  Make sure that the Customize configuration before install option is selected (as shown below) and then click on Finish.


In the next window find Disk 1 on the left hand side and click it.  Then on the right hand side find Storage format and make sure that qcow2 is selected.  An example screen is below:


Now click on Begin Installation and your child VM will boot up.  Just as you did with the base VM, determine its IP address and log in from a host shell.

Install devstack

Once you have the VM running, ssh into it.  devstack cannot be installed as root, so be sure to add a user that has sudo privileges.  Then log into that user's account and run the following commands (note that the first two commands work around a problem that I have observed with tgtd).

mv /etc/tgt/conf.d/sample.conf /etc/tgt/conf.d/sample.conf.back
service tgtd restart
git clone git://
cd devstack
./stack.sh

devstack will now ask for a bunch of passwords; just hit enter for them all and wait (a long time) for the script to finish.  When it ends you should see something like the following:

Horizon is now available at
Keystone is serving at
Examples on using novaclient command line is in
The default users are: admin and demo
The password: 5b63703f25be4225a725
This is your host ip:
completed in 172 seconds.

Hacking With devstack

If the above was painful do not fret, you never have to do it again.  You may choose to create more child VMs, but for the most part you can use your single devstack VM over and over.

Checkout devstack

In order to run devstack commands you have to first set some environment variables.  Fortunately devstack has a very convenient script for this named openrc. You can source it as the admin user or the demo user.  Here is an example of setting up an environment for using OpenStack shell commands as the admin user and admin tenant:

. openrc admin admin
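Under the hood, openrc simply exports the environment variables that the OpenStack command-line clients read. Roughly (the password and auth URL below are illustrative placeholders; devstack generates the real values):

```shell
# Approximately what ". openrc admin admin" amounts to: export
# credentials for the OpenStack clients to pick up.
export OS_USERNAME=admin                       # first argument to openrc
export OS_TENANT_NAME=admin                    # second argument to openrc
export OS_PASSWORD=secret                      # placeholder; devstack picks the real one
export OS_AUTH_URL=http://127.0.0.1:5000/v2.0  # placeholder keystone endpoint
```

With these set, clients such as glance and nova need no extra credential flags.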

It is that easy!  Now let's run a few OpenStack commands to make sure everything works:

[jbresnah@localhost devstack]$ . openrc admin admin
[jbresnah@localhost devstack]$ glance image-list
| ID                                   | Name                            | Disk Format | Container Format | Size     | Status |
| a3e245c2-c8fa-4885-9b2e-2fc2e5f358a1 | cirros-0.3.1-x86_64-uec         | ami         | ami              | 25165824 | active |
| e6554b2a-cc75-42bf-8278-e3fc3f97501b | cirros-0.3.1-x86_64-uec-kernel  | aki         | aki              | 4955792  | active |
| f2750476-4125-46f1-8339-f94140c40ba3 | cirros-0.3.1-x86_64-uec-ramdisk | ari         | ari              | 3714968  | active |
[jbresnah@localhost devstack]$ glance image-show cirros-0.3.1-x86_64-uec
| Property              | Value                                |
| Property 'kernel_id'  | e6554b2a-cc75-42bf-8278-e3fc3f97501b |
| Property 'ramdisk_id' | f2750476-4125-46f1-8339-f94140c40ba3 |
| checksum              | f8a2eeee2dc65b3d9b6e63678955bd83     |
| container_format      | ami                                  |
| created_at            | 2013-07-26T23:20:18                  |
| deleted               | False                                |
| disk_format           | ami                                  |
| id                    | a3e245c2-c8fa-4885-9b2e-2fc2e5f358a1 |
| is_public             | True                                 |
| min_disk              | 0                                    |
| min_ram               | 0                                    |
| name                  | cirros-0.3.1-x86_64-uec              |
| owner                 | 6f4bdfaac28349b6b8087f51ff963cd5     |
| protected             | False                                |
| size                  | 25165824                             |
| status                | active                               |
| updated_at            | 2013-07-26T23:20:18                  |
[jbresnah@localhost devstack]$ glance image-download --file local.img a3e245c2-c8fa-4885-9b2e-2fc2e5f358a1
[jbresnah@localhost devstack]$ ls -l local.img
-rw-rw-r--. 1 jbresnah jbresnah 25165824 Jul 26 19:29 local.img

In the above session we listed all of the images that are registered with Glance, got specific details on one of them, and then downloaded it.  At this point you can play with the other OpenStack clients and components as well.


devstack runs all of the OpenStack components under screen.  You can attach to the screen session by running:

screen -r

You should now see something like the following:


Notice all of the entries on the bottom screen toolbar.  Each one of these is a session running an OpenStack service, and the output is a log from that service.  To toggle through them hit <ctrl+a+space>.

Making a Code Change

In any given screen session you can hit <ctrl+c> to kill a service, and then <up arrow> <enter> to restart it.  The current directory is the home directory of the python source code, as if you had checked it out from git.  You can make changes in that directory and you do not need to install them in any way.  Simply kill the service (<ctrl+c>), make your change, and then restart it (<up arrow> <enter>).

In the following screen cast you see me do the following:

  1. Connect to the devstack screen session
  2. toggle over to the glance-api session
  3. kill the session
  4. alter the configuration
  5. make a small code change
  6. restart the service
  7. verify the change
Posted by: John Bresnahan | July 15, 2013

Quality Of Service In OpenStack

In this post I will be exploring the current state of quality of service (QoS) in OpenStack.  I will be looking at both what is possible now and what is on the horizon and targeted for the Havana release.  Note that I am truly only intimately familiar with Glance and thus part of the intention of this post is to gather information from the community.  Please let me know what I have missed, what I have gotten incorrect, and what else might be out there.


The term quality of service traditionally refers to a user's reservation, or guarantee, of a certain amount of network bandwidth.  Instead of letting current network traffic and TCP flow-control and back-off algorithms dictate the rate of a user's transfer across a network, the user would request N bits/second over a period of time.  If the request is granted the user could expect to have that amount of bandwidth at their disposal.  It is quite similar to resource reservation.

When considering quality of service in OpenStack we really should look beyond networks and at all of the resources on which there is contention, the most important of which are:

  • CPU
  • Memory
  • Disk IO
  • Network IO
  • System bus

Let us take a look at QoS in some of the prominent OpenStack components.

Keystone and Quotas

While quotas are quite different from QoS, they do have some overlapping concepts and thus will be discussed here briefly.  A quota is a set maximum amount of a resource that a user is allowed to use.  This does not necessarily mean that the user is guaranteed that much of the given resource; it just means that is the most they can have.  That said, quotas can sometimes be manipulated to provide a type of QoS (ex: set a bandwidth quota to 50% of your network resources per user and then only allow two users at a time).
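The parenthetical example can be made concrete with a little arithmetic (the 1000 Mbit/s link capacity is an invented figure):

```shell
# A crude QoS built from quota knobs: cap each user's bandwidth quota at
# 50% of the link and admit at most two users at a time, and every
# admitted user is effectively guaranteed half the link.
LINK_MBPS=1000                     # assumed link capacity in Mbit/s
QUOTA_PCT=50                       # per-user quota as a percent of the link
MAX_USERS=$((100 / QUOTA_PCT))     # users admitted before oversubscription
PER_USER_MBPS=$((LINK_MBPS * QUOTA_PCT / 100))
echo "admit at most $MAX_USERS users, each guaranteed $PER_USER_MBPS Mbit/s"
```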

Currently there is an effort in the keystone community to add centralized quota management for all OpenStack components to keystone.  Keystone will provide management interfaces to the quota information.  When a user attempts to use a resource OpenStack components will query Keystone for the particular resource’s quota.  Enforcement of the quota will be done by that OpenStack service, not by Keystone.

The design for quota management in keystone seems fairly complete and is described here.  The implementation does not appear to be targeted for the Havana release but hopefully we will see it some time in the I cycle.  Note that once this is in Keystone the other OpenStack components must be modified to use it so it will likely be some time before this is available across OpenStack.


Glance

Glance is the image registry and delivery component of OpenStack.  The main resources it uses are network bandwidth when uploading/downloading images and the storage capacity of backend storage systems (like Swift and GlusterFS).  A user of Glance may wish to get a guarantee from the server that when it starts uploading or downloading an image the server will deliver N bits/second.  To achieve this, Glance not only has to reserve bandwidth on the worker's NIC and the local network, it also has to get a similar QoS guarantee from the storage system which houses its data (Swift, GlusterFS, etc.).

Current State

Glance provides no first-class QoS features.  There is no way at all for a client to negotiate or discover the amount of bandwidth which can be dedicated to it.  Even using outside OS-level services to work around this issue is unlikely to help.  The main problem is reserving the end-to-end path (from the network all the way through to the storage system).

Looking forward

In my opinion the solution to adding QoS to Glance is to get Glance out of the image delivery business.  Efforts are well underway (and should be available in the Havana release) to expose the underlying physical locations of a given image (things like http:// and swift://).  In this way the user can negotiate directly with the storage system for some level of QoS, or it can use Staccato to handle the transfer for it.


Cinder

QoS for Cinder appears to be underway for the Havana release.  Users of Cinder can ask for a specific volume type.  Part of that volume type is a string that defines the QoS of the volume IO (fast, normal, or slow).  Backends that can handle all of the demands of the volume type become candidates for scheduling.

More information about QoS in cinder can be found in the following links:


Neutron

Neutron (formerly known as Quantum) provides network connectivity as a service.  A blueprint for QoS in Neutron can be found here and additional information can be found here.

This effort is targeted for the Havana release.  In the presence of Neutron plugins that support QoS (Cisco, Nicira, ?) this will allow users to reserve network bandwidth.


Nova

In Nova all of the resources in the above list are used.  User VMs necessarily use some amount of CPU, memory, IO, and network resources. Users truly interested in a guaranteed level of quality of service need a way to pin all of those resources.  An effort for this in Nova is documented here with this blueprint.

While this effort appears to be what is needed in Nova, it is unfortunately quite old and currently marked as obsolete.  However, the effort seems to have gained new life recently, as shown by this email exchange. A definition of work can be found here, with the blueprint here.

This effort will operate similarly to how Cinder is proposing QoS. A set of strings will be defined: High (1 vCPU per CPU), Normal (2 vCPUs per CPU), Low (4 vCPUs per CPU).  This type string would then be added as part of the instance type when requesting a new VM instance.  Memory commitment is not addressed in this effort, nor is network or disk IO (however those are best handled by Neutron and Cinder respectively).

Unfortunately nothing seems to be scheduled for Havana.

Current State

Currently in nova there is the following configuration option:

# cpu_allocation_ratio=16.0

This sets the ratio of virtual CPUs to physical CPUs.  If this value is set to 1.0 then the user will know that the number of CPUs in its requested instance type maps to full system CPUs.  Similarly there is:

# ram_allocation_ratio=1.5

which does the same thing for RAM.  While these do give the user a notion of QoS, they are too coarse-grained and can be inefficient for users that do not need or want such QoS.
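To see how coarse these knobs are, consider a host with 8 physical CPUs and 64GB of RAM under the default ratios (the host sizes are invented for illustration):

```shell
# Virtual capacity nova will schedule on one example host under the
# default overcommit ratios. Integer math: the 1.5 ratio is scaled by 10.
PHYS_CPUS=8            # invented example host
PHYS_RAM_MB=65536      # 64GB, invented example host
CPU_RATIO=16           # cpu_allocation_ratio=16.0
RAM_RATIO_X10=15       # ram_allocation_ratio=1.5 (times 10)
echo "schedulable vCPUs:  $((PHYS_CPUS * CPU_RATIO))"
echo "schedulable RAM MB: $((PHYS_RAM_MB * RAM_RATIO_X10 / 10))"
```

With cpu_allocation_ratio=1.0 the same host would schedule only 8 vCPUs, which is the kind of guarantee described above.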


Swift

Swift does not have any explicit QoS options.  However, it does have rate-limiting middleware which provides a sort of bandwidth quota for users.  How to set these values can be found here.
