Posted by: John Bresnahan | January 16, 2013

OpenStack Glance And Nova Backed By Red Hat Storage

In this post I will explain how to configure Glance and Nova to be backed by Red Hat Storage (and thereby gluster).

The ISOs that I link to in this post require a Red Hat Network subscription.  The information here can be useful without it, but it is less convenient.

Deployment Details

In my deployment I use two virtual machines: one for Red Hat Storage and one for all of the OpenStack services (Keystone, Glance, and Nova).  Both VMs run on my Lenovo T530 laptop, which runs Fedora 17.  I will not go into detail about how to create VMs because that is covered pretty well in other places, but I will touch on it just to make it as clear as possible.

Red Hat Storage VM

For the base VM I wanted a lot of space, so I created a VM with 48GB of storage (10GB should be plenty) and configured it to have 1GB of RAM and 1 CPU.
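If you want a concrete starting point, something like the following virt-install command creates a comparable VM.  This is a rough sketch only: the ISO path and disk path are placeholders, and your libvirt storage layout may differ.

virt-install --name rhs --ram 1024 --vcpus 1 \
  --disk path=/var/lib/libvirt/images/rhs.img,size=48 \
  --cdrom /path/to/rhs-dvd.iso --os-variant rhel6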

  1. Get Red Hat Storage by going here and clicking on Binary DVD.
  2. Create a VM with at least 8 GB of storage (mine had 48) and install Red Hat Storage by following the instructions here through step 7.1.
  3. Create a gluster volume with the following commands:
# myhost must resolve to the storage VM (add it to /etc/hosts if needed)
gluster volume create testvol myhost:/exp1
gluster volume start testvol
/etc/init.d/glusterd restart

The VM should now be ready to serve a gluster file system.
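As a quick sanity check before moving on, you can ask gluster to describe the volume; it should report testvol with status Started (exact output varies by version).

gluster volume info testvol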

OpenStack VM

Here we will use a single VM for all of the OpenStack services.  The steps for backing either Glance or Nova with gluster are the same in a distributed environment.

Create a VM with 8GB of storage using RHEL 6.4 (64 bit) by downloading the binary DVD available here.

Install OpenStack by following the instructions here through step 6.

At this point you should have a VM which is running enough OpenStack services to upload images, launch instances, and ssh into them.  The instructions referenced above should provide you with steps to verify this.

Note: with SELinux enabled I had to run the following command (putting SELinux into permissive mode) in order to properly boot images:
setenforce permissive

You may also need to open up /etc/nova/nova.conf and make sure you have the line below.  Since nova-compute is itself running inside a VM here, hardware virtualization is likely unavailable, so plain qemu emulation is needed:

libvirt_type = qemu
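If you are not sure whether your VM exposes hardware virtualization, a quick check is to count the virtualization CPU flags; a result of 0 means no hardware virtualization is available and qemu is the safe choice.

egrep -c '(vmx|svm)' /proc/cpuinfo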

Mount Gluster

The OpenStack VM now needs to mount the storage system provided by Red Hat Storage so that it can be used by Glance and Nova.  First the VM must be configured so that the recommended gluster client can be used.  The instructions for this are here, but I will put them in my own words.

Go to https://access.redhat.com/subscriptions/rhntransition and navigate: Subscriptions -> RHN Classic -> Registered Systems.

Find your system and click on it.  Click on 'Alter Channel Subscriptions' on the new page.  Find 'Additional Services Channels for Red Hat Enterprise Linux 6 for x86_64' and expand it.  Select 'Red Hat Storage Native Client (RHEL Server 6 for x86_64)', then click the Update Subscription button.

Now on the OpenStack machine run:

yum install glusterfs-fuse glusterfs
mkdir -p /mnt/gluster/
mount -t glusterfs <storage VM IP>:/testvol /mnt/gluster

At this point the OpenStack VM should be able to access the gluster file system.
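A quick way to confirm the mount took effect is to check the reported file system type (it should show fuse.glusterfs) and write a test file; the file name here is arbitrary.

df -hT /mnt/gluster
touch /mnt/gluster/test-write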

Configure Glance

To change the path that Glance uses for its file system store, only a single line in /etc/glance/glance-api.conf needs to be modified.

filesystem_store_datadir = /mnt/gluster/glance/images

Now run the following commands:

mkdir -p /mnt/gluster/glance/images
chown -R glance:glance /mnt/gluster/glance/
# create the directory for the instance store
mkdir -p /mnt/gluster/instance/
chown -R nova:nova /mnt/gluster/instance/
service openstack-glance-api restart

At this point Glance should be backed by the gluster file system.  Let's upload an image to Glance and verify this.  A good test image is available here.

glance image-create --name="test" --is-public=true --container-format=ovf --disk-format=qcow2 < f16-x86_64-openstack-sda.qcow2
ls -l /mnt/gluster/glance/images
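The ls should show a file named after the image's UUID, since the Glance filesystem store names image files by their ID.  You can cross-check the UUID against the client's listing:

glance image-list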

Configure Nova

The final step is to configure Nova so that nova-compute uses gluster for its instance store.  The instance store is the temporary area to which the VM image is copied and from which it is booted.  Just as with Glance, configuring Nova to use gluster in this way is a simple one-line file change.  Open the file /etc/nova/nova.conf, find the key instances_path, and change the line to be the following:

instances_path = /mnt/gluster/instance

Now set up the correct paths and permissions (the first two commands are the same ones run in the Glance section, so they are harmless to repeat) and restart nova-compute.

mkdir -p /mnt/gluster/instance/
chown -R nova:nova /mnt/gluster/instance/
service openstack-nova-compute restart

That should be all that is needed.
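As a final check, booting an instance should create a directory for it under the new instance store.  A sketch, assuming the default m1.tiny flavor exists and using the test image uploaded above:

nova boot --flavor m1.tiny --image test testserver
ls -l /mnt/gluster/instance/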

Future Work

The idea behind this is that if Glance and Nova are backed by the same file system, image propagation should be much faster.  In the future I will be looking for a testbed where I can verify this.

Update

To mount the volume automatically on boot, add the following to your /etc/fstab file (but see the boot-ordering caveat in the comments below):

<gluster IP>:/testvol /mnt/gluster glusterfs defaults,_netdev 0 0
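You can verify the fstab entry without rebooting by unmounting the volume and mounting it again by path, which forces mount to consult /etc/fstab:

umount /mnt/gluster
mount /mnt/gluster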

Responses

  1. What were the specific issues that required SELinux to be disabled?

    For mounting the gluster volume, perhaps show /etc/fstab (since mount -t won’t be persistent)

    And for using glusterfs to back Nova instances, does this mean that a single base image in /var/lib/nova/instances/_base would be shared among multiple compute nodes? (Side benefit of this is that only a single retrieval from glance is needed even if the same base image is launched on multiple Compute Nodes, right?)

    Finally… Does having a gluster backed Nova instance store allow you to do live migration of VMs or is that restricted to only when Nova/Cinder volumes are used for guest boot disks?

  2. There is a problem that prevents /etc/fstab from working on boot. gluster comes up after netfs. The problem is documented here:

    https://access.redhat.com/knowledge/solutions/101223

    There is a solution for Red Hat Storage build clients, but not for RHEL 6.4. Obviously this is something a real system would need, because the file system needs to come up before nova-compute does or there will be errors. As a temporary workaround you can add this line to /etc/init.d/openstack-keystone at line 95 (I will paste the line before it and after it for context):

    echo -n $"Starting $prog: "
    mount -t glusterfs 192.168.50.223:/testvol /mnt/gluster/ &> /tmp/error_keystone_gluster.log
    daemon --user keystone --pidfile $pidfile "$exec --config-file $config &>/dev/null & echo \$! > $pidfile"

    Then reboot the system. When it comes back up run:

    yum install policycoreutils-python
    grep -i mount /var/log/audit/audit.log | audit2allow -M mountallow
    semodule -i mountallow.pp

    That should work.

  3. For those looking to mount Red Hat Storage from Fedora note this:

    http://www.gluster.org/2012/05/upgrading-to-glusterfs-3-3-0/

    1) GlusterFS 3.3.0 is not compatible with any earlier released versions. Please make sure that you schedule a downtime before you upgrade.

    Fedora 17 is running 3.2.7 and Red Hat Storage is running 3.3.0. The gluster client on Fedora would have to manually be upgraded, or the NFS client could be used.

  4. John,

    I think you have a typo in the following command:

    gluster volume create testvol:/exp1

    # gluster volume create testvol:/exp1
    Usage: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>] [transport <tcp|rdma>] <NEW-BRICK> ...

    The correct command, in this case, would be:

    # gluster volume create testvol myhost:/exp1
    Creation of volume testvol has been successful. Please start the volume to access data.

    I added myhost to /etc/hosts before running the command.

    I’ll let you know if I find anything else as I go through the instructions.

    Thanks,
    Marcelo

  5. Oops, sorry John, I had Russel's post open in another tab. Thanks!

