Posted by: John Bresnahan | March 6, 2013

A Look At Performance When Glance Is Backed By Gluster

In a previous post I described how to configure OpenStack Nova and Glance to be backed by Red Hat Storage (Gluster).  Here I will push on that thought a bit by looking at the performance of Glance when backed by Gluster in a couple of different configurations.
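As a reminder of the basic idea, Glance's default filesystem store simply points at a directory where a Gluster volume is mounted.  A minimal sketch of the relevant glance-api.conf lines (the mount point here is an assumption, not necessarily the path used in these tests):

default_store = file
filesystem_store_datadir = /mnt/gluster/glance/images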

Backing Glance with Gluster has a few advantages:

  1. High Availability.  Gluster can be configured with various levels of redundancy and replication.  By having Glance store its data in Gluster, these features are passed up to Glance.
  2. Glance Server Horizontal Scaling.  Because Gluster is a shared file system, many Glance servers can mount it and have access to the same files and namespace.  Therefore many Glance services can sit behind a virtual IP, a load balancer, or DNS round-robin (DNSRR), and thus handle higher client loads (a hypothetical load-balancer sketch follows this list).
  3. Cheaper Image Propagation.  When configured in conjunction with Nova, the overhead of image propagation can be greatly reduced.
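As a purely hypothetical illustration of the second point, several Glance API servers that all mount the same Gluster volume could sit behind a simple HAProxy front end.  The names and addresses below are made up for illustration; 9292 is Glance's default API port:

# hypothetical HAProxy front end for Glance API servers sharing one Gluster-backed store
frontend glance_api
    bind 192.0.2.10:9292
    mode http
    default_backend glance_servers

backend glance_servers
    mode http
    balance roundrobin
    server glance1 192.0.2.21:9292 check
    server glance2 192.0.2.22:9292 check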

However, before jumping to too many conclusions about the miracles of this dynamic duo, I decided to look at some of the potential costs.  Specifically, I looked at the effects on performance.

I got my hands on 4 beefy machines (64GB of RAM and 64 processors each!).  I configured two of them to be Gluster storage bricks, another to be a Glance server, and the final one to be a Glance client.

Baseline Numbers

Network

The first thing I wanted to do was get an idea of the performance of the local file system and the network in order to establish a high-water mark.  I used iperf to measure the TCP network speed between all 4 machines.  After several trials the results were consistently between 940 and 942 megabits per second (a common result for untuned gigabit Ethernet).
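For reference, this is the kind of iperf invocation used for such a measurement (the host name is a placeholder):

iperf -s                          # run on the receiving node
iperf -c gluster-node-1 -t 30     # run on the sending node; reports TCP throughput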

Local Disk

For the local disk I created a 1GB file with the following command:

dd if=/dev/urandom of=1GBtestfile count=1024 bs=1048576

I then measured the time it took to copy that file to another location on the same file system.  Because most of the data stayed in kernel memory buffers the results were quite fast.  To mitigate the effects of caching I also measured the time it took to do the copy followed by a sync, using commands roughly like the sketch below.  The average of 10 trials follows:
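A rough sketch of the two timing commands (the copy destination name is just illustrative):

time cp 1GBtestfile 1GBtestfile.copy                        # copy only; caches absorb most of the I/O
time sh -c 'cp 1GBtestfile 1GBtestfile.copy; sync'          # copy followed by a sync to flush to disk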

[Figure: disk_speed1 — local disk copy throughput, with and without sync]

Without sync the average is almost 6 gigabits/second.  With the sync it is 818 megabits/second.

Gluster FS

On the Gluster storage nodes I set up two volumes, one distributed and the other replicated.  I added the same 1GB file used above to each of the volumes.  Then, for each volume type I performed two tests:

  1. Copy the file from Gluster to local storage
  2. Copy the file from the Gluster volume to the same Gluster volume

I did this both with and without syncing the data; the results are below.

[Figure: gfs_speed1 — Gluster copy throughput for the distributed and replicated volumes, with and without sync]
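For completeness, a distributed volume and a two-brick replicated volume like the ones used here can be created roughly as follows (the host names and brick paths are assumptions):

# distributed volume: files are spread across the two bricks
gluster volume create dist-vol gluster1:/bricks/dist gluster2:/bricks/dist
# replicated volume: every file is stored on both bricks
gluster volume create repl-vol replica 2 gluster1:/bricks/repl gluster2:/bricks/repl
gluster volume start dist-vol
gluster volume start repl-vol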

Glance

By looking at the above results we can begin to guess how a Gluster-backed Glance service would perform in this setup.  Glance will copy a file out of Gluster, then stream it over HTTP to the client, where it will be written to local disk.  Thus the performance should be at best the same as the GFS-to-local cases shown above.  In the next experiment Glance was configured with a local disk and with Gluster (both volume types).  The 1GB file was uploaded to Glance and the time it took to download it in each case was measured; a sketch of how this can be done with the glance command-line client is below, followed by the average of 10 trials.
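The image name, formats, and download path in this sketch are placeholders, not necessarily what was used for the test:

# upload the test file as an image
glance image-create --name perf-test --disk-format raw --container-format bare --file 1GBtestfile
# time the download of that image to local disk (<image-id> comes from the create step)
time glance image-download --file /tmp/perf-test.img <image-id>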

[Figure: glance_speed1 — Glance download times backed by local disk and by Gluster (both volume types), with and without sync]

As shown, when Glance is backed by Gluster there is a performance hit, but it is fairly small when weighed against the feature set that Gluster offers Glance, especially since we are not yet looking at replicated Glance services sharing the same Gluster file system (see Future Work).  In the without-sync case the local disk is about 20% faster; in the with-sync case it is about 15% faster.

Now let's look at the overhead added by Glance.  The following graphs compare a copy from Gluster to local disk with a download from Glance backed by Gluster; the difference is the overhead added by Glance.

[Figure: glance_overhead — Gluster-to-local copy vs. download from Gluster-backed Glance]

I thought some might find it convenient to see all the results in one place, so here is a final graph showing that (note that Local Disk in the without-sync case is off the chart; the same scale was kept to better display the more interesting results):

[Figure: all_results1 — all results on a single chart]

Future Work

This is a very simple look at the performance of integrating the two systems: just a single client against a single Glance service backed by a small Gluster cluster.  In the future it would be useful to study a heavy load of simultaneous clients hitting an increasing number of Glance servers backed by a slightly larger Red Hat Storage cluster.  I think those circumstances will show some serious advantages to this setup.  Hopefully I will someday have the resources to run that study, and this first post will provide some context for it.

I also hope to study the effect of copying a file directly out of Glance's backing store and into Nova, thus eliminating the overhead introduced by Glance.
