
openstack team mailing list archive

Re: [GLANCE] Performance testing tool beta test

 

This is a really good idea. On a related note, I used swift-bench to see how my current configuration was doing. The latency for writes was higher than I expected, but how do I know whether that is reasonable or whether there is some problem? It would be great if there were published benchmark numbers for swift-bench and tools like it, but they could only be produced by someone with a deep architectural understanding of Swift and a knowledge of its performance tradeoffs. Obviously such numbers would also have to specify hardware specs and the like to be useful.
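For what it's worth, below is a rough, untested sketch of the kind of quick sanity check I had in mind alongside swift-bench: time a handful of PUTs with python-swiftclient. The auth URL, credentials and container name are placeholders for whatever your cluster uses.

import time

from swiftclient.client import Connection

AUTH_URL = 'http://127.0.0.1:8080/auth/v1.0'   # placeholder
USER = 'test:tester'                           # placeholder
KEY = 'testing'                                # placeholder
CONTAINER = 'latency-check'
NUM_PUTS = 20
PAYLOAD = b'x' * (1024 * 1024)                 # 1MB object

conn = Connection(authurl=AUTH_URL, user=USER, key=KEY)
conn.put_container(CONTAINER)

timings = []
for i in range(NUM_PUTS):
    start = time.time()
    conn.put_object(CONTAINER, 'obj-%d' % i, PAYLOAD)
    timings.append(time.time() - start)

timings.sort()
print('min/median/max PUT latency: %.3f / %.3f / %.3f sec' % (
    timings[0], timings[len(timings) // 2], timings[-1]))

Numbers like these are only meaningful next to published reference numbers on known hardware, which is exactly what I'm missing.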

Having such benchmarks would be good for other reasons as well. Users are likely to have projects where performance is important and OpenStack is only one of the technologies under consideration. Having as much useful performance information as possible can only help push OpenStack forward (assuming the performance is good :-) ).

 -David

On 1/17/2012 2:08 AM, Jesse Andrews wrote:
Nice!

Jay - are you expecting folks to run this on the same server or in the
same rack as the glance server? (e.g., do you expect the transfer
between the client and glance to have a noticeable impact on
performance?)

Perhaps if people are going to share these numbers, they should also
share baseline I/O benchmarks (network: nuttcp between client and
server; disk: bonnie++ on storage systems that support it)?
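
If nuttcp isn't handy, a rough Python stand-in like the one below could
still give a ballpark client-to-server network number (the port and
transfer size are arbitrary; run "server" on the glance box and
"client <host>" on the client box):

# Rough stand-in for nuttcp: push a fixed amount of data over a plain
# TCP socket and report MB/sec. Not a real benchmark tool, just a
# sanity check for the link between client and glance server.
import socket
import sys
import time

PORT = 5201              # arbitrary
CHUNK = 64 * 1024
TOTAL_MB = 256

def server():
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(('', PORT))
    sock.listen(1)
    conn, _addr = sock.accept()
    received = 0
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    print('received %d MB' % (received / (1024 * 1024)))

def client(host):
    sock = socket.create_connection((host, PORT))
    chunk = b'x' * CHUNK
    start = time.time()
    for _ in range(TOTAL_MB * 1024 * 1024 // CHUNK):
        sock.sendall(chunk)
    sock.close()
    # Note: this times only the sending side, so it can read slightly high.
    elapsed = time.time() - start
    print('sent %d MB in %.2f sec (%.2f MB/sec)' % (
        TOTAL_MB, elapsed, TOTAL_MB / elapsed))

if __name__ == '__main__':
    if sys.argv[1] == 'server':
        server()
    else:
        client(sys.argv[2])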

J

On Mon, Jan 16, 2012 at 10:30 PM, Jay Pipes <jaypipes@xxxxxxxxx> wrote:
Hi all,

If you're interested in helping me gather throughput numbers for
various Glance installations, please contact me. I wrote a little tool
tonight that gathers some throughput details after attempting to
concurrently add images to a Glance server.

You can see the output of the tool below:

jpipes@librebox:~/repos/tempest/tempest$ ./tools/glance-perf -v
--processes=4 --create-num=12 --create-size-min=20
Running ./tools/glance-perf against librebox
  PROCESSES: 4
  WORKLOAD: Create 12 images with size between 20MB and 100MB
  CREATING LOCAL IMAGE FILES (this may take a while)
  Creating local image file /tmp/tmpKVyclj/0.img of size 25MB ....... OK
  Creating local image file /tmp/tmpKVyclj/1.img of size 56MB ....... OK
  Creating local image file /tmp/tmpKVyclj/2.img of size 54MB ....... OK
  Creating local image file /tmp/tmpKVyclj/3.img of size 53MB ....... OK
  Creating local image file /tmp/tmpKVyclj/4.img of size 54MB ....... OK
  Creating local image file /tmp/tmpKVyclj/5.img of size 72MB ....... OK
  Creating local image file /tmp/tmpKVyclj/6.img of size 20MB ....... OK
  Creating local image file /tmp/tmpKVyclj/7.img of size 56MB ....... OK
  Creating local image file /tmp/tmpKVyclj/8.img of size 65MB ....... OK
  Creating local image file /tmp/tmpKVyclj/9.img of size 74MB ....... OK
  Creating local image file /tmp/tmpKVyclj/10.img of size 80MB ....... OK
  Creating local image file /tmp/tmpKVyclj/11.img of size 58MB ....... OK
  DONE CREATING IMAGE FILES
  CREATING WORKER POOL
  CREATING IMAGES VIA WORKERS
Total time to add images: 7.63540 sec
              throughput: 87.35624 MB/sec
  REMOVING LOCAL IMAGE FILES
  REMOVING IMAGES FROM GLANCE
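
In case it helps to see what's being measured, here's a rough sketch of
the same idea (this is not the actual glance-perf code): generate a few
random image files from /dev/urandom, push them into Glance from a
worker pool, and divide total MB by wall-clock time. It uses
python-glanceclient with a placeholder endpoint and token.

# Sketch only -- not the actual glance-perf tool.
import multiprocessing
import os
import random
import time

from glanceclient import Client

GLANCE_ENDPOINT = 'http://localhost:9292'   # placeholder
AUTH_TOKEN = 'ADMIN'                        # placeholder
NUM_IMAGES = 12
NUM_PROCESSES = 4
MIN_MB, MAX_MB = 20, 100

def make_image_file(tmpdir, index):
    # Build a local image file of random size from /dev/urandom.
    size_mb = random.randint(MIN_MB, MAX_MB)
    path = os.path.join(tmpdir, '%d.img' % index)
    with open('/dev/urandom', 'rb') as rand, open(path, 'wb') as out:
        for _ in range(size_mb):
            out.write(rand.read(1024 * 1024))
    return path, size_mb

def upload(path):
    # Each worker builds its own client connection.
    glance = Client('1', GLANCE_ENDPOINT, token=AUTH_TOKEN)
    with open(path, 'rb') as f:
        glance.images.create(name=os.path.basename(path),
                             disk_format='raw',
                             container_format='bare',
                             data=f)

if __name__ == '__main__':
    tmpdir = '/tmp/glance-perf-sketch'
    if not os.path.isdir(tmpdir):
        os.makedirs(tmpdir)
    files = [make_image_file(tmpdir, i) for i in range(NUM_IMAGES)]
    total_mb = sum(mb for _, mb in files)

    pool = multiprocessing.Pool(NUM_PROCESSES)
    start = time.time()
    pool.map(upload, [path for path, _ in files])
    elapsed = time.time() - start
    print('added %d MB in %.2f sec: %.2f MB/sec' % (
        total_mb, elapsed, total_mb / elapsed))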

Anyway, if you're interested, I'm looking for feedback on a number of things:

a) The types of workloads to add to the tool. It currently tests
concurrent uploads of images, but obviously I want to test real-world
read/write workloads, so I'm interested in feedback on what kind of
read/write ratios you see in production (# index vs. # add vs.
# delete, etc.). There's a rough sketch of what I mean after this list.
b) I've built the tool to create image files of a configurable size
from /dev/urandom output (shown above are random image sizes between
20MB and 100MB). I'd like some feedback on the size of real-world
images in use in production systems, and then to see what impact image
size has on throughput.
c) If you have test systems that you wouldn't mind running my little
tool against, I'm interested in gathering feedback on the throughput
of the various Glance backend drivers -- Swift, S3, filesystem, RADOS,
etc
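
For (a), here's a rough sketch of the kind of mixed workload I mean
(again python-glanceclient with a placeholder endpoint and token; the
80/15/5 split below is just an example, not a claim about real
deployment ratios):

# Sketch of a configurable index/add/delete mix.
import io
import random

from glanceclient import Client

GLANCE_ENDPOINT = 'http://localhost:9292'   # placeholder
AUTH_TOKEN = 'ADMIN'                        # placeholder

# (operation name, relative weight) -- example ratio only
RATIOS = [('index', 80), ('add', 15), ('delete', 5)]

def pick_operation():
    # Weighted random choice over the configured ratios.
    total = sum(weight for _, weight in RATIOS)
    roll = random.uniform(0, total)
    for name, weight in RATIOS:
        roll -= weight
        if roll <= 0:
            return name
    return RATIOS[-1][0]

def run_workload(num_requests=100):
    glance = Client('1', GLANCE_ENDPOINT, token=AUTH_TOKEN)
    created = []
    for _ in range(num_requests):
        op = pick_operation()
        if op == 'index':
            list(glance.images.list())
        elif op == 'add':
            image = glance.images.create(name='workload-image',
                                         disk_format='raw',
                                         container_format='bare',
                                         data=io.BytesIO(b'0' * 1024 * 1024))
            created.append(image.id)
        elif op == 'delete' and created:
            glance.images.delete(created.pop())

if __name__ == '__main__':
    run_workload()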

Lemme know by pinging me on email.

Cheers,
-jay

_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@xxxxxxxxxxxxxxxxxxx
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


