Re: Some insight into the number of instances Nova needs to spin up...

Bret Piatt wrote:
> We can break this down into a list of measurable test cases.  Many of these
> tests aren't just single-host system tests.  They require an integrated
> deployment to find and eliminate the bottleneck.  I'm certain I'm missing
> additional items we would want to measure, so consider this an incomplete,
> off-the-top-of-my-head list.
> 
> Rate of change tests:
> 1. Maximum rate of change per host machine -- this could be create, delete,
> migrate, snapshot, backup.
> 2. Maximum number of host machines a single host controller machine can
> sustain at their maximum rate of change.
> 3. Maximum number of host machines a single glance machine can sustain at
> their maximum rate of change.
> 4. Maximum number of requests per second and total buffer of requests on the
> message queue per machine.
> 5. Maximum number of storage volume operations per storage controller
> machine.
> 
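A minimal sketch of what test 1 could look like in practice, assuming a
python-novaclient-style client object (the nova.servers.create/get/delete
calls and the polling loop are illustrative assumptions, not a fixed API):

    import time

    def measure_churn_rate(nova, image, flavor, cycles=20):
        """Time repeated create/delete cycles against one host and return
        the sustained rate of change in operations per second."""
        start = time.time()
        for i in range(cycles):
            server = nova.servers.create(name='churn-%d' % i,
                                         image=image, flavor=flavor)
            # Poll until the instance reaches ACTIVE before deleting it,
            # so we count completed operations, not just accepted requests.
            while nova.servers.get(server.id).status != 'ACTIVE':
                time.sleep(1)
            nova.servers.delete(server.id)
        elapsed = time.time() - start
        return (2 * cycles) / elapsed  # creates + deletes per second

Pointing the same harness at migrate, snapshot, and backup calls would
cover the other operations listed in test 1.
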
> Scale tests:
> 1. Maximum number of VMs on a host machine.
> 2. Maximum number of VMs per host controller.
> 3. Maximum number of storage volumes attached to a host machine.
> 4. Maximum number of storage volumes per storage controller machine.
> 5. Maximum number of VM records in the cloud database.
> 6. Maximum number of storage volume records in the cloud database.
> 7. Maximum number of VM images managed per glance machine.
> 
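Scale test 1 admits a similar sketch: boot instances until one fails to
become ACTIVE and report the count. The client object and the simple
failure condition are again assumptions for illustration:

    import time

    def find_max_vms(nova, image, flavor):
        """Boot instances one at a time until one fails to reach ACTIVE;
        the number of successful boots approximates the per-host maximum."""
        count = 0
        while True:
            try:
                server = nova.servers.create(name='scale-%d' % count,
                                             image=image, flavor=flavor)
            except Exception:
                return count  # API refused the request: limit reached
            while nova.servers.get(server.id).status == 'BUILD':
                time.sleep(1)
            if nova.servers.get(server.id).status != 'ACTIVE':
                return count  # instance went to ERROR: host is saturated
            count += 1
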
> I understand that these maximum numbers will vary greatly depending on what
> "a machine" is, the size of the test VM image, the network configuration,
> and many other factors.  I propose we get a test bed set up to measure our
> own CloudMIPS (call it whatever we want), like BogoMIPS, so we can measure
> the performance of the codebase over time.  We can run the tests each
> weekend on the trunk as of 00:00 GMT Saturday and, by tracking the results
> against the code merged each week, ensure we're headed in the right
> direction; if not, we'll be consciously aware of which changes impacted
> system performance.  I'm up for organizing the effort -- finding hardware,
> DC space, and getting operational support to manage it.  Is anyone
> interested in participating?
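
One way to reduce the individual test maxima to a single CloudMIPS-style
number that can be tracked week over week is a geometric mean of each
metric normalized against a fixed baseline run; this normalization scheme
is a suggestion for illustration, not part of the proposal:

    import math

    def cloudmips(results, baseline):
        """Composite score: geometric mean of each metric relative to a
        fixed baseline run, scaled so the baseline itself scores 100."""
        ratios = [float(results[k]) / baseline[k] for k in baseline]
        return 100 * math.exp(sum(math.log(r) for r in ratios)
                              / len(ratios))

The geometric mean keeps any single metric from dominating the composite;
a week-over-week drop in the score flags a regression, and the per-metric
ratios point at which test moved.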

This is parallel to the effort Jay described at the summit, about doing
automated, Hudson-driven, continuous feature and performance testing on
trunk or user-submitted branches:

https://blueprints.launchpad.net/nova/+spec/testing-hudson-integration

We'll need all the hardware we can get to make that dream a reality, so
we should unite the efforts rather than fragmenting them. I hope we can
work on that for Cactus.

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack
