
openstack team mailing list archive

Re: Using openstack to manage dedicated servers in a service provider setting


On 27 May 2013 11:02, Chris Bartels <chris@xxxxxxxxxxxxxxxxxxxxxx> wrote:
> I had originally wanted to deploy full server sized KVM instances and rent
> VPS' that way, but it was brought to my attention that a certain market
> segment which I'm targeting- tech startups, who are testing apps on these
> rentals, are unable to get reliable metrics because of the software between
> their app & the hardware. So I've shifted gears to offering dedicated
> servers instead, to remove that layer of interference.
> Couldn't I re-flash the BIOS between each tenant to be sure there isn't any
> problem with it?

Unless you flash the BIOS with separate hardware (not by running the
flasher on the potentially compromised hardware itself), no. And even
then you'll need to be sure you flash every single EEPROM, not just
the system board BIOS, and you'll need to make sure you catch any that
have been toggled into readonly mode by an attacker and pull and
replace them. Note that a simple examination of device drivers /
system firmware won't necessarily cover every power-on EEPROM in the
system :).

As for your tech startups, unless they are going to be running on bare
metal - e.g. their competitive advantage is going to be datacentre
operations efficiency - they are most likely going to be deploying on
a virtual substrate themselves. I would validate the purported
inability to get good metrics: give them a KVM instance with a
reserved machine, and the only noise will be KVM platform management
(vs other tenants). That should be able to deliver very robust (within
a few %) estimates of capacity and performance for nearly any
workload. The cases where it cannot - well, find those cases.

To do such a validation, I would pick a metric you think would be
distorted - e.g. IOPS - and find or write a bench test for it, then
use that from within the KVM instance on a machine (with the full
machine reserved, raw backing devices, etc.) and then again on the
same machine with no KVM layer. For the metrics to be invalid, you'd
need to obtain not just different results, but non-predictably
different results. E.g. consistently 30% slower would be a nuisance
but would still allow prediction of behaviour on bare metal. But
sometimes 1% slower and sometimes 40% slower would make it much
harder to use.
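To make that comparison concrete, here's a rough sketch of how you
might judge the paired results (the IOPS figures and the helper name
are made up for illustration, not measured data): compute the
guest/bare-metal ratio per run, then look at how much that ratio
varies.

```python
import statistics

def predictability(bare_metal, virtualized):
    """Compare paired benchmark runs (e.g. IOPS) taken on bare metal
    and inside a KVM guest on the same hardware.

    Returns (mean ratio, coefficient of variation of the ratio).
    A consistent slowdown (low CV) still lets you predict bare-metal
    behaviour from in-guest numbers; an erratic one (high CV) does
    not."""
    ratios = [v / b for b, v in zip(bare_metal, virtualized)]
    mean = statistics.mean(ratios)
    cv = statistics.stdev(ratios) / mean
    return mean, cv

# Hypothetical IOPS figures for repeated runs of the same bench test.
consistent = predictability([50000, 52000, 49000], [35000, 36500, 34200])
erratic = predictability([50000, 52000, 49000], [49500, 31000, 44000])

print("consistent: mean ratio %.2f, CV %.2f" % consistent)
print("erratic:    mean ratio %.2f, CV %.2f" % erratic)
```

In the first case the guest is ~30% slower every time, so bare-metal
capacity is easy to extrapolate; in the second the ratio bounces
around, which is the situation that would actually invalidate the
metrics.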

Robert Collins <rbtcollins@xxxxxx>
Distinguished Technologist
HP Cloud Services
