openstack team mailing list archive
Message #15877
Re: [openstack-dev] [nova] Disk attachment consistency
On Aug 14, 2012, at 10:00 PM, Chuck Thier <cthier@xxxxxxxxx> wrote:
> <snip>
> I could get behind this, and it was brought up by others in our group
> as a more feasible short-term solution. I have a couple of concerns
> with this. It may cause just as much confusion if the api can't
> reliably determine which device a volume is attached to. I'm also
> curious as to how well this will work with Xen, and hope some of the
> citrix folks will chime in. From an api standpoint, I think it would
> be fine to make it optional, as any client that is using old api
> contract will still work as intended.
This will continue to work as well as it currently does with Xen. We can reliably determine which devices are known about and pass back the next one; my patch under review below does exactly that.
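The "pass back the next one" logic could be sketched roughly like this (a hypothetical helper for illustration only; the real implementation is in the linked review):

```python
import string

def next_device_name(used, prefix="/dev/xvd"):
    """Return the first unused device name, e.g. /dev/xvdb if xvda is taken.

    'used' is the set of device names already known to be attached.
    """
    for letter in string.ascii_lowercase:
        candidate = prefix + letter
        if candidate not in used:
            return candidate
    raise ValueError("no device names available")
```

For a KVM guest you would pass prefix="/dev/vd" instead; the scan itself is hypervisor-agnostic.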
>>
>> (review at https://review.openstack.org/#/c/10908/)
>>
>> <snip>
>> First question: should we return this magic path somewhere via the api? It would be pretty easy to have horizon generate it but it might be nice to have it show up. If we do return it, do we mangle the device to always show the consistent one, or do we return it as another parameter? guest_device perhaps?
>>
>> Second question: what should happen if someone specifies /dev/xvda against a kvm cloud or /dev/vda against a xen cloud?
>> I see two options:
>> a) automatically convert it to the right value and return it
>
> I thought that it already did this, but I would have to go back and
> double check. But it seemed like for xen at least, if you specify
> /dev/vda, Nova would change it to /dev/xvda.
That may be true; I believe for libvirt we just accept /dev/xvdc, since it is only interpreted as a label.
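Option a) above, automatically converting the name to the hypervisor's convention, might look something like this (a hypothetical sketch, handling only the vd/xvd prefixes discussed in this thread):

```python
def normalize_device(device, hypervisor):
    """Map a requested device name to the convention the hypervisor expects.

    Only the vd <-> xvd case from this thread is handled; other prefixes
    (e.g. sd, hd) pass through unchanged.
    """
    basename = device.rsplit("/", 1)[-1]
    if hypervisor == "xen" and basename.startswith("vd"):
        basename = "x" + basename          # /dev/vda -> /dev/xvda
    elif hypervisor == "kvm" and basename.startswith("xvd"):
        basename = basename[1:]            # /dev/xvda -> /dev/vda
    return "/dev/" + basename
```

Whether the API should then return the converted name as the canonical device, or expose it as a separate field, is exactly the first question above.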
>>
>
> <snip>
>
> Xen Server 6.0 has a limit of 16 virtual devices per guest instance.
> Experimentally it also expects those to be /dev/xvda - /dev/xvdp. You
> can't for example attach a device to /dev/xvdq, even if there are no
> other devices attached to the instance. If you attempt to do this,
> the volume will go into the attaching state, fail to attach, and then
> fall back to the available state (this can be a bit confusing to new
> users who try to do so). Does anyone know if there are similar
> limitations for KVM?
There are no limitations like this AFAIK. However, in KVM it is possible
to exhaust virtio minor device numbers by repeatedly detaching and
attaching a device if the guest kernel is < 3.2.
>
> Also if you attempt to attach a volume to a device that already
> exists, it will silently fail and go back to available as well. In
> this new scheme should it fail like that, or should it attempt to
> attach it to the next available device, or error out? Perhaps the
> better question here, for this initial consistency, is: is the goal
> to be consistent only when no device is sent, or also when the
> device is sent?
My review above addresses this by raising an error if you try to attach
to an existing device. I think this is preferable: only do the
auto-assign if it is specifically requested.
>
> There was another idea, also brought up in our group. Would it be
> possible to add a call that would return a list of available devices
> to be attached to? In the case of Xen, it would return a list of
> devices /dev/xvda-p that were not used. In the case of KVM, it would
> just return the next available device name. At least in this case,
> user interfaces and command line tools could use this to validate the
> input the user provides (or auto generate the device to be used if the
> user doesn't select a device).
This is definitely a possibility, although it seems like a separate feature.
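For what it's worth, such a call might behave along these lines (a rough sketch under the assumptions in this thread: a fixed 16-name pool for Xen, effectively unbounded names for KVM; none of these helpers exist in Nova today):

```python
import string

def available_devices(used, hypervisor):
    """Return device names a new volume could attach to.

    Xen: the full list of unused names from the fixed /dev/xvda-p pool.
    KVM: just the next free name, since there is no comparable hard limit.
    """
    if hypervisor == "xen":
        pool = ["/dev/xvd" + c for c in string.ascii_lowercase[:16]]
        return [d for d in pool if d not in used]
    for c in string.ascii_lowercase:
        candidate = "/dev/vd" + c
        if candidate not in used:
            return [candidate]
    return []
```

A client like horizon could then validate user input against this list, or simply pick the first entry when the user leaves the device blank.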
Vish