yahoo-eng-team team mailing list archive
Message #85159
[Bug 1864279] Re: Unable to attach more than 6 scsi volumes
FYI, for Bionic this isn't a trivial backport.
The change as it was added later would need at least
commit 932862b8 conf: Rework and rename virDomainDeviceFindControllerModel
commit 6ae6ffd8 qemu: Introduce qemuDomainFindSCSIControllerModel
Maybe more.
It is not un-doable, but it gets into an area where, slowly but surely, the regression risk might outweigh the benefit. 18.04 has been out for almost 3 years now; the fact that this is only coming up now suggests it can't be the most severe issue possible. Many (but sadly not all) use cases have a workaround: specify explicit device IDs.
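The explicit-ID workaround mentioned above could look like this in the domain XML (a hedged sketch; the source, controller index, and unit value are placeholders and depend on the deployment — the point is that pinning the address lets you skip the reserved unit 7 yourself):

```xml
<disk type='block' device='disk'>
  <source dev='/dev/example-volume'/>
  <target dev='sdh' bus='scsi'/>
  <!-- Explicit address: jump from unit 6 to unit 8, avoiding unit 7,
       which is occupied by the SCSI controller itself. -->
  <address type='drive' controller='0' bus='0' target='0' unit='8'/>
</disk>
```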
And finally - at least for the case reported here fixing it in Bionic wouldn't even help as you want/need it for UCA based on later versions.
On the bright side, the versions in UCA that you ask for have those
reworks applied. There the code should match more easily.
I'm willing to give this a deeper look IF someone actually cares about
this in Bionic (not Bionic-UCA-*). So I'll set the bug task there to
Won't Fix and would ask anyone affected to update the bug with an
argument for why we really need it (that reasoning will be needed for an
SRU anyway).
** Changed in: libvirt (Ubuntu Bionic)
Status: Triaged => Won't Fix
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1864279
Title:
Unable to attach more than 6 scsi volumes
Status in Ubuntu Cloud Archive:
New
Status in OpenStack Compute (nova):
Won't Fix
Status in libvirt package in Ubuntu:
Fix Released
Status in libvirt source package in Bionic:
Won't Fix
Status in libvirt source package in Focal:
Fix Released
Status in libvirt source package in Groovy:
Fix Released
Status in libvirt source package in Hirsute:
Fix Released
Bug description:
A SCSI volume with unit number 7 cannot be attached because of this
libvirt check:
https://github.com/libvirt/libvirt/blob/89237d534f0fe950d06a2081089154160c6c2224/src/conf/domain_conf.c#L4796
Nova automatically increases the volume unit number by 1, so when I attach the 7th volume to a VM I get this error:
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [req-156a4725-279d-4173-9f11-85125e4a3e47] [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] Failed to attach volume at mountpoint: /dev/sdh: libvirt.libvirtError: Requested operation is not valid: Domain already contains a disk with that address
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] Traceback (most recent call last):
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 1810, in attach_volume
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] guest.attach_device(conf, persistent=True, live=live)
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] File "/usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py", line 305, in attach_device
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] self._domain.attachDeviceFlags(device_xml, flags=flags)
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 190, in doit
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] result = proxy_call(self._autowrap, f, *args, **kwargs)
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 148, in proxy_call
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] rv = execute(f, *args, **kwargs)
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 129, in execute
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] six.reraise(c, e, tb)
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] raise value
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 83, in tworker
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] rv = meth(*args, **kwargs)
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] File "/usr/lib/python3/dist-packages/libvirt.py", line 605, in attachDeviceFlags
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] libvirt.libvirtError: Requested operation is not valid: Domain already contains a disk with that address
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f]
After patching libvirt driver to skip unit 7 I can attach more than 6
volumes.
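The "skip unit 7" patch described above can be sketched roughly like this (a hypothetical helper, not the actual Nova code; the reserved unit number corresponds to the SCSI controller's own target in the libvirt check linked in the description):

```python
# Unit 7 is occupied by the SCSI controller itself on common controller
# models, so libvirt rejects a disk placed at that address.
RESERVED_CONTROLLER_UNIT = 7

def next_scsi_unit(used_units):
    """Return the lowest free SCSI unit number, skipping the reserved unit."""
    unit = 0
    while unit in used_units or unit == RESERVED_CONTROLLER_UNIT:
        unit += 1
    return unit

# Attaching eight disks in a row: the 8th lands on unit 8, not 7.
used = set()
for _ in range(8):
    used.add(next_scsi_unit(used))
```

With this allocation scheme the sequence of assigned units is 0-6 and then 8, so the 7th attach no longer collides with the controller's address.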
ii nova-compute 2:20.0.0-0ubuntu1~cloud0
ii nova-compute-kvm 2:20.0.0-0ubuntu1~cloud0
ii nova-compute-libvirt 2:20.0.0-0ubuntu1~cloud0
ii libvirt0:amd64 5.4.0-0ubuntu5~cloud0
ii librbd1 14.2.4-1bionic
ii libvirt-daemon-driver-storage-rbd 5.4.0-0ubuntu5~cloud0
ii python-rbd 14.2.4-1bionic
ii python3-rbd 14.2.4-1bionic
To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1864279/+subscriptions