yahoo-eng-team mailing list archive
Message #85145
[Bug 1864279] Re: Unable to attach more than 6 scsi volumes
From the libvirt package POV this is fixed in >=Focal (20.04), and the only affected release left is Bionic (18.04, libvirt 4.0).
Fixing it in Bionic is IMHO worth considering but low priority (if users can influence the configuration, they can set an explicit address with unit 8-16 and it works; only the default no-address use case is broken).
Nevertheless, fixing it in Bionic won't help your cases on UCA with 5.0/5.4 - those would have to be done by the UCA team. Therefore I'm adding back a cloud-archive task.
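The workaround mentioned above can be sketched as a libvirt disk definition carrying an explicit drive address; the device name and unit number here are illustrative, assuming a virtio-scsi controller at index 0:

```xml
<disk type='network' device='disk'>
  <target dev='sdh' bus='scsi'/>
  <!-- Explicit address skips the reserved unit 7 on the controller -->
  <address type='drive' controller='0' bus='0' target='0' unit='8'/>
</disk>
```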
** Also affects: cloud-archive
Importance: Undecided
Status: New
** Also affects: libvirt (Ubuntu Bionic)
Importance: Undecided
Status: New
** Also affects: libvirt (Ubuntu Focal)
Importance: Undecided
Status: New
** Also affects: libvirt (Ubuntu Hirsute)
Importance: Undecided
Status: New
** Also affects: libvirt (Ubuntu Groovy)
Importance: Undecided
Status: New
** Changed in: libvirt (Ubuntu Groovy)
Status: New => Fix Released
** Changed in: libvirt (Ubuntu Focal)
Status: New => Fix Released
** Changed in: libvirt (Ubuntu Hirsute)
Status: New => Fix Released
** Changed in: libvirt (Ubuntu Bionic)
Status: New => Triaged
** Changed in: libvirt (Ubuntu Bionic)
Importance: Undecided => Low
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1864279
Title:
Unable to attach more than 6 scsi volumes
Status in Ubuntu Cloud Archive:
New
Status in OpenStack Compute (nova):
Won't Fix
Status in libvirt package in Ubuntu:
Fix Released
Status in libvirt source package in Bionic:
Triaged
Status in libvirt source package in Focal:
Fix Released
Status in libvirt source package in Groovy:
Fix Released
Status in libvirt source package in Hirsute:
Fix Released
Bug description:
A SCSI volume with unit number 7 cannot be attached because of this
libvirt check:
https://github.com/libvirt/libvirt/blob/89237d534f0fe950d06a2081089154160c6c2224/src/conf/domain_conf.c#L4796
Nova automatically increases the volume unit number by 1, and when I attach the 7th volume to a VM I get this error:
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [req-156a4725-279d-4173-9f11-85125e4a3e47] [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] Failed to attach volume at mountpoint: /dev/sdh: libvirt.libvirtError: Requested operation is not valid: Domain already contains a disk with that address
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] Traceback (most recent call last):
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 1810, in attach_volume
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] guest.attach_device(conf, persistent=True, live=live)
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] File "/usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py", line 305, in attach_device
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] self._domain.attachDeviceFlags(device_xml, flags=flags)
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 190, in doit
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] result = proxy_call(self._autowrap, f, *args, **kwargs)
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 148, in proxy_call
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] rv = execute(f, *args, **kwargs)
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 129, in execute
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] six.reraise(c, e, tb)
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] raise value
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 83, in tworker
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] rv = meth(*args, **kwargs)
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] File "/usr/lib/python3/dist-packages/libvirt.py", line 605, in attachDeviceFlags
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f] libvirt.libvirtError: Requested operation is not valid: Domain already contains a disk with that address
2020-02-21 09:12:53.309 3572 ERROR nova.virt.libvirt.driver [instance: 3532baf6-a0a4-4a81-84f9-3622c713435f]
After patching nova's libvirt driver to skip unit 7, I can attach more
than 6 volumes.
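The fix described above can be sketched as a small unit-number allocator that skips the reserved unit. This is an illustrative standalone function, not nova's actual implementation (nova tracks device addresses differently internally):

```python
# Unit 7 is rejected by libvirt: on a SCSI bus it is conventionally
# reserved for the host adapter (controller) itself.
RESERVED_SCSI_UNIT = 7

def next_free_unit(used_units):
    """Return the lowest free SCSI unit number, skipping the reserved unit 7."""
    unit = 0
    while unit in used_units or unit == RESERVED_SCSI_UNIT:
        unit += 1
    return unit
```

With units 0-6 already taken, the next attachment gets unit 8 instead of the rejected unit 7.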
ii nova-compute 2:20.0.0-0ubuntu1~cloud0
ii nova-compute-kvm 2:20.0.0-0ubuntu1~cloud0
ii nova-compute-libvirt 2:20.0.0-0ubuntu1~cloud0
ii libvirt0:amd64 5.4.0-0ubuntu5~cloud0
ii librbd1 14.2.4-1bionic
ii libvirt-daemon-driver-storage-rbd 5.4.0-0ubuntu5~cloud0
ii python-rbd 14.2.4-1bionic
ii python3-rbd 14.2.4-1bionic
To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1864279/+subscriptions