yahoo-eng-team team mailing list archive - Message #01760
[Bug 1157599] Re: Using an iscsi device in the nova-volume VG lets nova-volume crash on system boot
Can you add --volume_group nova-volumes to your /etc/nova/nova.conf and
try again? Please re-open if this is still a problem.
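For reference, a minimal sketch of that setting (the volume_group flag is the one referenced as FLAGS.volume_group in the traceback below; Essex-era nova.conf files on Ubuntu 12.04 accept either the legacy flag style or the newer ini style, so adjust to whichever format the file already uses):

# legacy flag-style /etc/nova/nova.conf
--volume_group=nova-volumes

# or ini-style /etc/nova/nova.conf
[DEFAULT]
volume_group=nova-volumes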
** Changed in: nova (Ubuntu)
Status: New => Won't Fix
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1157599
Title:
Using an iscsi device in the nova-volume VG lets nova-volume crash on
system boot
Status in OpenStack Compute (Nova):
Invalid
Status in “nova” package in Ubuntu:
Won't Fix
Bug description:
Installed is an Ubuntu 12.04 server with the nova packages that ship with Ubuntu.
I have a RAID attached via iSCSI to the node that runs nova-volume; this RAID device backs the nova-volumes VG:
root@vs-node4:/root# vgdisplay nova-volumes
--- Volume group ---
VG Name nova-volumes
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 23
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 3,19 TiB
PE Size 4,00 MiB
Total PE 836819
Alloc PE / Size 514560 / 1,96 TiB
Free PE / Size 322259 / 1,23 TiB
VG UUID 51A9z4-gdpn-FX9M-uH2T-qMM0-5Z90-bI6B29
However, when booting the whole system, nova-volume crashes because the
volume group does not appear in time:
2013-03-20 07:46:35 DEBUG nova.utils [req-9b8655d3-5471-48a9-946b-3b87e8d7280a None None] Running command (subprocess): sudo nova-rootwrap vgs --noheadings -o name from (pid=1010) execute /usr/lib/python2.7/dist-packages/nova/utils.py:219
2013-03-20 07:46:37 CRITICAL nova [-] volume group nova-volumes doesn't exist
2013-03-20 07:46:37 TRACE nova Traceback (most recent call last):
2013-03-20 07:46:37 TRACE nova File "/usr/bin/nova-volume", line 49, in <module>
2013-03-20 07:46:37 TRACE nova service.wait()
2013-03-20 07:46:37 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/service.py", line 413, in wait
2013-03-20 07:46:37 TRACE nova _launcher.wait()
2013-03-20 07:46:37 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/service.py", line 131, in wait
2013-03-20 07:46:37 TRACE nova service.wait()
2013-03-20 07:46:37 TRACE nova File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 166, in wait
2013-03-20 07:46:37 TRACE nova return self._exit_event.wait()
2013-03-20 07:46:37 TRACE nova File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait
2013-03-20 07:46:37 TRACE nova return hubs.get_hub().switch()
2013-03-20 07:46:37 TRACE nova File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 177, in switch
2013-03-20 07:46:37 TRACE nova return self.greenlet.switch()
2013-03-20 07:46:37 TRACE nova File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 192, in main
2013-03-20 07:46:37 TRACE nova result = function(*args, **kwargs)
2013-03-20 07:46:37 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/service.py", line 101, in run_server
2013-03-20 07:46:37 TRACE nova server.start()
2013-03-20 07:46:37 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/service.py", line 162, in start
2013-03-20 07:46:37 TRACE nova self.manager.init_host()
2013-03-20 07:46:37 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/volume/manager.py", line 93, in init_host
2013-03-20 07:46:37 TRACE nova self.driver.check_for_setup_error()
2013-03-20 07:46:37 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/volume/driver.py", line 107, in check_for_setup_error
2013-03-20 07:46:37 TRACE nova % FLAGS.volume_group)
2013-03-20 07:46:37 TRACE nova Error: volume group nova-volumes doesn't exist
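The failing check is the same vgs call logged at the top of the trace; run by hand once the node is fully up, it shows whether the volume group is visible (manual check only, not part of the original report):

sudo nova-rootwrap vgs --noheadings -o name
# or, without rootwrap, check the specific VG:
sudo vgs --noheadings -o name nova-volumes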
When issuing a 'service nova-volume start' afterwards, nova-volume comes
up and runs fine, so it seems the service is started too early.
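One possible workaround (not attempted in this report, just a sketch) is to defer the start until the iSCSI-backed volume group is actually visible, for example with a small wait loop hooked into the boot sequence in whatever way fits the setup:

#!/bin/sh
# Hypothetical helper: wait up to 60 seconds for the nova-volumes VG
# to appear, then (re)start nova-volume.
for i in $(seq 1 60); do
    vgs nova-volumes >/dev/null 2>&1 && break
    sleep 1
done
service nova-volume restart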
The crash gets me into serious trouble, as instances with attached
volumes then don't get access to their volumes. Terminating such
instances leads to zombie instances and attached volumes that can't be
detached anymore. Sorting things out manually in the nova database
enabled me to start all instances again and properly attach all
volumes, but that is nothing I want to do on every reboot of that node.
Any suggestions? With physically attached disks, things work fine of
course.
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1157599/+subscriptions