yahoo-eng-team team mailing list archive - Message #51109
[Bug 1340641] Re: nova-api crashes when using ipv6-address for metadata API
This bug report is pretty old. If you can reproduce it on a currently
supported release [1], please reopen this report and add the steps to
reproduce.
[1] http://releases.openstack.org/
** Changed in: nova
Assignee: Nadja Deininger (nadja) => (unassigned)
** Changed in: nova
Status: Confirmed => Opinion
** Changed in: nova
Importance: Wishlist => Low
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340641
Title:
nova-api crashes when using ipv6-address for metadata API
Status in OpenStack Compute (nova):
Opinion
Bug description:
I'm doing an OpenStack Icehouse controller installation inside VirtualBox,
with IPv6 configuration, and I hit this while installing nova.
When I use an IPv6 address for the metadata API (metadata_listen =
2001:db8:0::1, metadata_host = 2001:db8:0::1), nova-api crashes soon
after launching, while with IPv4 everything runs like a charm
(metadata_listen = 198.168.0.1, metadata_host = 198.168.0.1).
For example, when I restart my nova processes and run the 'nova list'
command twice as root, the following happens:
# nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
# nova list
ERROR: HTTPConnectionPool(host='ctrl', port=8774): Max retries exceeded with url: /v2/d117e271b78248de8a26e572197fd149/servers/detail (Caused by <class 'socket.error'>: [Errno 111] Connection refused)
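The connection-refused error on the second call just means the nova-api process has already died: the first 'nova list' apparently still hit a live listener before the workers were torn down, the second one found nothing on port 8774. A quick, hypothetical way to confirm that from the controller (assuming Python 3 and the osapi_compute listen address and port from the nova.conf below):

import socket

# Hypothetical check: is anything still listening on the compute API
# endpoint (2001:db8:0::1, port 8774 per nova.conf) after the crash?
sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
sock.settimeout(2)
try:
    sock.connect(('2001:db8:0::1', 8774))
    print('nova-api is still listening on port 8774')
except OSError as exc:
    # [Errno 111] Connection refused matches what the client reports above.
    print('nova-api is down: %s' % exc)
finally:
    sock.close()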
Here is the trace from nova-api.log:
2014-05-16 20:41:28.602 22728 DEBUG nova.openstack.common.processutils [-] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-restore -c execute /usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:154
2014-05-16 20:41:28.646 22728 DEBUG nova.openstack.common.processutils [-] Result was 2 execute /usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:187
2014-05-16 20:41:28.646 22728 DEBUG nova.openstack.common.processutils [-] ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'iptables-restore', '-c'] failed. Retrying. execute /usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:199
2014-05-16 20:41:30.278 22728 DEBUG nova.openstack.common.processutils [-] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-restore -c execute /usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:154
2014-05-16 20:41:30.348 22728 DEBUG nova.openstack.common.processutils [-] Result was 2 execute /usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:187
2014-05-16 20:41:30.348 22728 DEBUG nova.openstack.common.lockutils [-] Released file lock "iptables" at /run/lock/nova/nova-iptables lock /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:210
2014-05-16 20:41:30.349 22728 DEBUG nova.openstack.common.lockutils [-] Semaphore / lock released "_apply" inner /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:252
2014-05-16 20:41:30.349 22728 CRITICAL nova [-] ProcessExecutionError: Unexpected error while running command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-restore -c
Exit code: 2
Stdout: ''
Stderr: "iptables-restore v1.4.21: host/network `::1' not found\nError occurred at line: 17\nTry `iptables-restore -h' or 'iptables-restore --help' for more information.\n"
2014-05-16 20:41:30.349 22728 TRACE nova Traceback (most recent call last):
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/bin/nova-api", line 10, in <module>
2014-05-16 20:41:30.349 22728 TRACE nova sys.exit(main())
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/cmd/api.py", line 53, in main
2014-05-16 20:41:30.349 22728 TRACE nova server = service.WSGIService(api, use_ssl=should_use_ssl)
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/service.py", line 329, in __init__
2014-05-16 20:41:30.349 22728 TRACE nova self.manager = self._get_manager()
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/service.py", line 373, in _get_manager
2014-05-16 20:41:30.349 22728 TRACE nova return manager_class()
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/api/manager.py", line 30, in __init__
2014-05-16 20:41:30.349 22728 TRACE nova self.network_driver.metadata_accept()
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 660, in metadata_accept
2014-05-16 20:41:30.349 22728 TRACE nova iptables_manager.apply()
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 428, in apply
2014-05-16 20:41:30.349 22728 TRACE nova self._apply()
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 249, in inner
2014-05-16 20:41:30.349 22728 TRACE nova return f(*args, **kwargs)
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 457, in _apply
2014-05-16 20:41:30.349 22728 TRACE nova attempts=5)
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 1205, in _execute
2014-05-16 20:41:30.349 22728 TRACE nova return utils.execute(*cmd, **kwargs)
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 164, in execute
2014-05-16 20:41:30.349 22728 TRACE nova return processutils.execute(*cmd, **kwargs)
2014-05-16 20:41:30.349 22728 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py", line 193, in execute
2014-05-16 20:41:30.349 22728 TRACE nova cmd=' '.join(cmd))
2014-05-16 20:41:30.349 22728 TRACE nova ProcessExecutionError: Unexpected error while running command.
2014-05-16 20:41:30.349 22728 TRACE nova Command: sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-restore -c
2014-05-16 20:41:30.349 22728 TRACE nova Exit code: 2
2014-05-16 20:41:30.349 22728 TRACE nova Stdout: ''
2014-05-16 20:41:30.349 22728 TRACE nova Stderr: "iptables-restore v1.4.21: host/network `::1' not found\nError occurred at line: 17\nTry `iptables-restore -h' or 'iptables-restore --help' for more information.\n"
2014-05-16 20:41:30.349 22728 TRACE nova
2014-05-16 20:41:30.496 22854 INFO nova.openstack.common.service [-] Parent process has died unexpectedly, exiting
2014-05-16 20:41:30.501 22854 INFO nova.wsgi [-] Stopping WSGI server.
2014-05-16 20:41:30.498 22828 INFO nova.openstack.common.service [-] Parent process has died unexpectedly, exiting
2014-05-16 20:41:30.502 22828 INFO nova.wsgi [-] Stopping WSGI server.
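For what it's worth, the traceback already points at the root cause: nova.api.manager calls the network driver's metadata_accept(), the resulting rule is handed to the iptables manager, and the manager's apply() ends up (per the trace) running iptables-restore. With an IPv6 metadata_host, a rule carries an IPv6 address that the IPv4 iptables-restore cannot parse (hence "host/network `::1' not found"), so nova-api dies before its WSGI servers come up. Below is a minimal, hypothetical sketch of the address-family branch that would send such a rule to ip6tables-restore instead; the function name and the use of Python's ipaddress module are illustrative assumptions, not nova's actual code.

import ipaddress

def restore_command_for(metadata_host, metadata_port):
    # Build an ACCEPT rule for metadata traffic and pick the restore
    # binary matching the address family of metadata_host.
    rule = '-p tcp -m tcp --dport %s -d %s -j ACCEPT' % (metadata_port,
                                                         metadata_host)
    if ipaddress.ip_address(metadata_host).version == 6:
        return ('ip6tables-restore', rule)
    return ('iptables-restore', rule)

# With the values from this report's nova.conf:
print(restore_command_for('2001:db8:0::1', 8775))
# ('ip6tables-restore', '-p tcp -m tcp --dport 8775 -d 2001:db8:0::1 -j ACCEPT')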
My nova.conf file is the following:
[DEFAULT]
use_ipv6 = True
my_ip = 2001:db8:0::1
rpc_backend = rabbit
rabbit_host = ctrl
# verbose = True
debug = True
logdir = /var/log/nova
state_path = /var/lib/nova
lock_path = /run/lock/nova
s3_host = ctrl
ec2_host = ctrl
ec2_dmz_host = ctrl
cc_host = ctrl
ec2_url = http://ctrl:8773/services/Cloud
nova_url = http://ctrl:8774/v1.1/
api_paste_config = /etc/nova/api-paste.ini
root_helper = sudo nova-rootwrap /etc/nova/rootwrap.conf
resume_guests_state_on_host_boot = True
osapi_compute_listen = 2001:db8:0::1
osapi_compute_listen_port = 8774
# Scheduler
# scheduler_driver = nova.scheduler.simple.SimpleScheduler
compute_scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
# Metadata stuff
metadata_listen = 2001:db8:0::1
metadata_host = 2001:db8:0::1
metadata_port = 8775
service_neutron_metadata_proxy = True
neutron_metadata_proxy_shared_secret = metasecret13
# Auth
use_deprecated_auth = false
auth_strategy = keystone
keystone_ec2_url = http://ctrl:5000/v2.0/ec2tokens
# Imaging service
glance_api_servers = ctrl:9292
image_service = nova.image.glance.GlanceImageService
# INSTANCE DISK BACKEND
libvirt_images_type = lvm
libvirt_images_volume_group = nova-local
libvirt_sparse_logical_volumes = false
# VNC configuration - Dual-Stacked - DISABLED, go for SPICE instead!
vnc_enabled = False
novnc_enabled = False
# novncproxy_base_url = http://ctrl:6080/vnc_auto.html
# novncproxy_host = ::
# novncproxy_port = 6080
# NETWORK - NEUTRON
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://ctrl:9696/
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = service_pass
neutron_admin_auth_url = http://ctrl:35357/v2.0/
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
# firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver
# libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
# libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtOpenVswitchVirtualPortDriver
# Cinder
volume_api_class = nova.volume.cinder.API
osapi_volume_listen_port = 5900
# SPICE configuration - Dual-Stacked
[spice]
enabled = True
spicehtml5proxy_host = ::
html5proxy_base_url = http://ctrl:6082/spice_auto.html
keymap = en-us
[database]
connection = mysql://novaUser:novaPass@ctrl/nova
[keystone_authtoken]
auth_uri = http://ctrl:5000
auth_host = ctrl
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = service_pass
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340641/+subscriptions