yahoo-eng-team team mailing list archive
Message #78545
[Bug 1606741] Re: [SRU] Metadata service for instances is unavailable when the l3-agent on the compute host is dvr_snat mode
This bug was fixed in the package neutron - 2:13.0.2-0ubuntu3.2
---------------
neutron (2:13.0.2-0ubuntu3.2) cosmic; urgency=medium
* Backport fix for dvr+l3ha metadata service not available
- d/p/Spawn-metadata-proxy-on-dvr-ha-standby-routers.patch (LP: #1606741)
-- Edward Hope-Morley <edward.hope-morley@xxxxxxxxxxxxx>  Fri, 10 May 2019 17:24:31 +0100
** Changed in: neutron (Ubuntu Cosmic)
Status: Fix Committed => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606741
Title:
[SRU] Metadata service for instances is unavailable when the l3-agent
on the compute host is dvr_snat mode
Status in Ubuntu Cloud Archive:
Fix Released
Status in Ubuntu Cloud Archive queens series:
Triaged
Status in Ubuntu Cloud Archive rocky series:
Fix Committed
Status in Ubuntu Cloud Archive stein series:
Fix Released
Status in neutron:
Fix Released
Status in neutron package in Ubuntu:
Fix Released
Status in neutron source package in Bionic:
Fix Committed
Status in neutron source package in Cosmic:
Fix Released
Status in neutron source package in Disco:
Fix Released
Status in neutron source package in Eoan:
Fix Released
Bug description:
[Impact]
Currently, if you deploy OpenStack with dvr and l3ha enabled (and more than one compute host), only instances booted on the compute host that is running the master VR have access to metadata. This patch ensures that both master and slave VRs have an associated haproxy ns-metadata process running locally on the compute host.
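For context, a quick way to see which host carries the master VR for a given router (Mitaka-era neutron CLI; <router-id> is a placeholder) is:

  # lists the l3-agents hosting the router with their ha_state (active/standby)
  neutron l3-agent-list-hosting-router <router-id>

Before the fix, only the compute host whose VR is active runs the haproxy ns-metadata process, so instances booted on the other compute hosts cannot reach the metadata service.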
[Test Case]
* deploy OpenStack with dvr and l3ha enabled and 2 compute hosts
* create an Ubuntu instance on each compute host
* check that both are able to access the metadata api (i.e. cloud-init completes successfully)
* verify that there is an ns-metadata haproxy process running on each compute host (see the verification sketch below)
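A possible way to run those two checks (the metadata URL is the standard nova metadata endpoint; the process pattern assumes the default haproxy-based ns-metadata proxy and its usual naming):

  # from inside each instance: the metadata API should answer
  curl http://169.254.169.254/latest/meta-data/instance-id

  # on each compute host: a per-router haproxy metadata proxy should be running
  pgrep -af 'haproxy.*ns-metadata-proxy'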
[Regression Potential]
None anticipated
=============================================================================
In my Mitaka environment there are five nodes: a controller,
network1, network2, compute1 and compute2. I start the l3-agents in
dvr_snat mode on all network and compute nodes and set
enable_metadata_proxy to true in l3-agent.ini. Most neutron services
work well, except the metadata proxy service. When I run the command
"curl http://169.254.169.254" in an instance booted from cirros, it
returns "curl: couldn't connect to host" and the instance can't fetch
metadata during its first boot.
* Pre-conditions: start the l3-agent in dvr_snat mode on all compute
and network nodes and set enable_metadata_proxy to true in
l3-agent.ini (a config check is sketched below).
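A quick way to confirm those settings on each node (assuming the usual /etc/neutron/l3_agent.ini path, which may differ per distribution):

  # expected on every compute and network node
  grep -E '^(agent_mode|enable_metadata_proxy)' /etc/neutron/l3_agent.ini
  #   agent_mode = dvr_snat
  #   enable_metadata_proxy = True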
* Step-by-step reproduction steps (an illustrative CLI sequence follows the list):
1. create a network and a subnet under this network;
2. create a router;
3. add the subnet to the router;
4. create an instance with cirros (or another image) on this subnet;
5. open the console for this instance, run 'curl http://169.254.169.254' in the shell, and wait for the result.
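One possible CLI sequence for steps 1-5 (names, CIDR, image and flavor are placeholders; the --distributed/--ha flags require admin rights and mirror the dvr/l3ha setup described above):

  neutron net-create testnet
  neutron subnet-create testnet 10.0.10.0/24 --name testsubnet
  neutron router-create testrouter --distributed True --ha True
  neutron router-interface-add testrouter testsubnet
  nova boot --image cirros --flavor m1.tiny --nic net-id=<testnet-id> testvm
  # then, from the instance console:
  curl http://169.254.169.254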
* Expected output: the command 'curl http://169.254.169.254' returns
the instance metadata.
* Actual output: the command returns "curl: couldn't connect
to host".
* Version:
** Mitaka
** All hosts are CentOS 7
To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1606741/+subscriptions