yahoo-eng-team team mailing list archive

[Bug 1745618] [NEW] neutron metadata agent is always binding to 0.0.0.0

Public bug reported:

Dear Devs,

While using kolla-ansible (5.0.1) to deploy OpenStack Pike, I have
spotted one potential security issue with the way the Neutron metadata
agent listens.

Potential, because it all depends on whether users put anything
sensitive into their meta-data / user-data.

ns-metadata-proxy always binds to 0.0.0.0:
https://github.com/openstack/neutron/blob/703ff85b8262997f209e7666396c5d430d3baa34/neutron/agent/metadata/driver.py#L64

$ ip netns exec qdhcp-f2780ea0-8d83-4434-9d0f-914392d1c3b1 netstat -tulpan
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:80      0.0.0.0:*               LISTEN      22103/haproxy
tcp        0      0 10.0.0.2:53             0.0.0.0:*               LISTEN      22446/dnsmasq
tcp        0      0 169.254.169.254:53      0.0.0.0:*               LISTEN      22446/dnsmasq
...

My OpenStack deployment has a private subnet 10.0.0.0/24, where 10.0.0.1
is the gateway and 10.0.0.2-10.0.0.254 is the allocation pool.

$ ip netns exec qdhcp-f2780ea0-8d83-4434-9d0f-914392d1c3b1 ip a
2: ns-a1f7e93e-53@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP qlen 1000
    link/ether fa:16:3e:07:8a:c8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.0.2/24 brd 10.0.0.255 scope global ns-a1f7e93e-53
    inet 169.254.169.254/16 brd 169.254.255.255 scope global ns-a1f7e93e-53

I am running Docker containers (via Kubernetes) in OpenStack VMs.
What concerns me is that any container (with its namespaced container network) is able to access the Neutron metadata agent not only via http://169.254.169.254/, but also via http://10.0.0.2/ (since the latter is on the same private subnet).
Pretty much any IP address configured on the namespaced network interface will return metadata when accessed over HTTP on port 80.
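For illustration (assuming the standard EC2-style metadata paths), from inside a container or VM on that subnet both addresses answer with the same metadata index:

```shell
# Both requests reach the same haproxy listener while it is bound
# to 0.0.0.0 inside the qdhcp namespace.
curl -s http://169.254.169.254/latest/meta-data/
curl -s http://10.0.0.2/latest/meta-data/
```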

I am using this iptables rule so that no Docker container can access
169.254.169.254, as they do not need it:

iptables -I DOCKER-USER -d 169.254.169.254/32 -j DROP  # DOCKER-USER is the first chain jumped to from the FORWARD chain

That works well for blocking random users from accessing 169.254.169.254.

As a workaround, I am modifying driver.py directly so that it listens
only on 169.254.169.254:

docker exec -u root -ti neutron_dhcp_agent bash -c "sed -i 's/bind 0.0.0.0/bind 169.254.169.254/' /usr/lib/python2.7/site-packages/neutron/agent/metadata/driver.py"

docker restart neutron_dhcp_agent
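After the restart, re-checking the listener inside the dhcp namespace (same command as above) should confirm haproxy is no longer on the wildcard address:

```shell
# Expect the haproxy line to show 169.254.169.254:80 rather than 0.0.0.0:80
ip netns exec qdhcp-f2780ea0-8d83-4434-9d0f-914392d1c3b1 netstat -tlpn | grep haproxy
```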

From your point of view, does it make sense to change the default bind
0.0.0.0 to bind 169.254.169.254?

In the meantime, I have prepared a small patch to the neutron ns-
metadata-proxy so that the listener binds to dhcp.METADATA_DEFAULT_IP,
which is 169.254.169.254. Please let me know if it makes sense to you
and I will prepare a PR.
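For illustration only (the exact template text in driver.py may differ), the change amounts to parameterizing the bind address in the generated haproxy configuration, along these lines:

```diff
-    bind 0.0.0.0:%(port)s
+    bind %(host)s:%(port)s
```

with %(host)s filled in from dhcp.METADATA_DEFAULT_IP.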

I have also attached a preliminary patch to this issue, but haven't
tested it yet.

Kind regards,
Andrey Arapov

** Affects: neutron
     Importance: Undecided
         Status: New

** Patch added: "preliminary patch"
   https://bugs.launchpad.net/bugs/1745618/+attachment/5043509/+files/0001-Bind-metadata-listener-address-to-dhcp.METADATA_DEFA.patch


-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1745618

Title:
  neutron metadata agent is always binding to 0.0.0.0

Status in neutron:
  New
