
yahoo-eng-team team mailing list archive

[Bug 1745618] Re: neutron metadata agent is always binding to 0.0.0.0

 

Reviewed:  https://review.openstack.org/634947
Committed: https://git.openstack.org/cgit/openstack/networking-ovn/commit/?id=78dcb186ef53f1ccdafcc0301aab51689eb7c7e8
Submitter: Zuul
Branch:    master

commit 78dcb186ef53f1ccdafcc0301aab51689eb7c7e8
Author: Daniel Alvarez <dalvarez@xxxxxxxxxx>
Date:   Tue Feb 5 15:36:02 2019 +0100

    ovn-metadata-agent: bind haproxy to 169.254.169.254
    
    Currently ovn-metadata-agent spawns haproxy to bind in 0.0.0.0 inside
    the ovnmeta namespace. This is not needed as we know that we'll always
    receive those in 169.254.169.254 and it can have security issues.
    
    Closes-Bug: #1745618
    Signed-off-by: Daniel Alvarez <dalvarez@xxxxxxxxxx>
    Change-Id: I19ba651ad5b120ecb9859d9f7786b447f3218078


** Changed in: networking-ovn
       Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1745618

Title:
  neutron metadata agent is always binding to 0.0.0.0

Status in networking-ovn:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  Dear Devs,

  While using kolla-ansible (5.0.1) to deploy OpenStack Pike, I have
  spotted a potential security issue with the way the Neutron metadata
  agent listens.

  Potential, because it all depends on whether users are adding anything
  sensitive to their meta-data / user-data.

  ns-metadata-proxy always binds to 0.0.0.0
  https://github.com/openstack/neutron/blob/703ff85b8262997f209e7666396c5d430d3baa34/neutron/agent/metadata/driver.py#L64

  $ ip netns exec qdhcp-f2780ea0-8d83-4434-9d0f-914392d1c3b1 netstat -tulpan
  Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
  tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      22103/haproxy
  tcp        0      0 10.0.0.2:53             0.0.0.0:*               LISTEN      22446/dnsmasq
  tcp        0      0 169.254.169.254:53      0.0.0.0:*               LISTEN      22446/dnsmasq
  ...

  My OpenStack deployment has a private subnet 10.0.0.0/24, where
  10.0.0.1 is the gateway and 10.0.0.2-10.0.0.254 is the allocation
  pool.

  $ ip netns exec qdhcp-f2780ea0-8d83-4434-9d0f-914392d1c3b1 ip a
  2: ns-a1f7e93e-53@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP qlen 1000
      link/ether fa:16:3e:07:8a:c8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
      inet 10.0.0.2/24 brd 10.0.0.255 scope global ns-a1f7e93e-53
      inet 169.254.169.254/16 brd 169.254.255.255 scope global ns-a1f7e93e-53

  I am running Docker containers (via Kubernetes) in OpenStack VMs. What
  concerns me is that any container (with its namespaced container
  network) is able to access the Neutron metadata agent not only via
  http://169.254.169.254/, but also via http://10.0.0.2/ (since the
  latter is on the same private subnet). Pretty much any IP address
  available on the namespaced network interface will return metadata if
  accessed via HTTP on port 80.
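
  The effect of the wildcard bind can be illustrated with a few lines of
  plain Python (a generic socket sketch, not neutron code):

```python
import socket

# A listener bound to the wildcard address 0.0.0.0 accepts connections
# arriving on ANY local address -- this is why haproxy on 0.0.0.0:80
# also answers on the subnet address 10.0.0.2.
wildcard = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
wildcard.bind(("0.0.0.0", 0))      # port 0: let the kernel pick a port
wildcard.listen(1)
port = wildcard.getsockname()[1]

# The wildcard listener is reachable via loopback too.
client = socket.create_connection(("127.0.0.1", port))
client.close()
wildcard.close()

# A listener bound to one specific address answers only on that
# address, which is the behaviour I would like haproxy to have
# (169.254.169.254 only).
specific = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
specific.bind(("127.0.0.1", 0))
print(specific.getsockname()[0])   # prints 127.0.0.1
specific.close()
```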

  I am using this iptables rule so that no Docker container is able to
  access 169.254.169.254, as they do not need to access it:

  iptables -I DOCKER-USER -d 169.254.169.254/32 -j DROP  # where
  DOCKER-USER is the first subchain in the FORWARD chain

  That works well for blocking random users from accessing
  169.254.169.254.

  (As a workaround) I am modifying driver.py directly so that it
  listens only on 169.254.169.254:

  docker exec -u root -ti neutron_dhcp_agent bash -c "sed -i 's/bind 0.0.0.0/bind 169.254.169.254/' /usr/lib/python2.7/site-packages/neutron/agent/metadata/driver.py"

  docker restart neutron_dhcp_agent

  From your point of view, does it make sense to change the default
  bind from 0.0.0.0 to 169.254.169.254?

  In the meantime, I have prepared a little patch to neutron
  ns-metadata-proxy so that the listener binds to
  dhcp.METADATA_DEFAULT_IP, which is 169.254.169.254. Please let me
  know if it makes sense to you, and I will prepare a PR.
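
  To make the idea concrete, here is a minimal sketch of the patched
  rendering. The template shape below is illustrative only; it is not
  the exact haproxy template from neutron/agent/metadata/driver.py:

```python
# Hypothetical sketch: the haproxy config that ns-metadata-proxy
# renders takes a bind address parameter instead of a hard-coded
# 0.0.0.0. METADATA_DEFAULT_IP mirrors dhcp.METADATA_DEFAULT_IP.
METADATA_DEFAULT_IP = "169.254.169.254"

_HAPROXY_FRONTEND = """\
listen listener
    bind %(bind_address)s:%(port)d
"""

def render_frontend(port, bind_address=METADATA_DEFAULT_IP):
    # Default to the metadata IP rather than the wildcard address.
    return _HAPROXY_FRONTEND % {"bind_address": bind_address,
                                "port": port}

print(render_frontend(80))
```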

  I have also attached a preliminary patch to this issue, but haven't
  tested it yet.

  Kind regards,
  Andrey Arapov

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ovn/+bug/1745618/+subscriptions
