[Bug 1485606] [NEW] Unable to enable dhcp for networkid

 

Public bug reported:

Installed the Mirantis OpenStack 6.0 release on Ubuntu. A Neutron VLAN
network is created, but the VMs created on the private network are not
getting a DHCP IP address.

Topology: 1 Fuel node, 1 controller, and 2 compute nodes.

All eth0s are connected to an L2 switch for PXE boot,
eth1s are connected to the uplink switch for connectivity,
eth2s are connected to an L2 switch for the Management/Storage network, and
eth3s are connected to an L2 switch for the Private network.

Verified the connections between the Fuel, controller and compute nodes;
everything is proper. Seeing "Unable to enable dhcp for <network id>" in
/var/log/neutron/dhcp-agent.log.
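
To confirm that a DHCP agent is alive and actually bound to the affected network, we can run the following on the controller (a quick check, assuming the standard python-neutronclient CLI shipped with this release):

# neutron agent-list
# neutron dhcp-agent-list-hosting-net 0aff4aa3-e393-499d-b3bc-5c90dd3655b1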

2015-07-20 10:05:23.255 10139 ERROR neutron.agent.dhcp_agent [req-fe1305ed-17da-4863-9b04-ca418495256d None] Unable to enable dhcp for 0aff4aa3-e393-499d-b3bc-5c90dd3655b1.
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent Traceback (most recent call last):
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File "/usr/lib/python2.7/dist-packages/neutron/agent/dhcp_agent.py", line 129, in call_driver
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent     getattr(driver, action)(**action_kwargs)
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/dhcp.py", line 204, in enable
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent     interface_name = self.device_manager.setup(self.network)
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/dhcp.py", line 929, in setup
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent     port = self.setup_dhcp_port(network)
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/dhcp.py", line 910, in setup_dhcp_port
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent     dhcp_port = self.plugin.create_dhcp_port({'port': port_dict})
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File "/usr/lib/python2.7/dist-packages/neutron/agent/dhcp_agent.py", line 439, in create_dhcp_port
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent     host=self.host))
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File "/usr/lib/python2.7/dist-packages/neutron/common/log.py", line 34, in wrapper
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent     return method(*args, **kwargs)
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File "/usr/lib/python2.7/dist-packages/neutron/common/rpc.py", line 161, in call
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent     context, msg, rpc_method='call', **kwargs)
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File "/usr/lib/python2.7/dist-packages/neutron/common/rpc.py", line 187, in __call_rpc_method
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent     return func(context, msg['method'], **msg['args'])
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 389, in call
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent     return self.prepare().call(ctxt, method, **kwargs)
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 152, in call
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent     retry=self.retry)
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File "/usr/lib/python2.7/dist-packages/oslo/messaging/transport.py", line 90, in _send
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent     timeout=timeout, retry=retry)
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 434, in send


This is the server log output - seeing an AMQP connection issue:

2015-07-20 10:01:30.204 9808 TRACE oslo.messaging._drivers.impl_rabbit     (class_id, method_id), ConnectionError)
2015-07-20 10:01:30.204 9808 TRACE oslo.messaging._drivers.impl_rabbit ConnectionForced: (0, 0): (320) CONNECTION_FORCED - broker forced connection closure with reason 'shutdown'
2015-07-20 10:01:30.204 9808 TRACE oslo.messaging._drivers.impl_rabbit
2015-07-20 10:01:30.215 9808 INFO oslo.messaging._drivers.impl_rabbit [-] Delaying reconnect for 5.0 seconds ...
2015-07-20 10:01:30.216 9808 ERROR oslo.messaging._drivers.impl_rabbit [-] Failed to consume message from queue: (0, 0): (320) CONNECTION_FORCED - broker forced connection closure with reason 'shutdown'
2015-07-20 10:01:30.216 9808 TRACE oslo.messaging._drivers.impl_rabbit Traceback (most recent call last):
2015-07-20 10:01:30.216 9808 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 681, in ensure
2015-07-20 10:01:30.216 9808 TRACE oslo.messaging._drivers.impl_rabbit     return method()
2015-07-20 10:01:30.216 9808 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 767, in _consume
2015-07-20 10:01:30.216 9808 TRACE oslo.messaging._drivers.impl_rabbit     return self.connection.drain_events(timeout=timeout)
2015-07-20 10:01:30.216 9808 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/kombu/connection.py", line 279, in drain_events
2015-07-20 10:01:30.216 9808 TRACE oslo.messaging._drivers.impl_rabbit     return self.transport.drain_events(self.connection, **kwargs)
2015-07-20 10:01:30.216 9808 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/kombu/transport/pyamqp.py", line 91, in drain_events
2015-07-20 10:01:30.216 9808 TRACE oslo.messaging._drivers.impl_rabbit     return connection.drain_events(**kwargs)
2015-07-20 10:01:30.216 9808 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/amqp/connection.py", line 320, in drain_events
2015-07-20 10:01:30.216 9808 TRACE oslo.messaging._drivers.impl_rabbit     return amqp_method(channel, args)
2015-07-20 10:01:30.216 9808 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/amqp/connection.py", line 526, in _close
2015-07-20 10:01:30.216 9808 TRACE oslo.messaging._drivers.impl_rabbit     (class_id, method_id), ConnectionError)
2015-07-20 10:01:30.216 9808 TRACE oslo.messaging._drivers.impl_rabbit ConnectionForced: (0, 0): (320) CONNECTION_FORCED - broker forced connection closure with reason 'shutdown'


TCPDUMP on the controller node's eth3 interface
The controller node is receiving the DHCP request from the instance and sending the response to the compute node.
# tcpdump -i eth3 port 67 or port 68
tcpdump: WARNING: eth3: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth3, link-type EN10MB (Ethernet), capture size 65535 bytes
06:13:59.844599 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:43:73:98 (oui Unknown), length 281
06:13:59.845346 IP 19.0.0.2.bootps > 19.0.0.3.bootpc: BOOTP/DHCP, Reply, length 323
06:16:03.988711 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:43:73:98 (oui Unknown), length 280
06:16:03.989461 IP 19.0.0.2.bootps > 19.0.0.3.bootpc: BOOTP/DHCP, Reply, length 323


TCPDUMP on the compute node's eth3 interface
The compute node is not receiving the DHCP response from the controller node.

root@node-21:~# tcpdump -i eth3 port 67 or port 68
tcpdump: WARNING: eth3: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth3, link-type EN10MB (Ethernet), capture size 65535 bytes
06:13:59.855193 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:43:73:98 (oui Unknown), length 281
06:16:03.999378 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:43:73:98 (oui Unknown), length 280
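
Since the DHCP reply leaves the controller's eth3 but never arrives on the compute node's eth3, the drop appears to be somewhere on the VLAN path between the two hosts. A rough check would be to confirm that the reply leaves the controller tagged with the expected VLAN and that the same tag is carried through to the compute node's OVS (the -e flag and the vlan filter print the 802.1Q tag; the bridge name br-prv is an assumption based on the default Fuel/OVS layout):

# tcpdump -e -n -i eth3 'vlan and (port 67 or port 68)'
# ovs-vsctl show
# ovs-ofctl dump-flows br-prv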

Sometimes the private-network VMs have gotten a DHCP IP address automatically; at those times we did not see the below error for that particular network in dhcp-agent.log.

Most of the time, however, the VMs do not get a DHCP IP automatically, and for those private networks we see the same error in dhcp-agent.log:

ERROR neutron.agent.dhcp_agent Unable to enable dhcp for <network_id>

Attaching the error seen in dhcp-agent.log.


We frequently see this error in /etc/neutron/server.log: the connection to the AMQP server on 192.168.0.2 is broken, and after 4-5 seconds it is reconnected.
2015-08-06 08:48:08.193 3007 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on 192.168.0.2:5672
2015-08-06 08:48:08.276 3007 ERROR oslo.messaging._drivers.impl_rabbit [-] AMQP server on 192.168.0.2:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 1 seconds.
2015-08-06 08:48:09.279 3007 INFO oslo.messaging._drivers.impl_rabbit [-] Delaying reconnect for 5.0 seconds ...
2015-08-06 08:48:14.279 3007 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on 192.168.0.2:5672
2015-08-06 08:48:14.292 3007 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on 192.168.0.2:5672
2015-08-06 08:48:14.372 3007 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on 192.168.0.2:5672
2015-08-06 08:48:14.390 3007 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on 192.168.0.2:5672
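
The ECONNREFUSED at 08:48:08 suggests the broker itself was not listening at that moment, i.e. RabbitMQ was restarting rather than the network path failing. One way to check this (a sketch; the log path is assumed to be the default Ubuntu location for this broker node) is to look for stop/start events in the broker log and compare the broker uptime against these timestamps:

# rabbitmqctl status | grep uptime
# grep -iE 'stopped|starting|shutdown' /var/log/rabbitmq/rabbit@node-19.log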

I compared the configuration/status with another OpenStack setup; it has a
similar configuration/status, but we don't see this AMQP connectivity issue
there.

# service rabbitmq-server status
Status of node 'rabbit@node-19' ...
[{pid,17084},
{running_applications,[{rabbit,"RabbitMQ","3.3.5"},
                        {mnesia,"MNESIA  CXC 138 12","4.5"},
                        {os_mon,"CPO  CXC 138 46","2.2.7"},
                        {xmerl,"XML parser","1.2.10"},
                        {sasl,"SASL  CXC 138 11","2.1.10"},
                        {stdlib,"ERTS  CXC 138 10","1.17.5"},
                        {kernel,"ERTS  CXC 138 10","2.14.5"}]},
{os,{unix,linux}},
{erlang_version,"Erlang R14B04 (erts-5.8.5) [source] [64-bit] [smp:4:4] [rq:4] [async-threads:30] [kernel-poll:true]\n"},
{memory,[{total,100398616},
          {connection_procs,2677296},
          {queue_procs,1835520},
          {plugins,0},
          {other_proc,9202256},
          {mnesia,359896},
          {mgmt_db,0},
          {msg_index,86360},
          {other_ets,940024},
          {binary,441088},
          {code,14634007},
          {atom,1368201},
          {other_system,68853968}]},
{alarms,[]},
{listeners,[{clustering,41055,"::"},{amqp,5672,"::"}]},
{vm_memory_high_watermark,0.4},
{vm_memory_limit,6667083776},
{disk_free_limit,50000000},
{disk_free,34774179840},
{file_descriptors,[{total_limit,102300},
                    {total_used,55},
                    {sockets_limit,92068},
                    {sockets_used,53}]},
{processes,[{limit,1048576},{used,949}]},
{run_queue,0},
{uptime,847}]
...done.

# netstat -ntpl | grep 41055
tcp        0      0 0.0.0.0:41055           0.0.0.0:*               LISTEN      17084/beam.smp

# netstat -ntl | grep 5672
tcp6       0      0 :::5672                 :::*                    LISTEN

From the controller node:
root@node-19:~# ping 192.168.0.4 -p 5672
PATTERN: 0x5672
PING 192.168.0.4 (192.168.0.4) 56(84) bytes of data.
64 bytes from 192.168.0.4: icmp_req=1 ttl=64 time=0.146 ms
64 bytes from 192.168.0.4: icmp_req=2 ttl=64 time=0.219 ms
64 bytes from 192.168.0.4: icmp_req=3 ttl=64 time=0.119 ms

From the compute node:
root@node-21:~# ping 192.168.0.2 -p 5672
PATTERN: 0x5672
PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.
64 bytes from 192.168.0.2: icmp_req=1 ttl=64 time=0.190 ms
64 bytes from 192.168.0.2: icmp_req=2 ttl=64 time=0.169 ms
64 bytes from 192.168.0.2: icmp_req=3 ttl=64 time=0.144 ms
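
Note that ping -p only sets the ICMP payload pattern, so the pings above only demonstrate L3 reachability, not that TCP port 5672 itself is reachable. A TCP-level check from the compute node (assuming netcat is installed) would be:

root@node-21:~# nc -zv 192.168.0.2 5672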

** Affects: neutron
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1485606


