On Mon, Mar 4, 2013 at 3:18 AM, Sylvain Bauza
<sylvain.bauza@xxxxxxxxxxxx> wrote:
Is the network node also acting as a Compute node ?
No, I am running three separate nodes-- network, compute and controller.
The issue you were mentioning was related to the tap virtual
device (for DHCP leases): if the network node goes down, the
DHCP lease expires on the VM without being re-acked, and the
instance loses its IP address.
By recreating the bridges on the network node after the reboot, the tap
interface comes back up. On the VMs, a DHCP request is then
enough; no VM reboot is needed (nor a compute node reboot).
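For what it's worth, a quick way to force that renewal from inside a Linux
guest would be something like the following (just a sketch; it assumes the
guest uses dhclient and that its interface is eth0, so adjust for your image):

    dhclient -r eth0    # release the stale lease
    dhclient eth0       # request a fresh lease from dnsmasq on the network node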
I know there is also a second bug related to virtio bridges on the
compute nodes. This is still a bit unclear to me, but upon compute
node reboot the virtio bridges are not reattached either; only
instances created afterwards get plugged in.
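If that is what is happening, it should in principle be possible to re-plug
an existing instance by hand, roughly like this (an untested sketch; the port
name is a placeholder for the instance's qvo device, and the VLAN tag must
match what the working instance got -- normally the OVS agent does this for you):

    # on the compute node, re-add the instance's OVS-side veth to br-int
    ovs-vsctl add-port br-int qvoXXXXXXXX-XX -- set Port qvoXXXXXXXX-XX tag=1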
Could you please run 'ovs-dpctl show br-int' (provided br-int is
the right bridge), 'ovs-vsctl show' and 'brctl show' ?
This is on the compute node, where I assume the issue is. For the
record, I have five VMs running here: four created before rebuilding
the networking, and one after. Only the one created after is working.
root@os-compute-01:/var/log# ovs-dpctl show br-int
system@br-int:
        lookups: hit:235944 missed:33169 lost:0
        flows: 0
        port 0: br-int (internal)
        port 1: patch-tun (patch: peer=patch-int)
        port 2: qvo7dcd14b3-70
root@os-compute-01:/var/log# ovs-vsctl show
3a52a17f-9846-4b32-b309-b49faf91bfc4
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip="10.10.10.1"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "qvo7dcd14b3-70"
            tag: 1
            Interface "qvo7dcd14b3-70"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    ovs_version: "1.4.0+build0"
root@os-compute-01:/var/log# brctl show
bridge name     bridge id           STP enabled  interfaces
br-int          0000.222603554b47   no           qvo7dcd14b3-70
br-tun          0000.36c126165e42   no
qbr0b459c65-a0  8000.3af05347af11   no           qvb0b459c65-a0
                                                 vnet2
qbr4f36c3ea-5c  8000.e6a5faf9a181   no           qvb4f36c3ea-5c
                                                 vnet1
qbr62721ee8-08  8000.8af675d45ed7   no           qvb62721ee8-08
                                                 vnet0
qbr7dcd14b3-70  8000.aabc605c1b2c   no           qvb7dcd14b3-70
                                                 vnet4
qbrcf833d2a-9e  8000.36e77dfc6018   no           qvbcf833d2a-9e
                                                 vnet3
root@os-compute-01:/var/log#
Thank you for the assistance! Lots of new stuff here that I'm trying to
come up to speed on.
On 01/03/2013 at 21:28, The King in Yellow wrote:
On Fri, Mar 1, 2013 at 10:11 AM, Sylvain Bauza
<sylvain.bauza@xxxxxxxxxxxx>
wrote:
There is a known bug with the network bridges when rebooting:
https://bugs.launchpad.net/quantum/+bug/1091605
Try to delete/recreate your br-int/br-ex and then restart the
openvswitch plugin, l3 and dhcp agents; that should fix the issue.
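Roughly, on the network node, something like the following (a sketch only;
the service names are as packaged on Ubuntu, so adjust for your distro, and
remember to re-add your external NIC to br-ex afterwards):

    ovs-vsctl del-br br-int && ovs-vsctl add-br br-int
    ovs-vsctl del-br br-ex  && ovs-vsctl add-br br-ex
    service quantum-plugin-openvswitch-agent restart
    service quantum-l3-agent restart
    service quantum-dhcp-agent restart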
Thanks! Now, I can create a new instance, and that works. My
previous instances don't work, however. What do I need to do to
get them reattached?
root@os-network:/var/log/quantum# ping 10.5.5.6
PING 10.5.5.6 (10.5.5.6) 56(84) bytes of data.
^C
--- 10.5.5.6 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1008ms
root@os-network:/var/log/quantum# ping 10.5.5.7
PING 10.5.5.7 (10.5.5.7) 56(84) bytes of data.
64 bytes from 10.5.5.7: icmp_req=1 ttl=64 time=2.13 ms
64 bytes from 10.5.5.7: icmp_req=2 ttl=64 time=1.69 ms
64 bytes from 10.5.5.7: icmp_req=3 ttl=64 time=1.93 ms
64 bytes from 10.5.5.7: icmp_req=4 ttl=64 time=1.01 ms
^C
--- 10.5.5.7 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 1.013/1.692/2.132/0.424 ms
root@os-network:/var/log/quantum#