
openstack team mailing list archive

Re: Can't ping private or floating IP

 

Any new messages in nova-api.log?

JB


On 02/17/2013 02:15 AM, Chathura M. Sarathchandra Magurawalage wrote:
> Hello JB,
>
> I changed the IP and restarted the quantum-l3-agent but still no luck :(
>
> Thanks.
>
> -----------------------------------------------------------------------------------------------------------------------------
> Chathura Madhusanka Sarathchandra Magurawalage.
> 1NW.2.1, Desk 2
> School of Computer Science and Electronic Engineering
> University Of Essex
> United Kingdom.
>
> Email: csarata@xxxxxxxxxxx
>        chathura.sarathchandra@xxxxxxxxx
>        77.chathura@xxxxxxxxx
>
>
> On 17 February 2013 00:47, Jean-Baptiste RANSY
> <jean-baptiste.ransy@xxxxxxxxxx> wrote:
>
>     and restart quantum-l3-agent :)
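>
>     On Ubuntu, assuming the packaged init scripts, that would be
>     something like:
>
>     service quantum-l3-agent restart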
>
>     JB
>
>
>     On 02/17/2013 01:46 AM, Jean-Baptiste RANSY wrote:
>>     Found !
>>
>>     On the controller node you must change the metadata_ip in
>>     /etc/quantum/l3_agent.ini
>>
>>     This param is used to create the NAT rule
>>     quantum-l3-agent-PREROUTING.
>>
>>     Just replace 127.0.0.1 with 192.168.2.225 and that should be OK.
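>>
>>     For reference, the relevant lines in /etc/quantum/l3_agent.ini
>>     would then look something like this (a sketch, with the values
>>     from this thread):
>>
>>     # host:port that 169.254.169.254 gets DNAT'd to by the
>>     # quantum-l3-agent-PREROUTING rule (was 127.0.0.1)
>>     metadata_ip = 192.168.2.225
>>     metadata_port = 8775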
>>
>>     JB
>>
>>
>>     On 02/17/2013 01:04 AM, Jean-Baptiste RANSY wrote:
>>>     Hi, Chathura
>>>
>>>     The compute node log file /var/log/nova/nova-api.log is too
>>>     light (maybe logrotate :p).
>>>
>>>     Please clear nova-api.log, restart the nova-api service, start a
>>>     new instance, and wait for cloud-init to fail to retrieve metadata.
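>>>
>>>     Something like this, roughly (image id and instance name are
>>>     placeholders):
>>>
>>>     truncate -s 0 /var/log/nova/nova-api.log
>>>     service nova-api restart
>>>     nova boot --image <image-id> --flavor 1 test-vm
>>>     tail -f /var/log/nova/nova-api.log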
>>>
>>>     Thx,
>>>
>>>     JB
>>>
>>>
>>>     On 02/16/2013 11:35 PM, Chathura M. Sarathchandra Magurawalage
>>>     wrote:
>>>>     Thanks for that.
>>>>
>>>>     *root@controller:~# ip addr show*
>>>>     1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
>>>>         link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>>>         inet 127.0.0.1/8 scope host lo
>>>>         inet6 ::1/128 scope host
>>>>            valid_lft forever preferred_lft forever
>>>>     2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
>>>>         link/ether d4:ae:52:bb:aa:20 brd ff:ff:ff:ff:ff:ff
>>>>         inet 10.10.10.1/24 brd 10.10.10.255 scope global eth0
>>>>         inet6 fe80::d6ae:52ff:febb:aa20/64 scope link
>>>>            valid_lft forever preferred_lft forever
>>>>     3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
>>>>         link/ether d4:ae:52:bb:aa:21 brd ff:ff:ff:ff:ff:ff
>>>>     4: eth0.2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
>>>>         link/ether d4:ae:52:bb:aa:20 brd ff:ff:ff:ff:ff:ff
>>>>         inet 192.168.2.225/24 brd 192.168.2.255 scope global eth0.2
>>>>         inet6 fe80::d6ae:52ff:febb:aa20/64 scope link
>>>>            valid_lft forever preferred_lft forever
>>>>     5: br-int: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
>>>>         link/ether ba:7a:e9:dc:2b:41 brd ff:ff:ff:ff:ff:ff
>>>>         inet6 fe80::b87a:e9ff:fedc:2b41/64 scope link
>>>>            valid_lft forever preferred_lft forever
>>>>     7: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
>>>>         link/ether 9a:41:c8:8a:9e:49 brd ff:ff:ff:ff:ff:ff
>>>>         inet 192.168.2.225/24 scope global br-ex
>>>>     8: tapf71b5b86-5c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
>>>>         link/ether 2a:44:a3:d1:7d:f3 brd ff:ff:ff:ff:ff:ff
>>>>         inet 10.5.5.2/24 brd 10.5.5.255 scope global tapf71b5b86-5c
>>>>         inet6 fe80::2844:a3ff:fed1:7df3/64 scope link
>>>>            valid_lft forever preferred_lft forever
>>>>     9: qr-4d088f3a-78: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
>>>>         link/ether ca:5b:8d:4d:6d:fb brd ff:ff:ff:ff:ff:ff
>>>>         inet 10.5.5.1/24 brd 10.5.5.255 scope global qr-4d088f3a-78
>>>>         inet6 fe80::c85b:8dff:fe4d:6dfb/64 scope link
>>>>            valid_lft forever preferred_lft forever
>>>>     10: qg-6f8374cb-cb: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
>>>>         link/ether 0e:7f:dd:3a:80:bc brd ff:ff:ff:ff:ff:ff
>>>>         inet 192.168.2.151/24 brd 192.168.2.255 scope global qg-6f8374cb-cb
>>>>         inet6 fe80::c7f:ddff:fe3a:80bc/64 scope link
>>>>            valid_lft forever preferred_lft forever
>>>>     27: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>>>>         link/ether 8a:cf:ec:7c:15:40 brd ff:ff:ff:ff:ff:ff
>>>>
>>>>     *cat /proc/sys/net/ipv4/ip_forward*
>>>>     1
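>>>>
>>>>     (1 means IP forwarding is already on; had it returned 0, it
>>>>     could be enabled with: sysctl -w net.ipv4.ip_forward=1)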
>>>>
>>>>     *root@computenode:~# ip addr show*
>>>>     1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
>>>>         link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>>>         inet 127.0.0.1/8 scope host lo
>>>>         inet6 ::1/128 scope host
>>>>            valid_lft forever preferred_lft forever
>>>>     2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
>>>>         link/ether d4:ae:52:bb:a1:9d brd ff:ff:ff:ff:ff:ff
>>>>         inet 10.10.10.12/24 brd 10.10.10.255 scope global eth0
>>>>         inet6 fe80::d6ae:52ff:febb:a19d/64 scope link
>>>>            valid_lft forever preferred_lft forever
>>>>     3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
>>>>         link/ether d4:ae:52:bb:a1:9e brd ff:ff:ff:ff:ff:ff
>>>>     4: eth0.2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
>>>>         link/ether d4:ae:52:bb:a1:9d brd ff:ff:ff:ff:ff:ff
>>>>         inet 192.168.2.234/24 brd 192.168.2.255 scope global eth0.2
>>>>         inet6 fe80::d6ae:52ff:febb:a19d/64 scope link
>>>>            valid_lft forever preferred_lft forever
>>>>     5: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>>>>         link/ether ae:9b:43:09:af:40 brd ff:ff:ff:ff:ff:ff
>>>>     9: qbr256f5ed2-43: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
>>>>         link/ether c6:c0:df:64:c6:99 brd ff:ff:ff:ff:ff:ff
>>>>         inet6 fe80::20e8:b9ff:fe6c:6f55/64 scope link
>>>>            valid_lft forever preferred_lft forever
>>>>     10: qvo256f5ed2-43: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
>>>>         link/ether 76:25:8b:fd:90:3b brd ff:ff:ff:ff:ff:ff
>>>>         inet6 fe80::7425:8bff:fefd:903b/64 scope link
>>>>            valid_lft forever preferred_lft forever
>>>>     11: qvb256f5ed2-43: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbr256f5ed2-43 state UP qlen 1000
>>>>         link/ether c6:c0:df:64:c6:99 brd ff:ff:ff:ff:ff:ff
>>>>         inet6 fe80::c4c0:dfff:fe64:c699/64 scope link
>>>>            valid_lft forever preferred_lft forever
>>>>     13: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>>>>         link/ether be:8c:30:78:35:48 brd ff:ff:ff:ff:ff:ff
>>>>     15: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbr256f5ed2-43 state UNKNOWN qlen 500
>>>>         link/ether fe:16:3e:57:ec:ff brd ff:ff:ff:ff:ff:ff
>>>>         inet6 fe80::fc16:3eff:fe57:ecff/64 scope link
>>>>            valid_lft forever preferred_lft forever
>>>>
>>>>     By the way, cronus is my compute node; I have renamed it to
>>>>     computenode to make it easier to follow.
>>>>
>>>>     On 16 February 2013 22:11, Jean-Baptiste RANSY
>>>>     <jean-baptiste.ransy@xxxxxxxxxx> wrote:
>>>>
>>>>         Download in progress
>>>>
>>>>         Can you send me the output of those commands I forgot:
>>>>
>>>>         Controller Node:
>>>>         $ ip addr show
>>>>         $ cat /proc/sys/net/ipv4/ip_forward
>>>>
>>>>         Compute Node:
>>>>         $ ip addr show
>>>>
>>>>
>>>>         JB
>>>>
>>>>
>>>>
>>>>         On 02/16/2013 10:45 PM, Chathura M. Sarathchandra
>>>>         Magurawalage wrote:
>>>>>         Thanks Ransy,
>>>>>
>>>>>         I have created a tar file with the configuration and log
>>>>>         files in it. Please download it using the following URL. I
>>>>>         have pasted the output of the commands below.
>>>>>
>>>>>         https://www.dropbox.com/s/qyfcsn50060y304/confilesnlogs.tar
>>>>>
>>>>>         *Controller node:*
>>>>>         *root@controller:~# keystone endpoint-list*
>>>>>         +----------------------------------+-----------+--------------------------------------------------+--------------------------------------------------+--------------------------------------------------+
>>>>>         |                id                |   region  |                    publicurl                     |                   internalurl                    |                     adminurl                     |
>>>>>         +----------------------------------+-----------+--------------------------------------------------+--------------------------------------------------+--------------------------------------------------+
>>>>>         | 2c9a1cb0fe8247d9b7716432cf459fe5 | RegionOne | http://192.168.2.225:8774/v2/$(tenant_id)s       | http://192.168.2.225:8774/v2/$(tenant_id)s       | http://192.168.2.225:8774/v2/$(tenant_id)s       |
>>>>>         | 2d306903ed3342a8aaaac7c5680c116f | RegionOne | http://192.168.2.225:9696/                       | http://192.168.2.225:9696/                       | http://192.168.2.225:9696/                       |
>>>>>         | 3848114f120f42bf819bc2443b28ac9e | RegionOne | http://192.168.2.225:8080/v1/AUTH_$(tenant_id)s  | http://192.168.2.225:8080/v1/AUTH_$(tenant_id)s  | http://192.168.2.225:8080/v1                     |
>>>>>         | 4955173b8d9e4d33ae4a5b29dc12c74d | RegionOne | http://192.168.2.225:8776/v1/$(tenant_id)s       | http://192.168.2.225:8776/v1/$(tenant_id)s       | http://192.168.2.225:8776/v1/$(tenant_id)s       |
>>>>>         | d313aa76bf854dde94f33a49a9f0c8ac | RegionOne | http://192.168.2.225:9292/v2                     | http://192.168.2.225:9292/v2                     | http://192.168.2.225:9292/v2                     |
>>>>>         | e5aa4ecf3cbe4dd5aba9b204c74fee6a | RegionOne | http://192.168.2.225:5000/v2.0                   | http://192.168.2.225:5000/v2.0                   | http://192.168.2.225:35357/v2.0                  |
>>>>>         | fba6f790e3b444c890d114f13cd32b37 | RegionOne | http://192.168.2.225:8773/services/Cloud         | http://192.168.2.225:8773/services/Cloud         | http://192.168.2.225:8773/services/Admin         |
>>>>>         +----------------------------------+-----------+--------------------------------------------------+--------------------------------------------------+--------------------------------------------------+
>>>>>
>>>>>         *root@controller:~# ip link show*
>>>>>         1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
>>>>>             link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>>>>         2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
>>>>>             link/ether d4:ae:52:bb:aa:20 brd ff:ff:ff:ff:ff:ff
>>>>>         3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
>>>>>             link/ether d4:ae:52:bb:aa:21 brd ff:ff:ff:ff:ff:ff
>>>>>         4: eth0.2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
>>>>>             link/ether d4:ae:52:bb:aa:20 brd ff:ff:ff:ff:ff:ff
>>>>>         5: br-int: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
>>>>>             link/ether ba:7a:e9:dc:2b:41 brd ff:ff:ff:ff:ff:ff
>>>>>         7: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
>>>>>             link/ether 9a:41:c8:8a:9e:49 brd ff:ff:ff:ff:ff:ff
>>>>>         8: tapf71b5b86-5c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
>>>>>             link/ether 2a:44:a3:d1:7d:f3 brd ff:ff:ff:ff:ff:ff
>>>>>         9: qr-4d088f3a-78: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
>>>>>             link/ether ca:5b:8d:4d:6d:fb brd ff:ff:ff:ff:ff:ff
>>>>>         10: qg-6f8374cb-cb: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
>>>>>             link/ether 0e:7f:dd:3a:80:bc brd ff:ff:ff:ff:ff:ff
>>>>>         27: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>>>>>             link/ether 8a:cf:ec:7c:15:40 brd ff:ff:ff:ff:ff:ff
>>>>>
>>>>>         *root@controller:~# ip route show*
>>>>>         default via 192.168.2.253 dev eth0.2
>>>>>         default via 192.168.2.253 dev eth0.2  metric 100
>>>>>         10.5.5.0/24 dev tapf71b5b86-5c  proto kernel  scope link  src 10.5.5.2
>>>>>         10.5.5.0/24 dev qr-4d088f3a-78  proto kernel  scope link  src 10.5.5.1
>>>>>         10.10.10.0/24 dev eth0  proto kernel  scope link  src 10.10.10.1
>>>>>         192.168.2.0/24 dev eth0.2  proto kernel  scope link  src 192.168.2.225
>>>>>         192.168.2.0/24 dev qg-6f8374cb-cb  proto kernel  scope link  src 192.168.2.151
>>>>>         192.168.2.0/24 dev br-ex  proto kernel  scope link  src 192.168.2.225
>>>>>
>>>>>         *$ ip netns show (Did not return anything)*
>>>>>
>>>>>         *root@controller:~# ovs-vsctl show*
>>>>>         a566afae-d7a8-42a9-aefe-8b0f2f7054a3
>>>>>             Bridge br-tun
>>>>>                 Port "gre-4"
>>>>>                     Interface "gre-4"
>>>>>                         type: gre
>>>>>                         options: {in_key=flow, out_key=flow, remote_ip="10.10.10.12"}
>>>>>                 Port "gre-3"
>>>>>                     Interface "gre-3"
>>>>>                         type: gre
>>>>>                         options: {in_key=flow, out_key=flow, remote_ip="127.0.0.1"}
>>>>>                 Port patch-int
>>>>>                     Interface patch-int
>>>>>                         type: patch
>>>>>                         options: {peer=patch-tun}
>>>>>                 Port br-tun
>>>>>                     Interface br-tun
>>>>>                         type: internal
>>>>>                 Port "gre-1"
>>>>>                     Interface "gre-1"
>>>>>                         type: gre
>>>>>                         options: {in_key=flow, out_key=flow, remote_ip="10.0.0.3"}
>>>>>             Bridge br-ex
>>>>>                 Port br-ex
>>>>>                     Interface br-ex
>>>>>                         type: internal
>>>>>                 Port "qg-6f8374cb-cb"
>>>>>                     Interface "qg-6f8374cb-cb"
>>>>>                         type: internal
>>>>>                 Port "br0"
>>>>>                     Interface "br0"
>>>>>             Bridge br-int
>>>>>                 Port br-int
>>>>>                     Interface br-int
>>>>>                         type: internal
>>>>>                 Port "tapf71b5b86-5c"
>>>>>                     tag: 1
>>>>>                     Interface "tapf71b5b86-5c"
>>>>>                         type: internal
>>>>>                 Port patch-tun
>>>>>                     Interface patch-tun
>>>>>                         type: patch
>>>>>                         options: {peer=patch-int}
>>>>>                 Port "qr-4d088f3a-78"
>>>>>                     tag: 1
>>>>>                     Interface "qr-4d088f3a-78"
>>>>>                         type: internal
>>>>>             ovs_version: "1.4.0+build0"
>>>>>
>>>>>
>>>>>         *Compute node:*
>>>>>
>>>>>         *root@cronus:~# ip link show*
>>>>>         1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
>>>>>             link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>>>>         2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
>>>>>             link/ether d4:ae:52:bb:a1:9d brd ff:ff:ff:ff:ff:ff
>>>>>         3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
>>>>>             link/ether d4:ae:52:bb:a1:9e brd ff:ff:ff:ff:ff:ff
>>>>>         4: eth0.2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
>>>>>             link/ether d4:ae:52:bb:a1:9d brd ff:ff:ff:ff:ff:ff
>>>>>         5: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>>>>>             link/ether ae:9b:43:09:af:40 brd ff:ff:ff:ff:ff:ff
>>>>>         9: qbr256f5ed2-43: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
>>>>>             link/ether c6:c0:df:64:c6:99 brd ff:ff:ff:ff:ff:ff
>>>>>         10: qvo256f5ed2-43: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
>>>>>             link/ether 76:25:8b:fd:90:3b brd ff:ff:ff:ff:ff:ff
>>>>>         11: qvb256f5ed2-43: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbr256f5ed2-43 state UP qlen 1000
>>>>>             link/ether c6:c0:df:64:c6:99 brd ff:ff:ff:ff:ff:ff
>>>>>         13: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>>>>>             link/ether be:8c:30:78:35:48 brd ff:ff:ff:ff:ff:ff
>>>>>         15: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbr256f5ed2-43 state UNKNOWN qlen 500
>>>>>             link/ether fe:16:3e:57:ec:ff brd ff:ff:ff:ff:ff:ff
>>>>>
>>>>>         *root@cronus:~# ip route show*
>>>>>         default via 192.168.2.253 dev eth0.2  metric 100
>>>>>         10.10.10.0/24 dev eth0  proto kernel  scope link  src 10.10.10.12
>>>>>         192.168.2.0/24 dev eth0.2  proto kernel  scope link  src 192.168.2.234
>>>>>
>>>>>         *root@cronus:~# ovs-vsctl show*
>>>>>         d85bc334-6d64-4a13-b851-d56b18ff1549
>>>>>             Bridge br-int
>>>>>                 Port "qvo0e743b01-89"
>>>>>                     tag: 4095
>>>>>                     Interface "qvo0e743b01-89"
>>>>>                 Port "qvo256f5ed2-43"
>>>>>                     tag: 1
>>>>>                     Interface "qvo256f5ed2-43"
>>>>>                 Port patch-tun
>>>>>                     Interface patch-tun
>>>>>                         type: patch
>>>>>                         options: {peer=patch-int}
>>>>>                 Port br-int
>>>>>                     Interface br-int
>>>>>                         type: internal
>>>>>                 Port "qvoee3d4131-2a"
>>>>>                     tag: 4095
>>>>>                     Interface "qvoee3d4131-2a"
>>>>>                 Port "qvocbc816bd-3d"
>>>>>                     tag: 4095
>>>>>                     Interface "qvocbc816bd-3d"
>>>>>             Bridge br-tun
>>>>>                 Port br-tun
>>>>>                     Interface br-tun
>>>>>                         type: internal
>>>>>                 Port "gre-2"
>>>>>                     Interface "gre-2"
>>>>>                         type: gre
>>>>>                         options: {in_key=flow, out_key=flow, remote_ip="10.10.10.1"}
>>>>>                 Port "gre-1"
>>>>>                     Interface "gre-1"
>>>>>                         type: gre
>>>>>                         options: {in_key=flow, out_key=flow, remote_ip="10.0.0.3"}
>>>>>                 Port patch-int
>>>>>                     Interface patch-int
>>>>>                         type: patch
>>>>>                         options: {peer=patch-tun}
>>>>>                 Port "gre-3"
>>>>>                     Interface "gre-3"
>>>>>                         type: gre
>>>>>                         options: {in_key=flow, out_key=flow, remote_ip="127.0.0.1"}
>>>>>             ovs_version: "1.4.0+build0"
>>>>>
>>>>>
>>>>>         Thanks, I appreciate your help.
>>>>>
>>>>>         On 16 February 2013 16:49, Jean-Baptiste RANSY
>>>>>         <jean-baptiste.ransy@xxxxxxxxxx> wrote:
>>>>>
>>>>>             Please provide the files listed below:
>>>>>
>>>>>             Controller Node :
>>>>>             /etc/nova/nova.conf
>>>>>             /etc/nova/api-paste.ini
>>>>>             /etc/quantum/l3_agent.ini
>>>>>             /etc/quantum/quantum.conf
>>>>>             /etc/quantum/dhcp_agent.ini
>>>>>             /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
>>>>>             /etc/quantum/api-paste.ini
>>>>>             /var/log/nova/*.log
>>>>>             /var/log/quantum/*.log
>>>>>
>>>>>             Compute Node :
>>>>>             /etc/nova/nova.conf
>>>>>             /etc/nova/nova-compute.conf
>>>>>             /etc/nova/api-paste.ini
>>>>>             /etc/quantum/quantum.conf
>>>>>             /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
>>>>>             /var/log/nova/*.log
>>>>>             /var/log/quantum/*.log
>>>>>
>>>>>             Plus, the complete output of the following commands:
>>>>>
>>>>>             Controller Node :
>>>>>             $ keystone endpoint-list
>>>>>             $ ip link show
>>>>>             $ ip route show
>>>>>             $ ip netns show
>>>>>             $ ovs-vsctl show
>>>>>
>>>>>             Compute Node :
>>>>>             $ ip link show
>>>>>             $ ip route show
>>>>>             $ ovs-vsctl show
>>>>>
>>>>>             Regards,
>>>>>
>>>>>             Jean-Baptiste RANSY
>>>>>
>>>>>
>>>>>
>>>>>             On 02/16/2013 05:32 PM, Chathura M. Sarathchandra
>>>>>             Magurawalage wrote:
>>>>>>             Hello Jean,
>>>>>>
>>>>>>             Thanks for your reply.
>>>>>>
>>>>>>             I followed the instructions
>>>>>>             in http://docs.openstack.org/folsom/basic-install/content/basic-install_network.html.
>>>>>>             My controller and network node are installed on the
>>>>>>             same physical node.
>>>>>>
>>>>>>             I am using Folsom but without network namespaces.
>>>>>>
>>>>>>             But the page you have provided states that "If you
>>>>>>             run both L3 + DHCP services on the same node, you
>>>>>>             should enable namespaces to avoid conflicts with
>>>>>>             routes:"
>>>>>>
>>>>>>             Yet currently quantum-dhcp-agent and quantum-l3-agent
>>>>>>             are running on the same node?
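>>>>>>
>>>>>>             For reference, enabling them would mean setting
>>>>>>             something like this in both /etc/quantum/l3_agent.ini
>>>>>>             and /etc/quantum/dhcp_agent.ini (a sketch based on
>>>>>>             that doc):
>>>>>>
>>>>>>             use_namespaces = True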
>>>>>>
>>>>>>             Additionally, the controller node serves as a DHCP
>>>>>>             server for the local network (I don't know if that
>>>>>>             makes any difference).
>>>>>>
>>>>>>             Any idea what the problem could be?
>>>>>>
>>>>>>
>>>>>>             On 16 February 2013 16:21, Jean-Baptiste RANSY
>>>>>>             <jean-baptiste.ransy@xxxxxxxxxx> wrote:
>>>>>>
>>>>>>                 Hello Chathura,
>>>>>>
>>>>>>                 Are you using Folsom with Network Namespaces ?
>>>>>>
>>>>>>                 If yes, have a look here:
>>>>>>                 http://docs.openstack.org/folsom/openstack-network/admin/content/ch_limitations.html
>>>>>>
>>>>>>
>>>>>>                 Regards,
>>>>>>
>>>>>>                 Jean-Baptiste RANSY
>>>>>>
>>>>>>
>>>>>>
>>>>>>                 On 02/16/2013 05:01 PM, Chathura M. Sarathchandra
>>>>>>                 Magurawalage wrote:
>>>>>>>                 Hello guys,
>>>>>>>
>>>>>>>                 The problem still exists. Any ideas?
>>>>>>>
>>>>>>>                 Thanks 
>>>>>>>
>>>>>>>                 On 15 February 2013 14:37, Sylvain Bauza
>>>>>>>                 <sylvain.bauza@xxxxxxxxxxxx> wrote:
>>>>>>>
>>>>>>>                     The metadata API allows an instance to fetch
>>>>>>>                     its SSH credentials (the public key, I mean)
>>>>>>>                     when booting.
>>>>>>>                     If a VM is unable to reach the metadata
>>>>>>>                     service, it won't be able to get its public
>>>>>>>                     key, so you won't be able to connect, unless
>>>>>>>                     you specifically go through password
>>>>>>>                     authentication (provided password auth is
>>>>>>>                     enabled in /etc/ssh/sshd_config, which is
>>>>>>>                     not the case with Ubuntu cloud archive).
>>>>>>>                     There is also a side effect: the boot process
>>>>>>>                     is longer, as the instance waits for the curl
>>>>>>>                     timeout (60 sec) before finishing booting up.
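>>>>>>>
>>>>>>>                     Roughly what cloud-init does at boot, which
>>>>>>>                     you can reproduce by hand from inside the
>>>>>>>                     instance:
>>>>>>>
>>>>>>>                     curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key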
>>>>>>>
>>>>>>>                     Re: Quantum, the metadata API is actually
>>>>>>>                     DNAT'd from the network node to the nova-api
>>>>>>>                     node (here with 172.16.0.1 as the internal
>>>>>>>                     management IP):
>>>>>>>
>>>>>>>                     Chain quantum-l3-agent-PREROUTING (1 references)
>>>>>>>                     target  prot opt source     destination
>>>>>>>                     DNAT    tcp  --  0.0.0.0/0  169.254.169.254  tcp dpt:80 to:172.16.0.1:8775
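>>>>>>>
>>>>>>>                     For testing, an equivalent rule could be
>>>>>>>                     added by hand (a sketch; adjust the
>>>>>>>                     destination to your nova-api host):
>>>>>>>
>>>>>>>                     iptables -t nat -A quantum-l3-agent-PREROUTING \
>>>>>>>                         -d 169.254.169.254/32 -p tcp --dport 80 \
>>>>>>>                         -j DNAT --to-destination 172.16.0.1:8775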
>>>>>>>
>>>>>>>
>>>>>>>                     Anyway, the first step is to:
>>>>>>>                     1. grab the console.log
>>>>>>>                     2. access thru VNC to the desired instance
>>>>>>>
>>>>>>>                     Troubleshooting will be easier once that is done.
>>>>>>>
>>>>>>>                     -Sylvain
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>                     On 15/02/2013 14:24, Chathura M.
>>>>>>>                     Sarathchandra Magurawalage wrote:
>>>>>>>
>>>>>>>                         Hello Guys,
>>>>>>>
>>>>>>>                         Not sure if this is the right port,
>>>>>>>                         but these are the results:
>>>>>>>
>>>>>>>                         *Compute node:*
>>>>>>>
>>>>>>>
>>>>>>>                         root@computenode:~# netstat -an | grep 8775
>>>>>>>                         tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN
>>>>>>>
>>>>>>>                         *Controller:*
>>>>>>>
>>>>>>>
>>>>>>>                         root@controller:~# netstat -an | grep 8775
>>>>>>>                         tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN
>>>>>>>
>>>>>>>                         *Additionally, I can't curl
>>>>>>>                         169.254.169.254 from the compute node. I
>>>>>>>                         am not sure if this is related to not
>>>>>>>                         being able to ping the VM.*
>>>>>>>
>>>>>>>
>>>>>>>                         curl -v http://169.254.169.254
>>>>>>>                         * About to connect() to 169.254.169.254 port 80 (#0)
>>>>>>>                         *   Trying 169.254.169.254...
>>>>>>>
>>>>>>>                         Thanks for your help
>>>>>>>
>>>>>>>
>>>>>>>                         -----------------------------------------------------------------------------------------------------------------------------
>>>>>>>                         Chathura Madhusanka Sarathchandra
>>>>>>>                         Magurawalage.
>>>>>>>                         1NW.2.1, Desk 2
>>>>>>>                         School of Computer Science and
>>>>>>>                         Electronic Engineering
>>>>>>>                         University Of Essex
>>>>>>>                         United Kingdom.
>>>>>>>
>>>>>>>                         Email: csarata@xxxxxxxxxxx
>>>>>>>                                chathura.sarathchandra@xxxxxxxxx
>>>>>>>                                77.chathura@xxxxxxxxx
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>                         On 15 February 2013 11:03, Anil Vishnoi
>>>>>>>                         <vishnoianil@xxxxxxxxx> wrote:
>>>>>>>
>>>>>>>                             If you are using the Ubuntu cloud
>>>>>>>                             image, then the only way to log in is
>>>>>>>                             to ssh with the public key. For that
>>>>>>>                             you have to create an ssh key pair and
>>>>>>>                             download the private key. You can
>>>>>>>                             create this key pair using Horizon or
>>>>>>>                             the CLI.
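>>>>>>>
>>>>>>>                             With the CLI that is roughly ("mykey"
>>>>>>>                             and the image are placeholders;
>>>>>>>                             "ubuntu" is the default user of the
>>>>>>>                             Ubuntu cloud image):
>>>>>>>
>>>>>>>                             nova keypair-add mykey > mykey.pem
>>>>>>>                             chmod 600 mykey.pem
>>>>>>>                             nova boot --image <image-id> --flavor 1 \
>>>>>>>                                 --key-name mykey test-vm
>>>>>>>                             ssh -i mykey.pem ubuntu@<floating-ip>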
>>>>>>>
>>>>>>>
>>>>>>>                             On Fri, Feb 15, 2013 at 4:27 PM,
>>>>>>>                             Sylvain Bauza
>>>>>>>                             <sylvain.bauza@xxxxxxxxxxxx> wrote:
>>>>>>>
>>>>>>>
>>>>>>>                                 On 15/02/2013 11:42, Chathura M.
>>>>>>>                                 Sarathchandra Magurawalage wrote:
>>>>>>>
>>>>>>>
>>>>>>>                                     How can I log into the VM
>>>>>>>                                     via VNC? What are the
>>>>>>>                                     credentials?
>>>>>>>
>>>>>>>
>>>>>>>                                 You have multiple ways to get
>>>>>>>                                 VNC access. The easiest one is
>>>>>>>                                 through Horizon. Another is to
>>>>>>>                                 look at the KVM command line for
>>>>>>>                                 the desired instance (on the
>>>>>>>                                 compute node) and check the VNC
>>>>>>>                                 port in use (assuming KVM as the
>>>>>>>                                 hypervisor).
>>>>>>>                                 This is basic knowledge of Nova.
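>>>>>>>
>>>>>>>                                 For example, on the compute node
>>>>>>>                                 (the domain name is illustrative;
>>>>>>>                                 take it from virsh list):
>>>>>>>
>>>>>>>                                 virsh vncdisplay instance-00000001
>>>>>>>                                 # or inspect the -vnc argument:
>>>>>>>                                 ps aux | grep kvm | grep vnc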
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>                                     nova-api-metadata is running
>>>>>>>                                     fine on the compute node.
>>>>>>>
>>>>>>>
>>>>>>>                                 Make sure the metadata port is
>>>>>>>                                 available, using telnet or
>>>>>>>                                 netstat; nova-api can be running
>>>>>>>                                 without listening on the metadata
>>>>>>>                                 port.
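>>>>>>>
>>>>>>>                                 For instance (8775 being the
>>>>>>>                                 default metadata port; the host
>>>>>>>                                 is a placeholder):
>>>>>>>
>>>>>>>                                 netstat -lnt | grep 8775
>>>>>>>                                 telnet <nova-api-host> 8775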
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>                             --
>>>>>>>                             Thanks & Regards
>>>>>>>                             Anil Kumar Vishnoi
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>
>

