
openstack team mailing list archive

Re: instance metadata timeout

 

Hi Naveen,

There are probably a couple of things going on here.

1) When using Quantum, L3 forwarding + NAT are actually handled by
the quantum-l3-agent, not nova-network (in fact, you shouldn't run
nova-network at all when you're using Quantum in Folsom).  You should
make sure l3_agent.ini has metadata_ip and metadata_port
configured to map to your nova-api server and port.
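
A minimal l3_agent.ini fragment might look like the following (the
address 10.0.0.10 is a placeholder; substitute whatever host actually
runs nova-api in your deployment):

```ini
# l3_agent.ini (Folsom): point metadata requests at the nova-api host.
# 10.0.0.10 is a placeholder for your nova-api server's address.
metadata_ip = 10.0.0.10
metadata_port = 8775
```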

2) The quantum-l3-agent supports the creation of many "routers", each
with potentially overlapping IPs, on a single Linux host using network
namespaces.  This is the default configuration, but it creates several
complications when working with nova's metadata server, which assumes
a pretty simple network model with a single router.  quantum-l3-agent
can run in a mode more akin to nova's L3 model by disabling
namespaces: set use_namespaces=False in l3_agent.ini.  Beware that
doing so means any route configuration done by the quantum-l3-agent
will affect data forwarding for the entire host (i.e., it may steal
your default route).  Running quantum-l3-agent with namespaces
disabled and running nova-api on the same host should map to the more
traditional nova-network setup.
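
Concretely, that is a one-line change in l3_agent.ini:

```ini
# l3_agent.ini: disable per-router network namespaces so the agent
# behaves more like nova-network's single-router L3 model.
# Caution: the agent's routes then apply to the whole host.
use_namespaces = False
```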

Are you using devstack?  If so, I think there are some changes we
should make to devstack to make it easier to use Quantum in a fashion
that maps to traditional nova networking for L3 + NAT.  I've heard
others mention that they are confused about why the default Quantum
setup does not let them SSH directly to VMs via their fixed IPs; the
use of namespaces is the root cause of that as well.  I'll post a
possible patch for this soon.
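
As a quick sanity check before digging into NAT rules, something like
this hypothetical snippet (assuming nova-api's metadata listener is on
the default port 8775 on the local host) tells you whether anything is
listening at all:

```shell
# Hypothetical check: is anything listening on the metadata API port?
# Assumes nova-api runs locally with the default metadata_listen_port (8775).
# Uses bash's /dev/tcp so no extra tools are required.
if timeout 2 bash -c 'exec 3<>/dev/tcp/127.0.0.1/8775' 2>/dev/null; then
  echo "metadata API reachable on 127.0.0.1:8775"
else
  echo "nothing listening on 127.0.0.1:8775 - check nova-api"
fi
```

If nothing is listening locally, the NAT rule for 169.254.169.254 has
nowhere to send the request, and the instance-side wget will time out
exactly as shown below.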

Dan



On Wed, Sep 12, 2012 at 12:02 PM, Naveen Joy (najoy) <najoy@xxxxxxxxx> wrote:
> Hi All,
>
> My instances are timing out while obtaining their meta-data. They are being
> spawned on the same controller node on which I am running the nova-network and
> nova-api services. The networks are being provisioned through the Quantum V2
> API. I have enabled meta-data in my nova.conf. Any thoughts on how to
> resolve this issue? Thanks.
>
> cloud-setup: checking http://169.254.169.254/2009-04-04/meta-data/instance-id
> wget: can't connect to remote host (169.254.169.254): Connection timed out
> cloud-setup: failed 1/30: up 1.27. request failed
> wget: can't connect to remote host (169.254.169.254): Connection timed out
> cloud-setup: failed 2/30: up 191.69. request failed
> wget: can't connect to remote host (169.254.169.254): Connection timed out
> cloud-setup: failed 3/30: up 382.15. request failed
> wget: can't connect to remote host (169.254.169.254): Connection timed out
> cloud-setup: failed 4/30: up 572.61. request failed
> wget: can't connect to remote host (169.254.169.254): Connection timed out
> cloud-setup: failed 5/30: up 763.08. request failed
> wget: can't connect to remote host (169.254.169.254): Connection timed out
> cloud-setup: failed 6/30: up 953.54. request failed
>
> sudo grep -i metadata /etc/nova/nova.conf
> enabled_apis=ec2,osapi_compute,osapi_volume,metadata
> metadata_host=$my_ip
> #### (StrOpt) the ip for the metadata api server
> metadata_port=8775
> #### (IntOpt) the port for the metadata api port
> # quota_metadata_items=128
> #### (IntOpt) number of metadata items allowed per instance
> metadata_manager=nova.api.manager.MetadataManager
> #### (StrOpt) OpenStack metadata service manager
> metadata_listen=0.0.0.0
> #### (StrOpt) IP address for metadata api to listen
> metadata_listen_port=8775
> #### (IntOpt) port for metadata api to listen
> # metadata_workers=0
> #### (IntOpt) Number of workers for metadata API
>
> _______________________________________________
> Mailing list: https://launchpad.net/~openstack
> Post to     : openstack@xxxxxxxxxxxxxxxxxxx
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>



-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~~~~~~~~~~~~~~~~~~~~~~~~~

