
yahoo-eng-team team mailing list archive

[Bug 1464554] Re: instance failed to spawn with external network


The only way to get external connectivity via a tenant network is to set up a so-called provider network.
It would be a tenant network going through specifically configured bridges and having a fixed CIDR that is part of the global IPv4 pool.

Other than that, VMs can't be plugged into an external network.
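A minimal sketch of such a provider network setup, assuming the ML2 plugin with the Open vSwitch agent; the physical network name (physnet_ext), bridge (br-ex), and addresses are placeholders to adjust for your deployment:

```shell
# Hypothetical config fragment for the L2 agent on the network/compute node,
# e.g. /etc/neutron/plugins/ml2/ml2_conf.ini — maps a physnet name to the
# bridge that carries external traffic:
#
#   [ml2_type_flat]
#   flat_networks = physnet_ext
#
#   [ovs]
#   bridge_mappings = physnet_ext:br-ex

# Create the provider network as a flat network on that physnet:
neutron net-create provider-net --shared \
    --provider:network_type flat \
    --provider:physical_network physnet_ext

# Carve a subnet out of the externally routable pool (placeholder addresses):
neutron subnet-create provider-net 203.0.113.0/24 \
    --name provider-subnet \
    --allocation-pool start=203.0.113.100,end=203.0.113.200 \
    --gateway 203.0.113.1
```

VMs booted on provider-net then get addresses from the external pool directly, with no router or floating IP in between.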

** Changed in: neutron
       Status: New => Incomplete

** Changed in: neutron
       Status: Incomplete => Invalid

** Changed in: nova
       Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464554

Title:
  instance failed to spawn with external network

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I'm trying to launch an instance on an external network, but it ends up
  in a failed status. Instances on internal networks are fine.

  Following is the nova-compute.log from the compute node:

  2015-06-12 15:22:50.899 3121 INFO nova.compute.manager [req-6b9424ce-2eda-469e-9cba-63807a8643c9 28740e72adf04dde88a2b2a1aa701e66 700e680640e0415faf591e950cdb42d0 - - -] [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772] Starting instance...
  2015-06-12 15:22:50.997 3121 INFO nova.compute.claims [-] [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772] Attempting claim: memory 2048 MB, disk 50 GB
  2015-06-12 15:22:50.997 3121 INFO nova.compute.claims [-] [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772] Total memory: 515884 MB, used: 2560.00 MB
  2015-06-12 15:22:50.998 3121 INFO nova.compute.claims [-] [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772] memory limit: 773826.00 MB, free: 771266.00 MB
  2015-06-12 15:22:50.998 3121 INFO nova.compute.claims [-] [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772] Total disk: 1144 GB, used: 50.00 GB
  2015-06-12 15:22:50.998 3121 INFO nova.compute.claims [-] [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772] disk limit not specified, defaulting to unlimited
  2015-06-12 15:22:51.023 3121 INFO nova.compute.claims [-] [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772] Claim successful
  2015-06-12 15:22:51.134 3121 INFO nova.scheduler.client.report [-] Compute_service record updated for ('openstack-kvm1', 'openstack-kvm1')
  2015-06-12 15:22:51.270 3121 INFO nova.scheduler.client.report [-] Compute_service record updated for ('openstack-kvm1', 'openstack-kvm1')
  2015-06-12 15:22:51.470 3121 INFO nova.virt.libvirt.driver [req-02283432-2fd3-4835-a548-8c5bd74f4340 - - - - -] [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772] Creating image
  2015-06-12 15:22:51.760 3121 INFO nova.scheduler.client.report [-] Compute_service record updated for ('openstack-kvm1', 'openstack-kvm1')
  2015-06-12 15:22:51.993 3121 ERROR nova.compute.manager [req-02283432-2fd3-4835-a548-8c5bd74f4340 - - - - -] [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772] Instance failed to spawn
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772] Traceback (most recent call last):
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2442, in _build_resources
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772]     yield resources
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2314, in _build_and_run_instance
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772]     block_device_info=block_device_info)
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2351, in spawn
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772]     write_to_disk=True)
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4172, in _get_guest_xml
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772]     context)
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4043, in _get_guest_config
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772]     flavor, virt_type)
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py", line 374, in get_config
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772]     _("Unexpected vif_type=%s") % vif_type)
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772] NovaException: Unexpected vif_type=binding_failed
  2015-06-12 15:22:51.993 3121 TRACE nova.compute.manager [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772]
  2015-06-12 15:22:51.995 3121 INFO nova.compute.manager [req-6b9424ce-2eda-469e-9cba-63807a8643c9 28740e72adf04dde88a2b2a1aa701e66 700e680640e0415faf591e950cdb42d0 - - -] [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772] Terminating instance
  2015-06-12 15:22:52.002 3121 INFO nova.virt.libvirt.driver [-] [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772] During wait destroy, instance disappeared.
  2015-06-12 15:22:52.015 3121 INFO nova.virt.libvirt.driver [req-02283432-2fd3-4835-a548-8c5bd74f4340 - - - - -] [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772] Deleting instance files /var/lib/nova/instances/8da458b4-c064-47c8-a1bb-aad4e4400772_del
  2015-06-12 15:22:52.017 3121 INFO nova.virt.libvirt.driver [req-02283432-2fd3-4835-a548-8c5bd74f4340 - - - - -] [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772] Deletion of /var/lib/nova/instances/8da458b4-c064-47c8-a1bb-aad4e4400772_del complete

  neutron-server.log from the neutron-server node:

  2015-06-12 15:22:51.437 6583 INFO neutron.wsgi [req-bd04266c-e4a1-4c5b-be8d-52a30e32b46a ] 11.11.176.41 - - [12/Jun/2015 15:22:51] "GET /v2.0/security-groups.json?tenant_id=700e680640e0415faf591e950cdb42d0 HTTP/1.1" 200 1765 0.017019
  2015-06-12 15:22:51.440 6583 INFO neutron.wsgi [req-835c2aab-e3f5-44bb-8586-8c37b6e6b228 ] 11.11.176.35 - - [12/Jun/2015 15:22:51] "GET /v2.0/extensions.json HTTP/1.1" 200 4806 0.002097
  2015-06-12 15:22:51.518 6583 INFO neutron.callbacks.manager [req-523b2d9c-8026-4c9c-9ba0-cfd30833e3b8 ] Notify callbacks for port, after_create
  2015-06-12 15:22:51.518 6583 INFO neutron.callbacks.manager [req-523b2d9c-8026-4c9c-9ba0-cfd30833e3b8 ] Calling callback neutron.db.l3_dvrscheduler_db._notify_l3_agent_new_port
  2015-06-12 15:22:51.528 6583 ERROR neutron.plugins.ml2.managers [req-523b2d9c-8026-4c9c-9ba0-cfd30833e3b8 ] Failed to bind port 68a23b41-a9dd-4d4b-88b0-359834b75f97 on host openstack-kvm1
  2015-06-12 15:22:51.529 6583 ERROR neutron.plugins.ml2.managers [req-523b2d9c-8026-4c9c-9ba0-cfd30833e3b8 ] Failed to bind port 68a23b41-a9dd-4d4b-88b0-359834b75f97 on host openstack-kvm1
  2015-06-12 15:22:51.571 6583 INFO neutron.wsgi [req-523b2d9c-8026-4c9c-9ba0-cfd30833e3b8 ] 11.11.176.41 - - [12/Jun/2015 15:22:51] "POST /v2.0/ports.json HTTP/1.1" 201 928 0.130556
  2015-06-12 15:22:51.594 6583 INFO neutron.wsgi [req-c88f52ef-e3fc-4245-9225-980ad67c8718 ] 11.11.176.41 - - [12/Jun/2015 15:22:51] "GET /v2.0/ports.json?tenant_id=700e680640e0415faf591e950cdb42d0&device_id=8da458b4-c064-47c8-a1bb-aad4e4400772 HTTP/1.1" 200 926 0.019260

  nova-scheduler.log from the nova-controller node:

  2015-06-12 15:22:52.271 2673 INFO nova.filters [req-6b9424ce-2eda-469e-9cba-63807a8643c9 28740e72adf04dde88a2b2a1aa701e66 700e680640e0415faf591e950cdb42d0 - - -] Filter RetryFilter returned 0 hosts

  nova-conductor.log from the nova-controller node:

  2015-06-12 15:22:52.252 1191 ERROR nova.scheduler.utils [req-6b9424ce-2eda-469e-9cba-63807a8643c9 28740e72adf04dde88a2b2a1aa701e66 700e680640e0415faf591e950cdb42d0 - - -] [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772] Error from last host: openstack-kvm1 (node openstack-kvm1): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2219, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2362, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance 8da458b4-c064-47c8-a1bb-aad4e4400772 was re-scheduled: Unexpected vif_type=binding_failed\n']
  2015-06-12 15:22:52.275 1191 WARNING nova.scheduler.utils [req-6b9424ce-2eda-469e-9cba-63807a8643c9 28740e72adf04dde88a2b2a1aa701e66 700e680640e0415faf591e950cdb42d0 - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
  Traceback (most recent call last):

    File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 142, in inner
      return func(*args, **kwargs)

    File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 86, in select_destinations
      filter_properties)

    File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line 80, in select_destinations
      raise exception.NoValidHost(reason=reason)

  NoValidHost: No valid host was found. There are not enough hosts available.

  2015-06-12 15:22:52.276 1191 WARNING nova.scheduler.utils [req-6b9424ce-2eda-469e-9cba-63807a8643c9 28740e72adf04dde88a2b2a1aa701e66 700e680640e0415faf591e950cdb42d0 - - -] [instance: 8da458b4-c064-47c8-a1bb-aad4e4400772] Setting instance to ERROR state.


  The setup for this case is as follows:

  nova-control, neutron, and neutron-node are each installed on a VMware
  virtual machine. nova-control and neutron have eth0 with a static IP
  for management; neutron-node has eth0 for management, eth1 for the
  tunnel, and eth2 for the external network. All the network devices use
  the dvSwitch of VMware ESXi and are in the same VLAN dvPortgroup.

  openstack-kvm1 is a physical host with eth0 for management and eth1
  for the tunnel. eth0 is in a different VLAN from the virtual ones;
  eth1 has a static IP in the same VLAN as neutron/nova-control/neutron-
  node, and the physical switch port is set to trunk mode. All the
  machines mentioned above can ping each other's static IPs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1464554/+subscriptions

