
yahoo-eng-team team mailing list archive

[Bug 1377447] Re: pci_request_id break the upgrade from icehouse to Juno


Reviewed:  https://review.openstack.org/127245
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=7c9aa6da92805f20083203a6ec8f93b1b592fc13
Submitter: Jenkins
Branch:    proposed/juno

commit 7c9aa6da92805f20083203a6ec8f93b1b592fc13
Author: He Jie Xu <xuhj@xxxxxxxxxxxxxxxxxx>
Date:   Sun Oct 5 00:20:01 2014 +0800

    Fix pci_request_id break the upgrade from icehouse to juno
    
    Commit a8a5d44c8aca218f00649232c2b8a46aee59b77e added pci_request_id
    as an item in the requested_networks tuple, but the Icehouse code
    assumes only three items in the tuple.
    
    This patch filters pci_request_id out of the tuple.
    
    Cherry-Pick from:
    https://review.openstack.org/#/c/126144/6
    
    Change-Id: I991e1c68324fe92fac647583f3ec8f6aec637913
    Closes-Bug: #1377447
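
The fix described in the commit message can be sketched as follows. This is a minimal illustration, not the actual nova patch; the helper name `to_legacy_tuples` and the sample values are hypothetical.

```python
# Hypothetical sketch of the downgrade step: strip pci_request_id (the
# fourth element added in Juno) so each network request is the 3-tuple
# (network_id, fixed_ip, port_id) that Icehouse compute nodes expect.

def to_legacy_tuples(requested_networks):
    """Filter pci_request_id out of each 4-tuple network request."""
    return [(network_id, fixed_ip, port_id)
            for network_id, fixed_ip, port_id, _pci_request_id
            in requested_networks]

# Example: a Juno-style request with a pci_request_id appended.
juno_requests = [("net-uuid", "10.0.0.5", "port-uuid", "pci-req-1")]
legacy = to_legacy_tuples(juno_requests)
# Each tuple now has three items and unpacks cleanly on Icehouse.
```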


** Changed in: nova
       Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1377447

Title:
  pci_request_id break the upgrade from icehouse to Juno

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The RPC API build_and_run_instance should be backward compatible with
  Icehouse: the code at
  https://github.com/openstack/nova/blob/master/nova/compute/rpcapi.py#L887
  converts the requested_networks object into tuples that the Icehouse
  code can understand.

  However, commit a8a5d44c8aca218f00649232c2b8a46aee59b77e changed the
  requested_networks parameter by adding pci_request_id to each item.
  When the object is converted to tuples, pci_request_id is included as
  well, so each tuple now has four items:
  https://github.com/openstack/nova/blob/master/nova/objects/network_request.py#L37

  The old code accepts only three items per tuple:
  https://github.com/openstack/nova/blob/2014.1/nova/network/neutronv2/api.py#L237

  As a result, RPC API backward compatibility is broken.
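
  The failure mode is ordinary tuple unpacking: the Icehouse loop binds
  each request to exactly three names, so a four-item tuple raises
  ValueError. A standalone illustration (not nova code; the sample
  values are made up):

```python
# The Icehouse-era loop expects 3-tuples, but Juno now sends 4-tuples
# that include pci_request_id as a fourth element.
requested_networks = [("net-uuid", "10.0.0.5", "port-uuid", "pci-req-1")]

try:
    # Unpacking a 4-tuple into three names fails, which is exactly the
    # "too many values to unpack" error seen in the traceback below.
    for network_id, fixed_ip, port_id in requested_networks:
        pass
except ValueError as exc:
    print(exc)
```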

  When a Juno controller boots an instance on an Icehouse compute node, it fails with the error below:
  2014-10-04 21:08:17.455 ERROR nova.compute.manager [req-58b84295-ce36-479f-b50e-dfe4f86cc1d9 admin demo] [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4] Instance failed to spawn
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4] Traceback (most recent call last):
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]   File "/opt/stack/nova/nova/compute/manager.py", line 2014, in _build_resources
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]     yield resources
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]   File "/opt/stack/nova/nova/compute/manager.py", line 1917, in _build_and_run_instance
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]     block_device_info=block_device_info)
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2246, in spawn
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]     admin_pass=admin_password)
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2677, in _create_image
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]     content=files, extra_md=extra_md, network_info=network_info)
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]   File "/opt/stack/nova/nova/api/metadata/base.py", line 165, in __init__
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]     ec2utils.get_ip_info_for_instance_from_nw_info(network_info)
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]   File "/opt/stack/nova/nova/api/ec2/ec2utils.py", line 147, in get_ip_info_for_instance_from_nw_info
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]     fixed_ips = nw_info.fixed_ips()
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]   File "/opt/stack/nova/nova/network/model.py", line 407, in _sync_wrapper
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]     self.wait()
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]   File "/opt/stack/nova/nova/network/model.py", line 439, in wait
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]     self[:] = self._gt.wait()
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]   File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 168, in wait
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]     return self._exit_event.wait()
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]   File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 124, in wait
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]     current.throw(*self._exc)
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]   File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 207, in main
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]     result = function(*args, **kwargs)
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]   File "/opt/stack/nova/nova/compute/manager.py", line 1510, in _allocate_network_async
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]     dhcp_options=dhcp_options)
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]   File "/opt/stack/nova/nova/network/neutronv2/api.py", line 238, in allocate_for_instance
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4]     for network_id, fixed_ip, port_id in requested_networks:
  2014-10-04 21:08:17.455 TRACE nova.compute.manager [instance: c643eff5-ffd0-4eea-98a0-bce55108b0a4] ValueError: too many values to unpack

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1377447/+subscriptions
