
yahoo-eng-team team mailing list archive

[Bug 1417678] Re: TestRemoteInstanceObject randomly failing

 

** Changed in: nova
       Status: Fix Committed => Fix Released

** Changed in: nova
    Milestone: None => kilo-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1417678

Title:
  TestRemoteInstanceObject randomly failing

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  I started hitting this kind of failure locally, but intermittently:

  nova.tests.unit.objects.test_instance.TestRemoteInstanceObject.test_migrate_flavor
  ----------------------------------------------------------------------------------

  Captured traceback:
  ~~~~~~~~~~~~~~~~~~~
      Traceback (most recent call last):
        File "nova/tests/unit/objects/test_instance.py", line 1027, in test_migrate_flavor
          self.assertNotIn('instance_type_id', inst.system_metadata)
        File "/home/mriedem/git/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 392, in assertNotIn
          self.assertThat(haystack, matcher, message)
        File "/home/mriedem/git/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 433, in assertThat
          raise mismatch_error
      MismatchError: {u'old_instance_type_flavorid': u'2', u'old_instance_type_vcpus': u'1', u'instance_type_name': u'm1.small', u'instance_type_extra_hw:numa_cpus.1': u'123', u'old_instance_type_vcpu_weight': u'0', u'instance_type_ephemeral_gb': u'0', u'old_instance_type_swap': u'0', u'old_instance_type_id': u'5', u'old_instance_type_ephemeral_gb': u'0', u'old_instance_type_rxtx_factor': u'1.0', u'instance_type_vcpu_weight': u'0', u'instance_type_root_gb': u'20', u'instance_type_id': u'5', u'old_instance_type_root_gb': u'20', u'instance_type_rxtx_factor': u'1.0', u'instance_type_vcpus': u'1', u'instance_type_memory_mb': u'2048', u'instance_type_swap': u'0', u'old_instance_type_memory_mb': u'2048', u'old_instance_type_name': u'm1.small', u'old_instance_type_extra_hw:numa_cpus.1': u'1', u'instance_type_flavorid': u'2'} matches Contains('instance_type_id')
      Traceback (most recent call last):
      _StringException: Empty attachments:
        stderr
        stdout
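
  For reference, the failing check is a plain testtools assertNotIn on the
  instance's system_metadata dict; the MismatchError above just means the
  'instance_type_id' key is still present after the flavor migration. A
  minimal sketch of the same kind of check (the dict contents here are
  made up for illustration, not taken from the real test):

      import testtools


      class ExampleTest(testtools.TestCase):
          def test_key_should_be_gone(self):
              # Stand-in for inst.system_metadata after migration; the real
              # test expects 'instance_type_id' to no longer be present.
              system_metadata = {'instance_type_id': '5'}
              # Raises MismatchError ("... matches Contains('instance_type_id')")
              # when the key is still there, as in the captured traceback.
              self.assertNotIn('instance_type_id', system_metadata)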

  I'm also seeing it in the community jenkins test runs (31 hits on the
  check queue going back to 1/30):

  http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOm5vdmEudGVzdHMudW5pdC5vYmplY3RzLnRlc3RfaW5zdGFuY2UuVGVzdFJlbW90ZUluc3RhbmNlT2JqZWN0KiBBTkQgbWVzc2FnZTpcIkZBSUxFRFwiIEFORCB0YWdzOlwiY29uc29sZVwiIEFORCBidWlsZF9uYW1lOlwiZ2F0ZS1ub3ZhLXB5dGhvbjI3XCIgIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiJjdXN0b20iLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsiZnJvbSI6IjIwMTUtMDEtMjBUMTc6MjQ6NDcrMDA6MDAiLCJ0byI6IjIwMTUtMDItMDNUMTc6MjQ6NDcrMDA6MDAiLCJ1c2VyX2ludGVydmFsIjoiMCJ9LCJzdGFtcCI6MTQyMjk4NDQ0MjIzNywibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  I believe it coincides with merging this:

  https://review.openstack.org/#/c/135700/

  The failure rates aren't high, but I think it's an issue.

  The failures appear to be only in this remote instance objects test,
  so maybe something is going wrong when the rpc/db interactions are
  slowed down.
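
  A rough way to chase this locally is to just re-run the single test in
  a loop and stop at the first failure, something like the sketch below
  (the tox invocation is an assumption based on the .tox/py27 paths in
  the traceback, not something from this report):

      import subprocess

      TEST = ('nova.tests.unit.objects.test_instance.'
              'TestRemoteInstanceObject.test_migrate_flavor')

      for attempt in range(1, 51):
          # Run just the one test through the existing py27 tox env.
          rc = subprocess.call(['tox', '-e', 'py27', '--', TEST])
          if rc != 0:
              print('Failed on attempt %d' % attempt)
              break
      else:
          print('No failure in 50 attempts')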

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1417678/+subscriptions

