[Bug 1782119] Re: Downtime of volume backed live migration between two compute nodes (different version: liberty-mitaka) is too high
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]
** Changed in: nova
Status: Incomplete => Expired
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1782119
Title:
Downtime of volume backed live migration between two compute nodes
(different version: liberty-mitaka) is too high
Status in OpenStack Compute (nova):
Expired
Bug description:
Hi everyone,
I'm working on upgrading OpenStack from Liberty to Mitaka. I've upgraded my controller to Mitaka, so the Mitaka controller now manages both Liberty and Mitaka compute nodes. After that I live-migrated VMs from a Liberty compute node to a Mitaka compute node. When live-migrating between two compute nodes running different versions, I noticed that the downtime was much higher (30 ICMP packets lost at a 200 ms interval) than between two compute nodes running the same version (5 ICMP packets lost at a 200 ms interval). Summary:
live migration between Liberty and Liberty computes: 5 ICMP packets lost at a 200 ms interval
live migration between Liberty and Mitaka computes: 30 ICMP packets lost at a 200 ms interval
I don't know why this happens.
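Downtime here was measured by pinging the VM at a 200 ms interval while the migration ran and counting the lost probes. The small sketch below reproduces that kind of measurement; the VM address, probe duration, and the exact ping invocation are illustrative assumptions, not taken from the original report.

    #!/usr/bin/env python3
    """Probe a VM's IP every 200 ms during live migration and report lost packets."""
    import subprocess
    import time

    VM_IP = "192.0.2.10"   # hypothetical VM address
    INTERVAL = 0.2         # 200 ms between probes, as in the report
    DURATION = 120         # seconds to keep probing while the migration runs

    def probe(ip):
        """Send one ICMP echo request; return True if a reply arrived."""
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "1", ip],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    def main():
        sent = lost = 0
        deadline = time.monotonic() + DURATION
        while time.monotonic() < deadline:
            sent += 1
            if not probe(VM_IP):
                lost += 1
            time.sleep(INTERVAL)
        # At a 200 ms interval, each lost probe corresponds to roughly 200 ms of downtime.
        print(f"{lost}/{sent} probes lost (~{lost * INTERVAL:.1f}s downtime)")

    if __name__ == "__main__":
        main()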
My environment:
1 Mitaka controller
2 Liberty compute nodes
1 Mitaka compute node
OVS ML2 plugin with DVR
Ceph backend storage
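The migrations in this test were triggered against the Mitaka control plane. A minimal python-novaclient sketch of such a call is shown below; the auth URL, credentials, server name, and target host are hypothetical placeholders, not values from the original report.

    from keystoneauth1 import loading, session
    from novaclient import client

    # Hypothetical credentials and endpoint; substitute real values.
    loader = loading.get_plugin_loader("password")
    auth = loader.load_from_options(
        auth_url="http://controller:5000/v3",
        username="admin",
        password="secret",
        project_name="admin",
        user_domain_name="Default",
        project_domain_name="Default",
    )
    nova = client.Client("2", session=session.Session(auth=auth))

    server = nova.servers.find(name="test-vm")
    # Volume-backed (Ceph) instance, so no block migration is requested.
    nova.servers.live_migrate(server, "mitaka-compute-1",
                              block_migration=False, disk_over_commit=False)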
Thanks
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1782119/+subscriptions