[Bug 1633990] [NEW] resize or migrate instance will fail if two compute hosts are set to different rbd pools
Public bug reported:
We are now facing a nova operational issue: we set a different ceph rbd pool for each nova compute node within one availability zone. For instance:
(1) compute-node-1 is in az1 and sets images_rbd_pool=pool1
(2) compute-node-2 is in az1 and sets images_rbd_pool=pool2
This setup normally works fine.
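For reference, the relevant nova.conf sections look roughly like this (the images_type/images_rbd_ceph_conf lines and the file path are just the usual rbd-backed setup, not something specific to this report):

  # /etc/nova/nova.conf on compute-node-1
  [libvirt]
  images_type = rbd
  images_rbd_pool = pool1
  images_rbd_ceph_conf = /etc/ceph/ceph.conf

  # /etc/nova/nova.conf on compute-node-2
  [libvirt]
  images_type = rbd
  images_rbd_pool = pool2
  images_rbd_ceph_conf = /etc/ceph/ceph.conf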
But we hit a problem when resizing/migrating an instance. For example, when we try to resize instance-1, which originally lives on compute-node-1, nova runs its normal scheduling procedure; assume nova-scheduler picks compute-node-2 as the destination. Nova then fails with the following error:
http://paste.openstack.org/show/585540/
The exception occurs because, on compute-node-2, nova cannot find the instance-1 disk: it only looks in pool2 while the disk lives in pool1. So is there a way nova can handle this? Cinder handles something similar: a cinder volume has a host attribute like:
host_name@backend_name#pool_name, i.e. the pool is encoded in the volume's host field.
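For example, on a ceph-backed volume (the host and pool names here are only illustrative):

  $ cinder show <volume-id> | grep os-vol-host-attr:host
  | os-vol-host-attr:host | cinder-host-1@ceph#pool1 |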
The reason we use such a setup is that, when we expand storage capacity,
we want to avoid the impact of a ceph rebalance.
One solution I found is the AggregateInstanceExtraSpecsFilter, which works by matching Host Aggregate metadata against flavor metadata (extra specs).
We tried to create Host Aggregates like (example commands are sketched after the lists below):
az1-pool1 with host compute-node-1, and metadata {ceph_pool: pool1};
az1-pool2 with host compute-node-2, and metadata {ceph_pool: pool2};
and create flavors like:
flavor1-pool1 with metadata {ceph_pool: pool1};
flavor2-pool1 with metadata {ceph_pool: pool1};
flavor1-pool2 with metadata {ceph_pool: pool2};
flavor2-pool2 with metadata {ceph_pool: pool2};
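As a rough sketch, that setup would be created with commands like the following (the flavor sizes are placeholders, AggregateInstanceExtraSpecsFilter still has to be added to the scheduler filter list in nova.conf, and the aggregate_instance_extra_specs: scope on the flavor key is optional but avoids clashes with other filters):

  $ openstack aggregate create --zone az1 az1-pool1
  $ openstack aggregate add host az1-pool1 compute-node-1
  $ openstack aggregate set --property ceph_pool=pool1 az1-pool1

  $ openstack aggregate create --zone az1 az1-pool2
  $ openstack aggregate add host az1-pool2 compute-node-2
  $ openstack aggregate set --property ceph_pool=pool2 az1-pool2

  $ openstack flavor create --vcpus 1 --ram 2048 --disk 20 flavor1-pool1
  $ openstack flavor set --property aggregate_instance_extra_specs:ceph_pool=pool1 flavor1-pool1

  $ openstack flavor create --vcpus 1 --ram 2048 --disk 20 flavor1-pool2
  $ openstack flavor set --property aggregate_instance_extra_specs:ceph_pool=pool2 flavor1-pool2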
But this may introduce a new issue with instance creation: which flavor
should be used? The business/application layer seems to need its own
flavor-selection logic.
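To illustrate what I mean (a minimal, hypothetical application-side sketch in Python, following the flavor naming scheme above; none of this is existing nova code):

  # Map a generic flavor "size" plus a target ceph pool to the
  # pool-specific flavor that the scheduler filter expects.
  FLAVORS_BY_POOL = {
      "pool1": {"flavor1": "flavor1-pool1", "flavor2": "flavor2-pool1"},
      "pool2": {"flavor1": "flavor1-pool2", "flavor2": "flavor2-pool2"},
  }

  def pick_flavor(requested_size, pool):
      try:
          return FLAVORS_BY_POOL[pool][requested_size]
      except KeyError:
          raise ValueError("no flavor for size %s in pool %s"
                           % (requested_size, pool))

  # e.g. pick_flavor("flavor1", "pool2") -> "flavor1-pool2"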
So, finally, I want to ask: is there a best practice for using
multiple ceph rbd pools within one availability zone?
** Affects: nova
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1633990
Title:
resize or migrate instance will fail if two compute hosts are set to
different rbd pools
Status in OpenStack Compute (nova):
New
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1633990/+subscriptions