yahoo-eng-team team mailing list archive
Message #23440
[Bug 1378233] [NEW] Provide an option to ignore suspended VMs in the resource count
Public bug reported:
It would be very useful for our use case to have an option that stops
counting suspended machines as consuming resources. The scenario is a
cloud with little memory available that still needs to launch new VMs
while old VMs are suspended. We understand that once the compute node's
memory is full we won't be able to resume those machines, but that is
acceptable for the way we use our cloud.
For example, on a compute node with 8 GB of RAM, launch one VM with
4 GB and another with 2 GB, then suspend them both; one could then
launch a new VM with 4 GB of RAM (the actual memory on the compute node
is free).
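The accounting described above can be sketched as follows. This is an illustrative model, not nova code: `free_ram_mb` and `ignore_suspended` are hypothetical names, and the `power_state` values mirror nova's convention that RUNNING is 1 and SUSPENDED is 7.

```python
# Hypothetical sketch of the requested accounting: with the option on,
# suspended VMs no longer count against a compute node's RAM.
RUNNING, SUSPENDED = 1, 7  # nova power_state convention

def free_ram_mb(total_mb, instances, ignore_suspended=False):
    """Return the RAM (MB) left for scheduling on a host."""
    used = sum(inst["memory_mb"] for inst in instances
               if not (ignore_suspended and inst["power_state"] != RUNNING))
    return total_mb - used

host = [{"memory_mb": 4096, "power_state": SUSPENDED},
        {"memory_mb": 2048, "power_state": SUSPENDED}]
```

With default accounting, `free_ram_mb(8192, host)` reports only 2048 MB free, so a 4 GB VM cannot be scheduled; with `ignore_suspended=True` the full 8192 MB appear free again.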
On Essex we applied the following patch to get this behaviour:
Index: nova/nova/scheduler/host_manager.py
===================================================================
--- nova.orig/nova/scheduler/host_manager.py
+++ nova/nova/scheduler/host_manager.py
@@ -337,6 +337,8 @@ class HostManager(object):
             if not host:
                 continue
             host_state = host_state_map.get(host, None)
+            if instance.get('power_state', 1) != 1:  # power_state.RUNNING
+                continue
             if not host_state:
                 continue
             host_state.consume_from_instance(instance)
We're looking into patching Icehouse for the same behaviour, but this
time we would like to make it a configurable option.
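A minimal sketch of how the essex patch could be gated behind an option. The flag name `ignore_suspended` and the standalone function are hypothetical; a real change would plumb a nova config option (via oslo.config) into the host manager loop and use nova's `power_state` constants rather than the literal 1.

```python
# Sketch: the essex patch's skip logic, made conditional on a flag.
POWER_STATE_RUNNING = 1  # nova power_state.RUNNING

def consume_hosts(host_state_map, instances, ignore_suspended=False):
    """Return (host_state, instance) pairs whose resources should be
    consumed, optionally skipping non-running instances."""
    consumed = []
    for instance in instances:
        host = instance.get("host")
        if not host:
            continue
        host_state = host_state_map.get(host)
        if not host_state:
            continue
        # Only skip suspended/stopped instances when the option is enabled.
        if (ignore_suspended and
                instance.get("power_state",
                             POWER_STATE_RUNNING) != POWER_STATE_RUNNING):
            continue
        consumed.append((host_state, instance))
    return consumed
```

With the flag off, behaviour is identical to the unpatched code, which keeps the change safe to backport.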
** Affects: nova
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1378233
Title:
Provide an option to ignore suspended VMs in the resource count
Status in OpenStack Compute (Nova):
New
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1378233/+subscriptions