distributed and heterogeneous schedulers
I'm trying to understand how best to implement our architecture-aware scheduler for Diablo:
https://blueprints.launchpad.net/nova/+spec/schedule-instances-on-heterogeneous-architectures
Right now our scheduler is similar in approach to SimpleScheduler, with a few extra filters on the instances and compute_nodes table queries for the cpu_arch and xpu_arch fields we added. For example, for the "-t cg1.4xlarge" GPU instance type, the scheduler reads instance_types.cpu_arch = "x86_64" and instance_types.xpu_arch = "fermi", then filters on the corresponding compute_node and instance fields. http://wiki.openstack.org/HeterogeneousInstanceTypes
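To make that concrete, here is a minimal sketch of the filtering step, not the actual Cactus code; the function name, dict layout, and hostnames are illustrative assumptions on my part.

```python
# Minimal sketch of the extra architecture-aware filtering step, assuming the
# instance type and compute node rows are available as dicts that carry the
# cpu_arch/xpu_arch columns we added. Not the real scheduler code.

def architecture_filter(instance_type, compute_nodes):
    """Keep only compute nodes whose architecture matches the instance type.

    A missing xpu_arch on the instance type (e.g. m1.small) means "no
    accelerator required", so any node passes that check.
    """
    wanted_cpu = instance_type.get('cpu_arch')   # e.g. "x86_64"
    wanted_xpu = instance_type.get('xpu_arch')   # e.g. "fermi" for cg1.4xlarge

    candidates = []
    for node in compute_nodes:
        if wanted_cpu and node.get('cpu_arch') != wanted_cpu:
            continue
        if wanted_xpu and node.get('xpu_arch') != wanted_xpu:
            continue
        candidates.append(node)
    return candidates


# Hypothetical usage: a cg1.4xlarge request should only keep the GPU node.
nodes = [
    {'hostname': 'cpu-node-1', 'cpu_arch': 'x86_64', 'xpu_arch': None},
    {'hostname': 'gpu-node-1', 'cpu_arch': 'x86_64', 'xpu_arch': 'fermi'},
]
cg1_4xlarge = {'name': 'cg1.4xlarge', 'cpu_arch': 'x86_64', 'xpu_arch': 'fermi'}
assert [n['hostname'] for n in architecture_filter(cg1_4xlarge, nodes)] == ['gpu-node-1']
```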
That's OK for Cactus, but going beyond that, I'm struggling to reconcile these different blueprints:
https://blueprints.launchpad.net/nova/+spec/advanced-scheduler
https://blueprints.launchpad.net/nova/+spec/distributed-scheduler
- How is the instance_metadata table used? I see the cpu_arch, xpu_arch, and other fields we added as being in the same class of data as the vcpus, local_gb, or mem_mb fields, which is why I put them in the instances table. Virtualization type is a similar class of data. I think of metadata as looser, less well-defined constraints passed to the scheduler, like "near vol-12345678".
- Will your capabilities scheduler, constraint scheduler, and/or distributed schedulers understand the different hardware resources available on compute nodes?
- Should there be an instance_types_metadata table for things like "cpu_arch", rather than our current approach of adding columns? (A sketch of the two layouts follows this list.)
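To make that last question concrete, here is a hypothetical sketch of the two layouts being compared; neither is the actual Nova schema, and the column names and values are only illustrative.

```python
# Hypothetical sketch of the two schema layouts under discussion, just to make
# the question concrete. Neither is the real Nova schema.
import sqlite3

conn = sqlite3.connect(':memory:')

# (a) Current approach: typed columns added directly to instance_types.
conn.execute("""
    CREATE TABLE instance_types (
        id INTEGER PRIMARY KEY,
        name TEXT,
        vcpus INTEGER,
        memory_mb INTEGER,
        cpu_arch TEXT,          -- e.g. 'x86_64'
        xpu_arch TEXT           -- e.g. 'fermi'
    )
""")

# (b) Alternative: a generic key/value instance_types_metadata table,
# analogous to instance_metadata.
conn.execute("""
    CREATE TABLE instance_types_metadata (
        id INTEGER PRIMARY KEY,
        instance_type_id INTEGER,
        key TEXT,               -- e.g. 'cpu_arch', 'xpu_arch'
        value TEXT
    )
""")

# The same cg1.4xlarge architecture requirements expressed both ways
# (vcpus/memory values are placeholders, not the real flavor definition).
conn.execute("INSERT INTO instance_types VALUES (1, 'cg1.4xlarge', 8, 16384, 'x86_64', 'fermi')")
conn.executemany(
    "INSERT INTO instance_types_metadata (instance_type_id, key, value) VALUES (?, ?, ?)",
    [(1, 'cpu_arch', 'x86_64'), (1, 'xpu_arch', 'fermi')])
```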
As long as we can inject "-t cg1.4xlarge" at one end and have that request routed to a compute node with GPU hardware at the other end, we're not tied to the centralized database implementation.
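For what it's worth, here is a rough sketch of that same requirement expressed against host-reported capabilities rather than database columns; the capability keys and the hosts_satisfying helper are assumptions on my part, not the actual capabilities or distributed scheduler interface.

```python
# Sketch of the routing requirement phrased against host-advertised
# capabilities instead of centralized database columns. The key names and
# reporting format are assumptions, not the real scheduler interface.

def hosts_satisfying(request_spec, host_capabilities):
    """Yield host names whose advertised capabilities cover the request.

    request_spec: dict of hard requirements derived from the instance type,
                  e.g. {'cpu_arch': 'x86_64', 'xpu_arch': 'fermi'}.
    host_capabilities: mapping of host name -> capabilities dict as reported
                       by each compute node.
    """
    for host, caps in host_capabilities.items():
        if all(caps.get(key) == value for key, value in request_spec.items()):
            yield host


capabilities = {
    'cpu-node-1': {'cpu_arch': 'x86_64'},
    'gpu-node-1': {'cpu_arch': 'x86_64', 'xpu_arch': 'fermi'},
}
assert list(hosts_satisfying({'cpu_arch': 'x86_64', 'xpu_arch': 'fermi'},
                             capabilities)) == ['gpu-node-1']
```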
Thanks,
Brian
PS: I sent this to the mailing list a week ago and didn't get a reply; now I can't even find it in the openstack list archive. Anyone else having their posts quietly rejected?