openstack team mailing list archive - Message #02264
Re: Extending Openstack's scheduling capabilities
That is great news!
I'll be on the lookout for the distributed scheduler code :o).
Thank you,
Luis Miguel Silva
On May 6, 2011, at 9:10 AM, Ed Leafe <ed.leafe@xxxxxxxxxxxxx> wrote:
> On May 6, 2011, at 10:07 AM, Luis Miguel Silva wrote:
>
>> We have three types of actions that we need to integrate:
>> - handling new instance requests (I think we can do this by
>> overriding the schedule() function, just as Sandi suggested!)
>> - querying environmental information (which I also think is
>> possible, according to the information I've read, since the scheduler
>> component should have access to a database with up-to-date status
>> information on all the compute nodes)
>> - dynamically changing the environment on demand (not when new
>> instances are launched, but simply because something changed and an
>> action must be taken).
>
> You have described exactly the three pieces that we have found to be common in every scheduler implementation that has been requested or proposed:
>
> 1) Determining the state of various potential hosts for a new instance
> 2) Eliminating hosts that don't meet basic requirements for the new instance
> 3) Weighting the qualified hosts to select the best candidate
>
> This is the basis for the architecture of the distributed scheduler. Each of these steps will be individually customizable in the distributed scheduler code that will be proposed for merging into trunk in a few weeks. Several implementations for handling each part of the selection process were shown at the Design Summit last week; expect more to follow!
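
[Editor's note: the three-step flow Ed describes (gather host state, filter out unqualified hosts, weigh the survivors) can be sketched as follows. This is an illustrative toy, not Nova's actual scheduler API; the `Host` class, field names, and `schedule()` signature are all hypothetical.]

```python
from dataclasses import dataclass

@dataclass
class Host:
    """Hypothetical snapshot of one compute node's state (step 1's output)."""
    name: str
    free_ram_mb: int
    free_disk_gb: int

def passes_filters(host: Host, ram_mb: int, disk_gb: int) -> bool:
    # Step 2: eliminate hosts that don't meet the instance's basic requirements.
    return host.free_ram_mb >= ram_mb and host.free_disk_gb >= disk_gb

def weigh(host: Host) -> int:
    # Step 3: a simple example weighting -- prefer the host with the most free RAM.
    return host.free_ram_mb

def schedule(hosts: list[Host], ram_mb: int, disk_gb: int) -> Host:
    # Step 1 (querying host state) is assumed already done; `hosts` is that snapshot.
    candidates = [h for h in hosts if passes_filters(h, ram_mb, disk_gb)]
    if not candidates:
        raise RuntimeError("no host satisfies the request")
    return max(candidates, key=weigh)

hosts = [Host("node1", 2048, 40), Host("node2", 8192, 100), Host("node3", 1024, 10)]
print(schedule(hosts, ram_mb=2048, disk_gb=20).name)  # node2
```

Swapping in a different filter or weighting function is exactly the kind of per-step customization the distributed scheduler is meant to allow.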
>
>
> -- Ed Leafe
>