
openstack-volume team mailing list archive

Re: Basic volume-type aware scheduler

 

Hi,

I had one question about the proposal. From what I understand, it seems to assume that different types of storage backends (the actual infrastructure that houses the volume) will be controlled by different types of volume drivers (or at least by different nodes). Is this a good assumption to make?

Instead, do you think scheduling on the basis of backends rather than nodes would be a better idea? The reason I find this more logical is that, in the case of compute, we schedule on the basis of a node because the node actually houses (runs) the VM. For volumes, the node is simply the brain, while some real storage that the node can reach is where the volume is housed. Each driver can report whatever opaque key/value pairs it wishes, and plug in rules, just as you mentioned, to help the scheduler make this decision.
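
To make this concrete, here is a rough sketch of what I mean (purely illustrative; the structures, keys, and function names are mine, not from the blueprint or from Nova's scheduler code):

# A driver reports the backends it can reach, each with opaque
# key/value capabilities attached to the backend rather than the node.
reported_capabilities = {
    "backend-sata-1": {"protocol": "iscsi", "media": "sata", "free_gb": 2048},
    "backend-ssd-1":  {"protocol": "iscsi", "media": "ssd",  "free_gb": 512},
}

def rule_matches(required, capabilities):
    """Plug-in rule: every key/value pair the request requires must be reported."""
    return all(capabilities.get(key) == value for key, value in required.items())

def pick_backend(required, reported_capabilities):
    """Pick a backend (not just a node) whose capabilities satisfy the request."""
    for backend, capabilities in reported_capabilities.items():
        if rule_matches(required, capabilities):
            return backend
    raise LookupError("no backend satisfies the requested volume type")

# A request for an SSD-backed volume lands on backend-ssd-1.
print(pick_backend({"media": "ssd"}, reported_capabilities))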

In the absence of this, if we have symmetric volume drivers that can reach most of the storage backends, we end up in a situation where the scheduler has not really shortlisted anything and has left another round of scheduling to the driver. This is especially true if at some point we support multiple drivers per node, as you suggested.

Everything I said can certainly be accommodated in Vladimir's current proposal by doing two rounds of scheduling (pick a node, then pick storage from that node). What I am asking is: if nodes in general need the ability to figure out which backend to create a volume on, why not push that into the generic scheduler itself?
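
For comparison, a minimal sketch of the single-pass alternative (again with hypothetical names and structures): if each node reports which backends it can reach, the generic scheduler can rank (node, backend) pairs directly, rather than picking a node first and leaving a second round of selection to the driver.

# Hypothetical sketch: nodes report which backends they can reach, so the
# generic scheduler chooses node and backend together in one pass.
nodes = {
    "volume-node-1": ["backend-sata-1", "backend-ssd-1"],
    "volume-node-2": ["backend-sata-1"],  # symmetric: it reaches shared storage too
}

backend_capabilities = {
    "backend-sata-1": {"media": "sata", "free_gb": 2048},
    "backend-ssd-1":  {"media": "ssd",  "free_gb": 512},
}

def candidate_pairs(nodes, backend_capabilities, required):
    """Yield every (node, backend) pair whose backend satisfies the request."""
    for node, backends in nodes.items():
        for backend in backends:
            capabilities = backend_capabilities[backend]
            if all(capabilities.get(k) == v for k, v in required.items()):
                yield node, backend

# The scheduler picks both the node and the backend in a single decision.
print(next(candidate_pairs(nodes, backend_capabilities, {"media": "ssd"})))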

Thanks,
Renuka.

-----Original Message-----
From: openstack-volume-bounces+renuka.apte=citrix.com@xxxxxxxxxxxxxxxxxxx [mailto:openstack-volume-bounces+renuka.apte=citrix.com@xxxxxxxxxxxxxxxxxxx] On Behalf Of Armando Migliaccio
Sent: Wednesday, October 19, 2011 9:06 AM
To: Reddin, Tim (Cloud Services); Chuck Thier
Cc: openstack-volume@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Openstack-volume] Basic volume-type aware scheduler

I agree with Tim. 

As I understand it, Vlad is talking about policies and Chuck is talking about mechanisms. Those are not to be confused, but rather kept separate. Chuck's proposal may turn out to be simpler, but definitely not more flexible, because the system's implementation details would then dictate the policies according to which decisions are made... unless I am completely missing the point.
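
For reference, my reading of the mechanism in Chuck's proposal is roughly the following (a hypothetical sketch; the type names and driver classes are illustrative, not from the blueprint):

# Operator-defined volume types, similar to VM flavors, each mapped
# directly to a driver; the scheduler simply routes by this table.
VOLUME_TYPES = {
    "gold":   "SolidFireDriver",
    "silver": "LVMVolumeDriver",
}

def route(volume_type, create_request):
    """Look up the driver for the requested type and forward the request."""
    try:
        driver = VOLUME_TYPES[volume_type]
    except KeyError:
        raise ValueError("unknown volume type: %s" % volume_type)
    # In a real deployment this would be an RPC cast to the node running
    # that driver; here we only return the routing decision.
    return driver, create_request

print(route("gold", {"size_gb": 100}))

# The policy (which storage a "gold" volume lands on) is fixed by this
# static mapping, which is why it trades flexibility for simplicity.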

Armando

> -----Original Message-----
> From: openstack-volume-bounces+armando.migliaccio=citrix.com@xxxxxxxxxxxxxxxxxxx
> [mailto:openstack-volume-bounces+armando.migliaccio=citrix.com@xxxxxxxxxxxxxxxxxxx]
> On Behalf Of Reddin, Tim (Cloud Services)
> Sent: 19 October 2011 16:14
> To: Chuck Thier
> Cc: openstack-volume@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Openstack-volume] Basic volume-type aware scheduler
> 
> I think we need to differentiate between what we present to the
> consumer (which may be a simple set of flavors) and the attributes
> which the scheduler uses to make selection decisions.
> 
> If I am reading the proposal correctly, a driver can advertise a
> (mostly) opaque set of attributes which the scheduler can use to make
> placement decisions.
> 
> Doesn't this kind of approach facilitate the kind of placement
> capabilities that Renuka was seeking, without introducing any topology
> or vendor-specific awareness?
> 
> In addition, won't the maintenance of these kinds of attributes
> facilitate an admin API in a vendor-agnostic way?
> 
> Tim
> 
> 
> On 19 Oct 2011, at 15:52, "Chuck Thier" <cthier@xxxxxxxxx> wrote:
> 
> > Hi Vladimir,
> >
> > I agree that we need a volume-type aware scheduler, and thanks for 
> > taking this on.  I had envisioned it a bit differently, though.  I was
> > thinking that the cluster operator would define the volume types 
> > (similar to how they define vm flavors).  Each type would have a 
> > mapping to a driver, and the scheduler would use this mapping to 
> > determine which driver to send the incoming request to.
> >
> > I think this makes things simpler and more flexible.  For example, an
> > operator may not want to expose every capability of some storage 
> > system that they have added.
> >
> > --
> > Chuck
> >
> >
> > On Tue, Oct 18, 2011 at 2:36 PM, Vladimir Popovski 
> > <vladimir@xxxxxxxxxxxxxxxxx> wrote:
> >> Hi All,
> >>
> >>
> >>
> >> I’ve registered a new blueprint for a volume-type aware scheduler:
> >>
> >> https://blueprints.launchpad.net/nova/+spec/volume-type-scheduler
> >>
> >>
> >>
> >> Please take a look at the specification attached to it, which explains the
> >> main principles (feel free to add/change parts of it).
> >>
> >>
> >>
> >> Regards,
> >>
> >> -Vladimir
> >>
> >>
> >>
> >>
> >>
> >
--
Mailing list: https://launchpad.net/~openstack-volume
Post to     : openstack-volume@xxxxxxxxxxxxxxxxxxx
Unsubscribe : https://launchpad.net/~openstack-volume
More help   : https://help.launchpad.net/ListHelp
