

Re: Queue Service, next steps

 

Hi Todd,

That's the multicast example; for a normal 1-1 queue, look at the
first example:

Worker: POST /account/queue?wait=60&hide=60&detail=all
  (long-polling worker, request blocks until a message is ready)
Client: PUT /account/queue
  (message inserted, returns unique id that was created)
Worker: Return from blocking POST request with new message and process
  it. The message is returned to only one worker when the hide option
  is given, as this is an atomic operation.
Worker: DELETE /account/queue/id

You could have multiple workers waiting with the same POST request, and
the server will return the message to only one of them. There are
options we can look at later regarding worker distribution, affinity, etc.
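
As a rough sketch of what a worker and client might look like against
this API (Python with the requests library; the endpoint, account and
queue names, and the response format are assumptions for illustration,
not the actual contract):

import requests

BASE = "http://localhost:8080"           # hypothetical service endpoint
QUEUE = BASE + "/my-account/my-queue"    # hypothetical account/queue names

def put_message(body):
    # Client side: insert a message; the unique id created for it is
    # assumed to come back in the response body.
    resp = requests.put(QUEUE, data=body)
    resp.raise_for_status()
    return resp.text.strip()

def process(body):
    print("got message:", body)

def worker_loop():
    # Worker side: long-poll for up to 60 seconds; 'hide' makes the
    # delivery atomic, so only one waiting worker receives a message.
    while True:
        resp = requests.post(QUEUE,
                             params={"wait": 60, "hide": 60, "detail": "all"})
        if resp.status_code == 204 or not resp.content:
            continue                     # timed out with nothing; poll again
        msg = resp.json()                # assumed: JSON with 'id' and 'body'
        process(msg["body"])
        requests.delete(QUEUE + "/" + msg["id"])

Running two copies of that worker loop against the same queue should
show the behavior above: each message gets hidden for, and deleted by,
only one of them.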

I currently have it using POST instead of GET since it is really a
modification operation, not a read-only operation.

-Eric

> How do you manage the case where you want many workers to service a
> queue, and use long-polling to wait for a message, but you don't want
> the message multicast to every worker?  It looks like from the wiki
> examples:
> 
> Worker1: GET /account/queue?wait=60
> Worker2: GET /account/queue?wait=60
> Client: PUT /account/queue/id1?ttl=60
> Worker1: Return from blocking GET request with message id1
> Worker2: Return from blocking GET request with message id1
> Worker1: GET /account/queue?wait=60&last=id1
> Worker2: GET /account/queue?wait=60&last=id1
> 
> I'd like to have worker1 receive the message with id1, and worker2
> receive the next message (id2), even though they are both long-polling
> when id1 comes in.  Will this be supported, or does long polling on the
> same queue always yield multicast?
> 
> -todd[1]


