multi-touch-dev team mailing list archive

Re: Geis/Grail architecture

 

On Tue, 2010-11-23 at 11:42 +0100, Denis Dzyubenko wrote:
> 
> Sure, my question was more like this: after the recognizer has decided
> which gesture it is, and after the gesture has been delivered to and
> accepted by the client, do we still need to send the subsequent gesture
> update events through the whole stack, including the window manager?
> An alternative might be to have clients start recognizing gestures on
> the client side at that point.
> 
> That is, the window manager would take care of contexts and priorities
> ("system gestures first"), and then, as soon as the decision has been
> made, the client could take over and recognize gestures from the touch
> events by itself.

A distributed gesture recognizer is harder to get right and easier to
get wrong.  We briefly considered it, but quickly filed it on the "let's
get the simpler design working quickly and resort to that if we have to"
shelf.

> In utouch there is an extra library - geis - that hides implementation
> details, so it might be possible to change the way the recognition
> works later.

Yes, that was the point of geis.
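
For the record, a client only ever talks to geis in terms of gesture
subscriptions and callbacks, so the recognizer behind it can be moved or
replaced without the client changing.  A rough sketch of what that looks
like, assuming the GEIS v1 C API (the struct members, constants, and
callback signature below are from memory and may not match the shipped
geis.h exactly):

    /* Minimal geis client sketch, assuming the GEIS v1 C API
     * (geis_init / geis_subscribe / geis_event_dispatch / geis_finish).
     * Names and struct layouts are from memory, not copied from geis.h.
     */
    #include <stdio.h>
    #include <geis/geis.h>

    /* Receives gesture events; the client sees only gesture attributes,
     * never recognizer internals, so recognition can move between the
     * server side and the client side without changing this code. */
    static void
    gesture_callback(void *cookie, GeisGestureType type, GeisGestureId id,
                     GeisSize attr_count, GeisGestureAttr *attrs)
    {
      printf("gesture %d: %lu attrs\n", (int) id, (unsigned long) attr_count);
    }

    int
    main(void)
    {
      GeisInstance instance;

      /* Window identification is platform-specific; an X11 window id
       * (placeholder value here) is what the current stack uses. */
      GeisXcbWinInfo xcb_info = { NULL, NULL, 0x0 /* your window id */ };
      GeisWinInfo win_info = { GEIS_XCB_FULL_WINDOW, &xcb_info };

      const char *gestures[] = { GEIS_GESTURE_TYPE_DRAG2,
                                 GEIS_GESTURE_TYPE_PINCH2,
                                 NULL };

      /* Same handler for added/removed/start/update/finish, just to
       * keep the sketch short. */
      GeisGestureFuncs funcs = { gesture_callback, gesture_callback,
                                 gesture_callback, gesture_callback,
                                 gesture_callback };

      if (geis_init(&win_info, &instance) != GEIS_STATUS_SUCCESS)
        return 1;

      geis_subscribe(instance, GEIS_ALL_INPUT_DEVICES, gestures, &funcs, NULL);

      /* A real client would plug the geis file descriptor into its main
       * loop; a bare dispatch loop keeps the example minimal. */
      while (geis_event_dispatch(instance) == GEIS_STATUS_SUCCESS)
        ;

      geis_finish(instance);
      return 0;
    }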

> Speaking of Wayland - are there any plans for improving the input
> layer of Wayland? How will mouse, touch, and gesture input be handled
> there?

Yes indeed, considering that Wayland is a display layer and has nothing
to do with input.  We predict that with Wayland the window manager will
BE the display server and the input server.  Our current gesture
recognition model wins again.

-- 
Stephen M. Webb <stephen.webb@xxxxxxxxxxxxx>
Canonical Ltd.



