
multi-touch-dev team mailing list archive

Re: Geis/Grail architecture


Hi Denis,

> There have been a few concerns raised about the approach - for example
> Bradley Hughes made a good point that his main use case for gestures
> is application gestures (mac that is) - scrolling with two fingers on
> the touch pad, occasionally pinching, and only seldom making system
> gestures.


There will of course be different use cases. The Unity gesture model is based on
the ability to perform gestures spanning more than one application/window.

> With that in mind we've thought about the architecture and it seems to
> be fine in general, even though we might have some latency because of
> application gestures triggering on the window manager side and then
> being delivered to the client (through dbus or any other rpc
> mechanism).


This has been a concern of ours as well, but it seems the latency can be kept
quite small if point-to-point connections are used.
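To make the point-to-point idea concrete, here is a minimal sketch (hypothetical names and wire format, not the actual geis protocol) of the window manager pushing a recognized gesture event directly to a client over a dedicated socket, with no bus daemon in between:

```python
import socket
import struct

# Hypothetical wire format for one recognized gesture event:
# gesture id, gesture type, and two float parameters (e.g. dx, dy).
GESTURE_EVENT = struct.Struct("=IIff")
GESTURE_DRAG = 1

def send_gesture(conn, gesture_id, gesture_type, dx, dy):
    """Window-manager side: push one recognized gesture to the client."""
    conn.sendall(GESTURE_EVENT.pack(gesture_id, gesture_type, dx, dy))

def recv_gesture(conn):
    """Client side: block until one full gesture event has arrived."""
    data = b""
    while len(data) < GESTURE_EVENT.size:
        chunk = conn.recv(GESTURE_EVENT.size - len(data))
        if not chunk:
            raise ConnectionError("window manager closed the connection")
        data += chunk
    return GESTURE_EVENT.unpack(data)

# A socketpair stands in for the per-client connection the window
# manager would hand out; the event makes exactly one hop.
wm_end, client_end = socket.socketpair()
send_gesture(wm_end, 42, GESTURE_DRAG, 3.5, -1.0)
print(recv_gesture(client_end))  # (42, 1, 3.5, -1.0)
```

The cost per event is then one write and one read on a local socket, which is why the extra component need not add noticeable latency compared with routing through a general-purpose RPC bus.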

> Basically three processes will wake up for every single move of
> fingers - the X.org which sends the touch events to client, the window
> manager that receives touch events due to a passive grab on the device
> and the client that receives the gesture event from the window
> manager.


In addition, something will happen as a result of the gestures, which will most
likely spawn further work as well.


> Is there anything we can optimize here? Is it necessary to wake up a
> window manager after we realize that this is an application gesture?


It seems unavoidable to run the input stream through a recognizer in order to
determine whether there are gestures. The window manager holds the needed
context, so at least parts of it must be involved. Depending on further design
decisions, it might be possible to simplify the architecture, but the system
also needs to be functional from the start.

> Do we need the latency introduced by sending all application gestures
> through extra component - through the window manager?


If the question is whether we want to enable gestures spanning more than one
application/window, it seems the answer is yes. Given that, using a global
gesture recognizer is efficient. Putting it in the window manager then makes a
lot of sense.
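The reason the recognizer benefits from window-manager context can be sketched as follows (a toy illustration with invented names, not grail's actual logic): only the component that knows which window each touch lands in can tell a single-window application gesture from a gesture spanning several windows.

```python
# Hypothetical sketch: classify a set of touch contacts as a system
# gesture or an application gesture, using the kind of window-stacking
# context that only the window manager holds.

def classify_gesture(touches, window_at):
    """touches: list of (x, y) contact points.
    window_at: maps a point to the window under it (WM context)."""
    windows = {window_at(x, y) for (x, y) in touches}
    if len(windows) > 1:
        return "system"       # spans windows: handled globally
    return "application"      # single window: forward to that client

# Toy layout: the left half of the screen is window "a", the right "b".
def window_at(x, y):
    return "a" if x < 500 else "b"

print(classify_gesture([(100, 100), (200, 150)], window_at))  # application
print(classify_gesture([(100, 100), (800, 150)], window_at))  # system
```

A recognizer living in a single client could only ever see its own window's touches, so the "system" branch above is exactly the case that forces the global placement.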

To further elaborate on the possibilities, it might be fruitful to consider how
this differs from the current (Maverick) architecture. For instance, if there
were a gesture protocol through X, we could implement the sought features
without the wm. If there were no X, we could implement the sought features
solely in the wm. If there were no wm, we could implement support directly in
the application. In my view, there are many possibilities, but the one
presented by the team keeps a balance between where we are (X and no gesture
protocol), what we want (a wm with global gesture support) and where we are
going (Wayland, etc.).

Henrik


