Re: Geis/Grail architecture
Hi,
On 22 November 2010 18:21, Henrik Rydberg <rydberg@xxxxxxxxxxx> wrote:
> It seems unavoidable to run the stream through a recognizer in order to
> determine if there are gestures. The window manager holds the needed context, so
> parts of it are needed. Depending on many additional thoughts and directions, it
> might be possible to reduce the architecture further. However, the system also
> needs to be functional from the start.
>
>> Do we need the latency introduced by sending all application gestures
>> through extra component - through the window manager?
>
>
> If the question is whether we want to enable gestures spanning more than one
> application/window, it seems the answer is yes. Given that, using a global
> gesture recognizer is efficient. Putting it in the window manager then makes a
> lot of sense.
Sure, my question was more like: after the recognizer has decided which
gesture it is, and after it has been delivered to and accepted by the client,
do we still need to send the subsequent gesture update events through the whole
stack, including the window manager?
An alternative might be to let the client take over gesture recognition on its
side in that case.
I.e. the window manager would take care of contexts and priorities
("system gestures first"), and as soon as that decision has been made, the
client could take over and recognize gestures from the raw touch events by itself.
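To make that concrete, here is a minimal sketch in C of the handoff I have in
mind. All of the names (wm_claims_touch_sequence, client_recognize, and so on)
are made up for illustration and are not geis or grail APIs; the point is only
the control flow: the window manager arbitrates the sequence once, and after it
declines, further touch events feed the client's own recognizer directly.

/* Hypothetical sketch of the proposed handoff: the window manager
 * arbitrates first ("system gestures first"); once it declines, raw
 * touch events are routed to the client, which recognizes gestures
 * locally instead of receiving gesture updates through the WM.
 * None of these names are real geis/grail APIs. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { int id; double x, y; } touch_event;

/* WM-side: returns true if the touch sequence belongs to a system
 * gesture (e.g. a three-finger window drag) and must stay in the WM. */
static bool wm_claims_touch_sequence(const touch_event *ev)
{
    (void)ev;
    return false; /* assume the WM declines in this example */
}

/* Client-side recognizer, fed with raw touches after the handoff. */
static void client_recognize(const touch_event *ev)
{
    printf("client recognizer got touch %d at (%.1f, %.1f)\n",
           ev->id, ev->x, ev->y);
}

/* Dispatch loop: only the initial arbitration passes through the WM;
 * subsequent updates go straight to the client. */
static void dispatch(const touch_event *events, int n)
{
    bool handed_over = false;
    for (int i = 0; i < n; ++i) {
        if (!handed_over) {
            if (wm_claims_touch_sequence(&events[i]))
                continue;          /* WM keeps the whole sequence */
            handed_over = true;    /* decision made: client takes over */
        }
        client_recognize(&events[i]);
    }
}

int main(void)
{
    touch_event seq[] = { {1, 10.0, 10.0}, {1, 12.0, 11.0}, {1, 15.0, 13.0} };
    dispatch(seq, (int)(sizeof seq / sizeof seq[0]));
    return 0;
}

The interesting property is that only the initial arbitration pays the extra
round trip; every later update stays on the client side.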
Of course, the question is how important that is, i.e. whether there is a real
latency problem when sending lots of gesture events through one intermediate layer.
In utouch there is an extra library, geis, that hides the implementation
details, so it might be possible to change the way the recognition works later.
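As a rough illustration of why such an abstraction helps (this is not the
actual geis API, just a hypothetical interface), the application could program
against a tiny backend table, so where recognition actually happens can change
without touching application code:

/* Hypothetical illustration (not the real geis API): the application
 * talks to a small interface, so the backend behind it (gestures
 * recognized in the WM, or locally from raw touches) can be swapped. */
#include <stdio.h>

typedef void (*gesture_cb)(const char *gesture_name, void *context);

typedef struct {
    /* Start delivering gestures of the given class to the callback. */
    int (*subscribe)(const char *gesture_class, gesture_cb cb, void *context);
} gesture_backend;

/* One possible backend: gesture events arrive already recognized,
 * e.g. routed through the window manager. */
static int wm_routed_subscribe(const char *cls, gesture_cb cb, void *ctx)
{
    printf("subscribed to '%s' via the WM-routed backend\n", cls);
    cb(cls, ctx); /* pretend a gesture arrived */
    return 0;
}

static const gesture_backend wm_routed_backend = { wm_routed_subscribe };

/* Application code: unaware of where recognition actually happens. */
static void on_gesture(const char *name, void *ctx)
{
    (void)ctx;
    printf("application got gesture: %s\n", name);
}

int main(void)
{
    const gesture_backend *backend = &wm_routed_backend;
    backend->subscribe("Drag", on_gesture, NULL);
    return 0;
}

Swapping wm_routed_backend for a client-local backend would then be invisible
to the application.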
Anyhow, as I mentioned before, I am quite happy with the architecture we
currently have and am just exploring possibilities to make it even better.
> To further elaborate on possibilities, it might be fruitful to consider what the
> differences are to the current (maverick) architecture. For instance, if there
> was a gesture protocol through X, we could implement the sought features without
> the wm. If there was no X, we could implement the sought features solely in the
> wm. If there was no wm, we could implement support directly in the application.
> In my view, there are many possibilities, but the one presented by the team
> keeps a balance between where we are (X and no gesture protocol), what we want
> (wm with global gesture support) and where we are going (wayland etc).
I agree with that; the current approach is the best one in general,
considering what we have.
Speaking of Wayland, are there any plans to improve its input layer? How will
mouse, touch, and gesture input be handled there?
--
Best regards,
Denis.