Message #00574
Re: Geis/Grail architecture
>
> sure, my question was more like: after the recognizer has decided which
> gesture it is, and after that gesture has been delivered to and accepted
> by the client, do we still need to send the subsequent gesture update
> events through the whole stack, including the window manager?
There seems to be an assumption here that gestures are decided at a single point
in time. Rather, it seems likely that the gesture set is determined at every
frame, which leaves no real room for optimizations like the one you describe.
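To make that concrete, here is a rough sketch (hypothetical types and names, not the actual grail API) of what a per-frame recognizer loop looks like: the candidate gesture set is re-evaluated on every frame, so there is no single point after which later updates could safely bypass the rest of the stack.

#include <stddef.h>

enum gesture_type { GESTURE_DRAG, GESTURE_PINCH, GESTURE_ROTATE };

struct touch_frame {
    size_t num_touches;
    /* per-touch positions, timestamps, ... */
};

/* Return the subset of 'previous' candidates still consistent with 'frame'. */
static unsigned evaluate_candidates(const struct touch_frame *frame,
                                    unsigned previous)
{
    unsigned candidates = previous;

    /* A real recognizer would also look at movement thresholds, timing and
     * touch geometry; this sketch only prunes on touch count. */
    if (frame->num_touches < 2)
        candidates &= ~((1u << GESTURE_PINCH) | (1u << GESTURE_ROTATE));

    return candidates;
}

void process_frames(const struct touch_frame *frames, size_t n)
{
    unsigned candidates = (1u << GESTURE_DRAG) |
                          (1u << GESTURE_PINCH) |
                          (1u << GESTURE_ROTATE);

    for (size_t i = 0; i < n; ++i) {
        /* The candidate set is recomputed on every frame... */
        candidates = evaluate_candidates(&frames[i], candidates);

        /* ...and each frame's (possibly changed) set is delivered through
         * the stack (window manager, then client), so either of them can
         * still accept or reject the gesture at this point. */
    }
}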
> Anyhow, as I mentioned before, I am quite happy with the architecture
> that we currently have and am just exploring possibilities to make it
> even better.
Understood.
>> To further elaborate on possibilities, it might be fruitful to consider how
>> they differ from the current (maverick) architecture. For instance, if there
>> were a gesture protocol through X, we could implement the sought features
>> without the wm. If there were no X, we could implement the sought features
>> solely in the wm. If there were no wm, we could implement support directly
>> in the application. In my view, there are many possibilities, but the one
>> presented by the team keeps a balance between where we are (X and no gesture
>> protocol), what we want (a wm with global gesture support) and where we are
>> going (Wayland etc.).
>
> I agree with that; the current approach is the best one in general,
> considering what we have.
Sounds good. Also, we will know a lot more once we have this new architecture up
and running as a prototype.
> Speaking of Wayland: are there any plans to improve its input layer?
> How will mouse, touch and gesture input be handled there?
Big question, no answer. :-)
Thanks,
Henrik