
multi-touch-dev team mailing list archive

Re: Peter Hutterer's thoughts on MT in X

 

On 10/12/2010 06:53 PM, Chase Douglas wrote:

> On Tue, 2010-10-12 at 16:31 +0200, Henrik Rydberg wrote:
>> On 10/12/2010 03:49 PM, Chase Douglas wrote:
>>>> For this particular example, since the regions are listening to different
>>>> gestures, there is actually knowledge available on the server side as well, so
>>>> the combinatorial explosion does not seem to enter here.
>>>
>>> This is the real issue: the server does not actually know about these
>>> regions. In old school toolkits, each widget was a separate X window.
>>> Modern toolkits don't do this anymore; instead, they have one X window
>>> for the overall "window" and they draw all the widgets themselves. The
>>> server doesn't have any information about where any widgets are in
>>> windows anymore.
>>
>>
>> Alright, I see your point. So given that we want to rebuild the role of the
>> server on the client side, for good reason, it seems fair to ask why we want
>> window-based touch propagation as in the proposed XI 2.1.
> 
> There must still be a way to propagate input among the various "windows".
> Different windows from different applications use different toolkits and
> event propagation models. The common interface between them all is XInput
> (or the core X protocol). So the most straightforward path seems to be to
> extend XI for MT support.


The root of the problem is that we have one propagation model on the window
level and one propagation model on the application level, and now, via global
gestures, they interact in a less than satisfactory way.
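
For concreteness, the window-level half of this is plain XI event selection.
A minimal sketch against the XI 2.2 touch API, where the mechanism ended up;
the XI 2.1 proposal under discussion differs in details:

/* Select touch events on a window via XInput 2.2. This reflects the
 * API as it later landed, not the XI 2.1 draft discussed here. */
#include <string.h>
#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>

static void select_touch_events(Display *dpy, Window win)
{
    unsigned char mask[XIMaskLen(XI_LASTEVENT)];
    XIEventMask evmask;

    memset(mask, 0, sizeof(mask));
    XISetMask(mask, XI_TouchBegin);
    XISetMask(mask, XI_TouchUpdate);
    XISetMask(mask, XI_TouchEnd);

    evmask.deviceid = XIAllMasterDevices;
    evmask.mask_len = sizeof(mask);
    evmask.mask = mask;

    XISelectEvents(dpy, win, &evmask, 1);
    XFlush(dpy);
}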

>>> Going along with this, event propagation for widgets in a "window" is
>>> performed in the toolkit. When using modern toolkits, X event
>>> propagation is only used to propagate events between application
>>> "windows" on screen.
>>>
>>> (Here I use quotes around "window" to refer to what a normal user would
>>> call an application window, in contrast to the X definition of a
>>> window.)
>>>
>>> This leads us to the gesture recognition mechanism Peter and I have been
>>> discussing. Since we can't perform gesture recognition in the server, we
>>> may have to perform recognition twice: first at the system-wide level,
>>> then at the application level if no system-wide gestures were recognized.
>>> One method for doing this is XI 2.1 passive grabbing.
>>
>>
>> It seems to me another solution would be to route the events directly to the
>> client side and use a more modern event propagation model.
> 
> This sounds like another client-server model, but with a different
> server now. I'm not sure how this solves the event propagation issues
> with client-side toolkit rendering, and I only see it duplicating what
> could be done more naturally in the window server input layer.


No one disputes that client-side grail solves client-side event propagation.
The question is whether supporting the linkage between the two propagation
models is a good idea.

I appreciate that the solution you and Peter have arrived at navigates
delicately through a somewhat jungleish environment. However, duplicating the
window event propagation on the client side may very well be a simpler
solution. Implementing get_clients() would not be much more complicated, would
it? Stephen's ideas are starting to make sense to me. :-)
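
To make that concrete, here is a purely hypothetical sketch of what a
get_clients()-style query could look like; get_clients() is just the name
floated in this thread, and the region type and hit test below are invented
for illustration:

/* Hypothetical sketch only: get_clients() is merely the name floated
 * in this thread; the region type and hit test are invented here. */
#include <stddef.h>

struct region {
    int x, y, width, height;                   /* area listening for gestures */
    void (*deliver)(int touch_x, int touch_y); /* client callback */
};

/* Fill 'hits' with every known region containing the touch point, in
 * list order. If the list is kept innermost-first, the caller can walk
 * 'hits' and propagate the touch until some region accepts it. */
static size_t get_clients(const struct region *regions, size_t n,
                          int tx, int ty,
                          const struct region **hits, size_t max_hits)
{
    size_t count = 0;
    for (size_t i = 0; i < n && count < max_hits; i++) {
        const struct region *r = &regions[i];
        if (tx >= r->x && tx < r->x + r->width &&
            ty >= r->y && ty < r->y + r->height)
            hits[count++] = r;
    }
    return count;
}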

The problem I have with passive grabs is that they use tentative events as a
solution to the round-trip problem involved in accepting MT events. In my
mind, tentative events should be used to show something different from normal
events, something that makes the user understand their volatile nature. In
this respect, inhibiting pointer motion during a two-finger scroll is quite
different from user feedback during a rotation gesture.
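
For reference, the round trip I mean looks roughly like this in the touch
grab model, sketched here against the XI 2.2 touch API; the 2.1 draft under
discussion differs in details:

/* A grabbing client sees touches tentatively and must explicitly
 * accept or reject each sequence once its recognizer has decided. */
#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>

static void resolve_touch(Display *dpy, const XIDeviceEvent *ev,
                          int recognized)
{
    XIAllowTouchEvents(dpy, ev->deviceid,
                       ev->detail,  /* touch sequence id */
                       ev->event,   /* grab window */
                       recognized ? XIAcceptTouch : XIRejectTouch);
    /* On rejection the server replays the sequence to the next client
     * in line -- this replay is the round trip that tentative events
     * are being used to paper over. */
}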

I hope this does not sound too discouraging - I am simply trying to turn some
stones here. If you already turned them, I am sure you will tell me. :-)

Cheers,
Henrik



