
multi-touch-dev team mailing list archive

Re: Peter Hutterer's thoughts on MT in X

 

On Tue, 2010-10-05 at 11:13 -0700, Ping Cheng wrote:
> Hi Chase,
> 
> Thank you for the heads up. I will reply to your other thread later
> (lack of time to read and think right now). I need to comment on a
> paragraph of Peter's blog to avoid "popularity"; that is, to make sure
> that we are all on the same page.
> 
> Peter said: "With the new approach the recognition happens purely
> client-side and is done by a library or a daemon. This allows for
> multiple different gesture recognisers be active at the same time, a
> scenario that is quite likely to happen (I do envision GTK, Qt,
> Mozilla, etc. all wanting their own system). Whether that's a good
> thing for the UI is another matter, consistency for gestures is
> important and especially Ping Cheng was not happy at the prospect of
> having multiple, possibly inconsistent, systems. We need to find some
> common ground here between desktop environments and toolkits."
> 
> First of all, I am very happy with all the discussions we had so far.
> It makes me feel that we are working on a solution/solutions and we
> are working together.
> 
> However, there are three cases in Peter's message above (well, maybe
> more; let's see how many I end up with :):
> 
> 1.  supporting multiple systems in the design;
> 2.  tracking multiple systems/clients simultaneously;
> 3.  running multiple systems simultaneously for the same set of MT events.
> 
> From my reply to Chase's last email you can see that I agree with the
> first case wholeheartedly. We (well, the X server) should give client
> and toolkit developers the freedom to convert MT events in their own
> ways. The X server is the focal point: it provides generic,
> UI-digestible MT events and acts as a tracking agent for them
> (grab/ungrab, etc.). So there would be no inconsistency from the X
> server's perspective in this case.
> 
> The second case, master/passive grabs, is fine too as long as we can
> manage them properly, assuming there is only one master at a time.
> 
> It is the third one that I do not think I have understood properly.
> How can we run multiple systems simultaneously on one system? Which
> client is going to do what? There is only one current application at a
> time, isn't there? Even for applications that support multiple users
> (i.e., tons of fingers :) simultaneously, it is still one application
> that drives all the events. So, the client either grabs all the events
> or none. It can use all or some of the events. But we don't let
> another client share the "spare" events simultaneously.

Back at XDS during one of the lunches Peter, Chris Halse-Rogers,
Kristian Høgsberg, and I worked through an idea that Keith Packard
proposed where XI 2.1 MT grabs would behave differently from traditional
XI and core grabs. Instead of sending events exclusively to the grabbing
client until it decides to replay them to other clients, we can send all
events to all clients, but the non-grabbing clients receive each event
with a flag set that tells them, "This event is grabbed by another
client, don't do anything permanent with it." The non-grabbing clients
can do things in the background like gesture recognition processing, or
perhaps even show an indication on screen (e.g., highlighting a cell in
a list) of what might transpire if the grabbing client decides to replay
the events. If the events are replayed, the client can then commit the
action (e.g., selecting the cell in the list). If the events are not
replayed, the clients will receive a notification from the server and
will undo any hinting and stop any further processing of the events.
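The observe-then-commit flow above can be sketched roughly as follows. This is only a toy model of the idea as described, not the actual XI 2.1 protocol; the `observed` flag and the ownership/rejection callbacks are illustrative stand-ins for whatever wire format the server would use.

```python
# Toy model of the "flagged grab" idea: non-grabbing clients see every
# touch event with an 'observed' flag set, so they may only do tentative,
# reversible work (e.g. highlight a list cell) while the grab owner
# decides whether to keep or replay the sequence. All names here are
# illustrative assumptions, not real XI 2.1 protocol.

from dataclasses import dataclass

@dataclass
class TouchEvent:
    touch_id: int
    x: float
    y: float
    observed: bool = False  # True => "grabbed elsewhere, nothing permanent"

class ObservingClient:
    """A non-grabbing client that hints at what *would* happen."""
    def __init__(self):
        self.highlighted = None   # tentative UI state (reversible)
        self.selected = None      # committed UI state

    def handle(self, ev: TouchEvent):
        if ev.observed:
            # Reversible work only: show a hint, run gesture
            # recognition in the background, etc.
            self.highlighted = (ev.x, ev.y)
        else:
            self.selected = (ev.x, ev.y)

    def on_ownership(self):
        # Grab owner replayed the events: we now own the touch, commit.
        self.selected = self.highlighted

    def on_rejected(self):
        # Grab owner kept the events: undo the hint, stop processing.
        self.highlighted = None

def deliver(sequence, observer, owner_replays):
    """Simulate the server fanning events out to a non-grabbing client."""
    for ev in sequence:
        observer.handle(TouchEvent(ev.touch_id, ev.x, ev.y, observed=True))
    if owner_replays:
        observer.on_ownership()
    else:
        observer.on_rejected()
```

The point of the design shows up in the two endings: the observer did its speculative work (and could have been running a recognizer) before ownership was decided, which is where the latency win comes from.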

The reason this is useful is to lower latency when both system-wide and
application-specific gesture recognition need to be performed. It
requires trusting client applications not to handle the events as
though they owned them, but X trusts client applications with plenty of
similar things today without issue.

> My suggestion was "Keep It Simple, Stupid". The events will be handled
> by one client at a time until it releases them (i.e., all touches have
> ended). When a single finger touches the surface, it is posted as a
> cursor movement (it doesn't matter whether the device supports only
> one finger or more). As soon as the X server detects more than one
> touch, it stops sending cursor movement and waits for a certain time.
> While it is waiting, some kind of "we are here" indication of
> what/where is being touched would be nice, as Peter pointed out
> somewhere in his blog, I think (I didn't read through the whole post.
> It is too long :).
> 
> During this time frame, clients and the Unity can react to the MT
> events. Whether the client or the Unity should get the events first
> can be discussed since we have a master/passive grab in the design. I
> think the Unity should be the default agent to process the events if
> no one cares.
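A minimal sketch of this one-client-at-a-time timing scheme might look like the following. The state names, the buffering behavior, and the arbitration window are my own illustrative assumptions; the proposal above does not specify a timeout value or an exact claiming policy.

```python
# Toy state machine for the "KISS" proposal: one touch behaves as the
# pointer; a second touch suspends cursor movement and buffers events
# for a short window so a gesture translator (the application first,
# then a system-wide agent such as Unity) can claim them. The timeout
# and the policy are illustrative assumptions, not a specified design.

GESTURE_WAIT_MS = 150  # assumed window; a real server would arm a timer

class TouchArbiter:
    def __init__(self):
        self.active_touches = set()
        self.state = "idle"       # idle -> pointer -> waiting -> idle
        self.pointer_moves = []   # events forwarded as cursor movement
        self.buffered = []        # events held back during the wait

    def touch_begin(self, touch_id):
        self.active_touches.add(touch_id)
        if len(self.active_touches) == 1:
            # A single finger is posted as ordinary cursor movement.
            self.state = "pointer"
        else:
            # More than one touch: stop cursor movement and hold the
            # events until a client or the system-wide translator claims.
            self.state = "waiting"

    def touch_update(self, touch_id, x, y):
        if self.state == "pointer":
            self.pointer_moves.append((x, y))
        elif self.state == "waiting":
            self.buffered.append((touch_id, x, y))

    def touch_end(self, touch_id):
        self.active_touches.discard(touch_id)
        if not self.active_touches:
            # All touches are out: the sequence is released.
            self.state = "idle"
```

The "release when all touches are out" rule in `touch_end` is what keeps the model simple: there is never a question of two clients sharing "spare" events mid-sequence.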
> 
> The Unity (the system-wide MT event translator) is platform specific
> too. It can be the Ubuntu way. Other OS vendors can have their own
> design and implementation, or not support MT at all. So, we get two
> layers of diversity here: at the toolkit stage and during the
> system-wide event translation.
> 
> Am I making myself clearer or confusing you guys even more?

It all seems perfectly clear to me :). I hope my comments above help
address your concerns.

Thanks,

-- Chase



