multi-touch-dev team mailing list archive

Re: Peter Hutterer's thoughts on MT in X

On 9/10/10 20:31, Henrik Rydberg wrote:
> On 10/09/2010 08:04 AM, Peter Hutterer wrote:
>> [...]

>>> Thanks for this concise explanation. :-) It would be great to get a
>>> similar explanation of the touch interface, and of what one gains by
>>> moving the gesture recognition to the client side. ;-)

>> In terms of the touch interface - will do ASAP, but I need to get through
>> my review queue first.

>> For moving the gestures to the client side: gestures are a very
>> context-specific thing, and in the article that started this thread, the
>> paragraphs under "The lack of context" sum it up best: as an outsider you
>> cannot know what is a gesture. So you will end up interpreting some things
>> as gestures that shouldn't be, and failing to interpret others that
>> should be.


> I agree that the meaning of a gesture is context-dependent, just like a
> typed word means different things in different contexts. Gesture
> primitives, however, are much more like the keys themselves.

What is a gesture primitive, though? This is one of the points I have
trouble with: at what point does the movement of a finger become a swipe
(which I think is what you'd classify as a primitive, right)? And isn't that
context-dependent too? Or do I misunderstand something here?
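
For illustration, here is a minimal sketch in Python of a "swipe" detector.
The thresholds and names are invented, not any real X or gesture API; the
point is that a "primitive" only exists relative to arbitrary cutoffs, which
are themselves context choices.

  # Hypothetical swipe-primitive detector; thresholds are invented.
  MIN_DISTANCE = 50.0  # pixels - why 50? already a context-dependent choice
  MAX_DURATION = 0.3   # seconds - likewise

  def is_swipe(points):
      """points: list of (t, x, y) samples for one finger."""
      if len(points) < 2:
          return False
      (t0, x0, y0), (t1, x1, y1) = points[0], points[-1]
      distance = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
      return distance >= MIN_DISTANCE and (t1 - t0) <= MAX_DURATION

  # The same motion is or isn't a "swipe" depending on the numbers above:
  print(is_swipe([(0.00, 0, 0), (0.25, 60, 5)]))  # True
  print(is_swipe([(0.00, 0, 0), (0.50, 60, 5)]))  # False: too slow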


>> For example, once an object is selected, a number of gestures may become
>> active that aren't otherwise. So on selecting an object, you need to teach
>> the recogniser about these (or at least enable them), which puts you in a
>> race condition: by the time the request has been forwarded to the
>> recogniser, the gesture may already be over.


> To the gesture recognizer, all gesture primitives are always enabled; that
> is the context-independent part of the setup. The gesture instantiator is
> what would need to know about the changed gesture selection.
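
To make the race concrete, here is a toy Python timeline. All timings, names
and the message format are invented for illustration; this is not any real
protocol.

  # Hypothetical enable-on-select race: the enable request needs a round
  # trip, but the user starts the gesture immediately after selecting.
  events = [
      (0.00, "client",     "object selected; 'rotate' is now meaningful"),
      (0.00, "client",     "sends enable('rotate') to the recogniser"),
      (0.01, "touch",      "finger down - the user starts rotating at once"),
      (0.05, "touch",      "finger motion"),
      (0.08, "recogniser", "enable('rotate') finally arrives"),
  ]
  active = set()
  for t, who, what in events:
      if who == "recogniser" and "enable" in what:
          active.add("rotate")
      print(f"t={t:.2f}s  active={sorted(active)}  [{who}] {what}")
  # Everything before t=0.08 was matched against the stale gesture set, so
  # the rotation that began at t=0.01 is misclassified or lost.

In Henrik's terms, the primitives can stay enabled throughout; it is the
instantiation ("rotate" now means something) that arrives late.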

> I can see the benefit of moving gesture instantiation closer to the client
> apps. At the same time, I have yet to understand how it does not ignore
> some features of the current setup. Examples would be event propagation
> based on gesture type, such as the recipient of global gestures or rotation
> with one finger outside the window, and the multi-user problem, such as
> decomposing input into a set of hands. I know you discussed a lot of things
> at XDS, so maybe I am missing something.

This is one of the main points we discussed, and Stephane Chatty got us
(well, me anyway) on the right track here. It's what I described above: each
gesture has to be carefully selected to be a subset of the previous gestures
to avoid collisions and delays.

A global gesture is not a problem; you can implement this with grabs. A
global three-finger gesture (to take the example of Unity) is a problem,
because you cannot forward one- and two-finger events until you're sure that
the third finger isn't coming. This is essentially what I tried to explain
with the example of the two gesture sets. I'm not sure if that got across
though; if not, I can try to explain it in other words.
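
A minimal sketch of that dilemma, assuming an invented hold window (no real
recogniser is being quoted here):

  # Hypothetical routing rule for a global three-finger gesture. Every one-
  # and two-finger interaction pays the hold window's latency.
  HOLD = 0.1  # seconds to wait for a possible third finger

  def route(touches, now):
      """touches: list of (t_down, finger_id) currently on the surface."""
      if len(touches) >= 3:
          return "three fingers: consume as the global gesture"
      oldest = min(t for t, _ in touches)
      if now - oldest < HOLD:
          return "hold: a third finger may still arrive"
      return "window expired: forward the buffered events to the client"

  print(route([(0.00, 1)], now=0.05))                        # hold
  print(route([(0.00, 1), (0.02, 2)], now=0.15))             # forward
  print(route([(0.00, 1), (0.02, 2), (0.03, 3)], now=0.04))  # global gesture

This hold is the kind of delay the subset rule above is meant to avoid.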

>> Plus, there is the roundtrip issue of sending events to the recogniser,
>> which sends some of them back before you can send them to the clients.


> Right, provided the recognition is placed on the client side.

Wait, what? I'm not sure I read this comment correctly.

>> Technical reasons include the difficulty of updating the protocol if you
>> notice something was missing - much harder than updating a library API.


> I can agree that protocols buried deeper in a system are harder to change,
> but they also provide a greater sense of consensus. Not everything is bound
> to change. ;-)

I don't think we have enough experience at this point to say how much
multi-touch interfaces will change in the near future. We haven't really
seen any yet, so I'm cautious.

Cheers,
  Peter


