
multi-touch-dev team mailing list archive

Re: Peter Hutterer's thoughts on MT in X

 

On 10/09/2010 01:07 PM, Peter Hutterer wrote:
[...]

>> I agree that the meaning of a gesture is context dependent, just like a typed
>> word means different things in different contexts. Gesture primitives, however,
>> are much more like the keys themselves.
> 
> What is a gesture primitive, though? This is one of the points I have trouble
> with: at what point does the movement of a finger become a swipe (which I think
> is what you'd classify as a primitive, right?). And isn't that context-dependent
> too? Or do I misunderstand something here?


What I mean by context-independent gesture primitives is signals one can deduce
by looking only at what happens on the touch surface. Two fingers closing in on
each other on the surface are doing just that, regardless of what it means in the
UI context. The two people operating the surface will not suddenly become one
person just because the UI wants to see all touches available. Etcetera :-)
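
To make this concrete, here is a minimal sketch, in C, of deducing a pinch
primitive from nothing but the touch positions over time. All names and the
threshold are my own invention, purely for illustration:

    #include <math.h>
    #include <stdbool.h>

    struct touch { double x, y; };

    static double touch_distance(const struct touch *a, const struct touch *b)
    {
        return hypot(a->x - b->x, a->y - b->y);
    }

    /* The context-independent signal of a pinch: the two touches moved
     * towards each other between two samples, by more than a threshold.
     * Nothing here depends on windows, users, or UI state. */
    static bool pinch_primitive(const struct touch prev[2],
                                const struct touch curr[2],
                                double threshold)
    {
        double d0 = touch_distance(&prev[0], &prev[1]);
        double d1 = touch_distance(&curr[0], &curr[1]);
        return d0 - d1 > threshold;
    }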

It can well be argued where to put things, but it is true that this portion of
the interface, the extraction of features, or gesture primitives, is a
computational task that one would like to place centrally, for everyone's benefit.

>>> For example, once an object is selected a number of gestures may become active
>>> that aren't otherwise. So on selecting an object, you need to teach the
>>> recogniser about these (or at least enable them), which puts you in a race
>>> condition because by the time the request has been forwarded to the recogniser,
>>> the gesture may already be over again.
>>
>>
>> To the gesture recognizer, all gesture primitives are always enabled; that is
>> the context-independent part of the setup. The gesture instantiator is what
>> would need to know about the changed gesture selection.
>>
>> I can see the benefit of moving gesture instantiation closer to the client apps.
>> At the same time, I have yet to understand how it does not ignore some features
>> of the current setup. Examples would be event propagation based on gesture type,
>> such as the recipient of global gestures or rotation with one finger outside the
>> window, and the multi-user problem, such as decomposing input into a set of
>> hands. I know you discussed a lot of things at XDS, so maybe I am missing
>> something.
> 
> This is one of the main points we discussed, and Stephane Chatty got us (well,
> me anyway) on the right track here. It's what I described: each gesture has to
> be carefully selected to be a subset of the previous gestures, to avoid
> collisions and delays.


I agree with this, but that is a gesture composition problem, which is on a
different level.
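
As a sketch of that composition constraint (my own illustration, not any
existing API): if a context may only ever enable a subset of the gestures that
were already active, no new ambiguity, and hence no new recognition delay, can
be introduced mid-stream. With gesture sets as bitmasks:

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint32_t gesture_set;   /* one bit per gesture type */

    /* Allow a context switch only if the requested gesture set is a
     * subset of the currently active one. */
    static bool valid_transition(gesture_set active, gesture_set requested)
    {
        return (requested & ~active) == 0;
    }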

> A global gesture is not a problem, you can implement this with grabs.


A global gesture can easily be implemented using a global engine, too ;-)

> A global three-finger gesture (to take the example of Unity) is a problem,
> because you cannot forward one- and two-finger events until you're sure that
> the third finger isn't coming.


But you can! You may be doing scrolling and the like first, and then just add
one finger. Unless the gesture explicitly says you have to start from zero,
there is really nothing wrong with that, is there? And if the gesture does
state that you have to start from zero, it implicitly means you cannot have a
drag in between, which implicitly means you have to put all fingers down
simultaneously, which means within a short timeout, which means there is no
problem.
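
In code, the start-from-zero case reduces to a single timeout check at
instantiation time. A sketch, with invented names and an arbitrary 40 ms window:

    #include <stdbool.h>
    #include <stdint.h>

    #define SIMULTANEITY_TIMEOUT_MS 40   /* illustrative value only */

    struct touch_down {
        uint64_t time_ms;   /* timestamp of the touch-down event */
    };

    /* A three-finger gesture "from zero" instantiates only if the third
     * finger lands within the timeout of the first; otherwise the events
     * can be forwarded immediately as an ordinary one- or two-finger
     * stream, so nothing is held back indefinitely. */
    static bool starts_from_zero(const struct touch_down *first,
                                 const struct touch_down *third)
    {
        return third->time_ms - first->time_ms <= SIMULTANEITY_TIMEOUT_MS;
    }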

> This is essentially what I've tried to explain with the example of the two
> gesture sets. Not sure if that got across though, if not I can try to explain it
> in other words again.


Please do. The more examples and iterations, the better the end result. :-)

>>> Plus, the roundtrip issue of sending events to the recogniser, which sends
>>> some of them back before you can send them to the clients.
>>
>>
>> Right, provided the recognition is placed on the client side.
> 
> Wait, what? I'm not sure I read this comment correctly.


Hehe. The point being that no particular problem is being solved by moving
context-independent gesture recognition to the client side.

>>> Technical reasons include the difficulty of updating the protocol if you notice
>>> something was missing - much harder than updating a library API.
>>
>>
>> I can agree that protocols buried deeper into a system are harder to change, but
>> they also provide a greater sense of consensus. Not everything is bound to
>> change. ;-)
> 
> I don't think we have enough experience at this point to say how much
> multi-touch interfaces will change in the near future. We haven't really seen
> any yet, so I'm cautious.


I thought the comment was about gestures and propagation, for which several
frameworks are apparently being tested as we speak.

Cheers,
Henrik


