
multi-touch-dev team mailing list archive

Re: A new release of xi 2.1 + utouch stack in xorg-unstable ppa

 

On 02/07/2011 09:23 AM, Mark Shuttleworth wrote:
> On 07/02/11 11:20, Henrik Rydberg wrote:
>> There is an important difference in the handling of prolonged gestures
>> in this environment, which seems to imply that not all basic
>> application gesture needs can be met with this approach.
>>
>> With the maverick X drivers (synaptics and multitouch), it is natural
>> to position the cursor with one finger, and then add another finger to
>> scroll a bit, lift one finger to position the cursor again, and so
>> on. The same goes for zooming, where one first positions the cursor,
>> then zooms at that point.
>>
>> With the new natty XI2.1 packages, the gesture engine releases control
>> over a touch based on a timeout, passing it on to the application in
>> such a way that the modus operandi described above is no longer
>> possible. In practice, one might need to view the server-side
>> recognition as dedicated to system gestures, and instead reimplement
>> the current application behavior on the client side.
> 
> This is a function of the fact that you can do two very different things
> with your finger: move the pointer, or interact with content/gesture. I
> think we could special case that in pointer-based environments like the
> desktop, so that single-finger pointer moving actions come to an end
> when additional fingers are brought to bear, at which point a gesture is
> begun. Would that address the issue?
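
Just to recap the mechanism in question: the handoff Henrik describes
works roughly like the sketch below. The names and the timeout value
are made up for illustration; this is not the actual server code.

#include <stdbool.h>
#include <stdio.h>

#define RECOGNITION_TIMEOUT_MS 200  /* assumed value, for illustration */

enum touch_owner { OWNER_GESTURE_ENGINE, OWNER_APPLICATION };

struct touch {
    int id;
    long start_ms;           /* time the touch began */
    enum touch_owner owner;
};

/* Called for every event on the touch; now_ms is the current time. */
static void route_touch(struct touch *t, long now_ms, bool gesture_matched)
{
    if (t->owner != OWNER_GESTURE_ENGINE)
        return; /* already handed off; the application sees the events */

    if (gesture_matched)
        return; /* the engine keeps the touch for the system gesture */

    if (now_ms - t->start_ms >= RECOGNITION_TIMEOUT_MS) {
        /* Timeout expired with no gesture match: ownership passes to
         * the application and the engine cannot reclaim the touch.
         * This is why "position, then add a finger to scroll" stops
         * working once the timeout has fired. */
        t->owner = OWNER_APPLICATION;
        printf("touch %d handed off to the application\n", t->id);
    }
}

int main(void)
{
    struct touch t = { 1, 0, OWNER_GESTURE_ENGINE };
    route_touch(&t, 100, false); /* within the timeout: engine keeps it */
    route_touch(&t, 250, false); /* timeout passed: handed off */
    return 0;
}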

The devil is in the details here. The worry is that we could end up in
inconsistent states, or complicate the implementation to the point
that it becomes unworkable or rules out other use cases.

I've been trying to come up with a scenario that demonstrates how this
approach would be cumbersome and difficult. The problem is that every
scenario I come up with can be "special cased" somehow. But I believe
that each special case will end up with its own issues.

As an example, let's say you special-case two-finger drag as scroll,
as you describe. You have an application that wants multitouch events,
and it also wants to be able to scroll. The desired use case is that
an initiated two-finger drag (going from zero to two fingers dragging
within a small time interval) scrolls an object, while other touch
motions, such as going from one finger to two to three, are
interpreted differently. Perhaps it's a multitouch drawing
application. If you assume that any two-finger drag is always a
scroll, then it may become impossible to interact with this
application with two fingers. However, the application would work as
desired with the utouch stack as it currently exists in the PPA. To be
clear, although this may seem a contrived example, I believe we must
cater to applications that want full MT capabilities with any specific
number of simultaneous touches, while still being able to handle
obvious and deliberate gesture input.
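
To make the conflict concrete, here is a sketch of the two policies
side by side. The helper names and the length of the "small time
interval" are invented for illustration:

#include <stdbool.h>
#include <stdio.h>

#define INITIATION_WINDOW_MS 100 /* the assumed "small time interval" */

/* Policy A: any two-finger drag is a scroll. The drawing application
 * described above can then never receive two simultaneous touches. */
static bool is_scroll_a(int fingers)
{
    return fingers == 2;
}

/* Policy B: only a drag that goes from 0 to 2 fingers within the
 * initiation window is a scroll. A second finger added later leaves
 * both touches with the application. */
static bool is_scroll_b(int fingers, long first_down_ms, long second_down_ms)
{
    return fingers == 2 &&
           second_down_ms - first_down_ms <= INITIATION_WINDOW_MS;
}

int main(void)
{
    /* Deliberate scroll: both fingers land almost together. */
    printf("scroll? A=%d B=%d\n", is_scroll_a(2), is_scroll_b(2, 0, 50));

    /* Drawing: the second finger lands half a second after the first.
     * Policy A steals the drag as a scroll; policy B leaves the
     * touches with the drawing application. */
    printf("scroll? A=%d B=%d\n", is_scroll_a(2), is_scroll_b(2, 0, 500));

    return 0;
}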

Essentially, there will always be holes one can poke in any MT and
gesture design. I feel we should aim to be as simple as possible,
while being mindful of any glaringly bad side effects. I don't see
losing the position-then-scroll-then-position workflow as a side
effect bad enough to be worth special-casing at the cost of making
other things more difficult.

Thanks,

-- Chase


