multi-touch-dev team mailing list archive
Message #00712
Re: A new release of xi 2.1 + utouch stack in xorg-unstable ppa
On 02/07/2011 11:00 AM, Henrik Rydberg wrote:
>>> This is a function of the fact that you can do two very different things
>>> with your finger: move the pointer, or interact with content/gesture. I
>>> think we could special case that in pointer-based environments like the
>>> desktop, so that single-finger pointer moving actions come to an end
>>> when additional fingers are brought to bear, at which point a gesture is
>>> begun. Would that address the issue?
>>
>> The devil is in the details here. The worry is that we can find
>> ourselves in inconsistent states and/or complicate the implementation to
>> the point that it's unworkable or prohibits other use cases.
>
> I don't think we need to over-dramatize this particular case. Rather,
> acknowledging that a hovering pen, a finger on a trackpad, and a
> hovering finger on a screen all constitute a real situation not
> properly accounted for, should make it possible to resolve this issue
> cleanly.
I agree that XI 2.1 doesn't handle this cleanly. The major reason for
this is the interaction between touch and cursors, and gestures and X.
We are trying to fit touch into a system that is poorly designed for it,
and we are trying to fit gestures either before the X input stack or
after it. No implementation is going to be perfect given these constraints.
We can strive for a better system in Wayland, but some compromises must
be made for X.
>> I've been trying to come up with a scenario that demonstrates how this
>> approach would be cumbersome and difficult. The problem is that every
>> scenario I come up with can be "special cased" somehow. But I believe
>> that each special case will end up with its own issues.
>>
>> As an example, let's say you special case two finger drag as scroll as
>> you describe. You have an application that wants multitouch events. It
>> also wants to be able to scroll. The use case desired is that an
>> initiated two finger drag (you must go from 0 to 2 fingers dragging
>> within a small time interval) scrolls an object.
>
> The desired use case is that whenever there are two fingers on the
> pad, scrolling is performed. No timeouts or anything like that.
The point is that this is only one use case. We have to cater for all
(or most) use cases simultaneously.
I also disagree that this use case is universal. I believe there are
times when you have a scrollable object, but you want to be able to
perform a multitouch operation or other gesture on it. Say you've got a
canvas in GIMP and you want to be able to rotate it at 90-degree
intervals as well as scroll it. You don't want to allow rotation and
scrolling simultaneously, so you have to lock on to one or the other.
If you always assume scrolling, you'll never be able to rotate.
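To make the locking idea concrete, here is a minimal sketch (Python, with invented names and thresholds; this is not utouch or XI 2.1 code) of a two-finger recognizer that commits to whichever of scroll or rotate crosses its threshold first, and then ignores the other until the touches end:

```python
# Hypothetical sketch: once either translation or rotation crosses its
# threshold, the recognizer locks onto that gesture for the rest of the
# sequence. Thresholds and names are made up for illustration.

import math

SCROLL_THRESHOLD = 10.0   # pixels of midpoint translation before committing
ROTATE_THRESHOLD = 0.35   # radians of rotation before committing

class TwoFingerRecognizer:
    def __init__(self):
        self.locked = None    # None, "scroll", or "rotate"
        self.start = None     # initial (p1, p2) finger positions

    def begin(self, p1, p2):
        self.locked = None
        self.start = (p1, p2)

    def update(self, p1, p2):
        if self.locked:
            return self.locked
        s1, s2 = self.start
        # Translation: how far the midpoint between the fingers moved.
        mid0 = ((s1[0] + s2[0]) / 2, (s1[1] + s2[1]) / 2)
        mid1 = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
        translation = math.hypot(mid1[0] - mid0[0], mid1[1] - mid0[1])
        # Rotation: change in the angle of the line joining the fingers
        # (wrap-around near pi is ignored to keep the sketch short).
        a0 = math.atan2(s2[1] - s1[1], s2[0] - s1[0])
        a1 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
        rotation = abs(a1 - a0)
        if translation >= SCROLL_THRESHOLD:
            self.locked = "scroll"
        elif rotation >= ROTATE_THRESHOLD:
            self.locked = "rotate"
        return self.locked
```

If scrolling were always assumed, the `rotation` branch could never fire; the lock is what keeps the two interpretations from conflicting.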
>> However, other touch
>> motions like going from one finger to two to three are to be interpreted
>> differently.
>
> One to two should be the same as zero to two in this case, i.e., there
> should be no account of history. Moving to three is not part of this
> particular use case.
But it may be a use case for a different application. We should not
break one use case to insert another without fully weighing the impact
of both.
>> Perhaps it's a multitouch drawing application. If you
>> assume that any two finger drag is always a scroll, then it may become
>> impossible to interact with this application with two fingers. However,
>> the application would work as desired when using the utouch stack as
>> I've got in the ppa. To be clear, although this may seem a contrived
>> example, I believe we must cater to applications that want full MT
>> capabilities with any specific number of simultaneous touches while
>> still being able to handle obvious and deliberate gesture input.
>
> The example is much simpler than this - an application that does not
> care about MT events but just wants scrolling to work as before.
This is an argument considering only the one scrolling use case again.
We can't be single-minded about use cases.
>> Essentially, there will always be holes one can poke in MT and gestures.
>> I feel we should aim to be as simple as possible, while being mindful of
>> any glaringly bad side effects. I don't see inhibiting position then
>> scroll then position as a terrible side effect that is worth special
>> casing and making other things potentially more difficult.
>
> Please consider the possibility that it _is_ a glaringly bad side
> effect.
I will. In fact, I would be interested to hear how many people use their
trackpads like this today, how many don't, and how many never even
thought of it before. As you pointed out elsewhere in the thread, this
is supported not only by OS X, but also by synaptics in X. However, I
personally have never used a trackpad in this way, nor have I even
thought about it (which is probably clear by now :). The same is the
case for my wife, who happens to be sitting next to me.
So, I'd be interested in hearing how others use their trackpads in this
regard. Maybe I'm wrong, and we do need to rethink this. Or maybe the
usage is more like that of Fluxbox: there are those who really want it,
but statistically almost nobody uses it.
> We are talking about the special case of a trackpad moving the input
> focus with a finger. If we were to consider this the same as hovering,
> for example, we would automatically draw these conclusions:
>
> 1. It is not a touch, it is merely moving the input focus.
>
> 2. Since it is not a touch, it is not an MT touch either.
>
> In the hovering context, adding an additional N fingers would mean N +
> 1 touches appear at the same time, which is what has been suggested as
> a solution. There is most likely no need to be as drastic as not
> presenting the first finger as a touch, but the notion should be
> clear.
Are you suggesting that we don't post touch events from a trackpad until
at least two touches are active? If so, an application would need to
watch cursor motion and convert it to touch motion internally if another
touch point begins. That does not seem like a good interface to me.
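To illustrate why that interface seems awkward, here is a hedged sketch (Python, with invented handler and event names; not any real XI 2.1 client code) of the bookkeeping a client would need if the server only promoted the first contact to a touch once a second one appeared: buffer the cursor motion, then retroactively treat it as the first touch's history.

```python
# Hypothetical client-side bookkeeping under the "first finger is only
# cursor motion" model being discussed. All names are invented.

class Client:
    def __init__(self):
        self.cursor_trail = []   # cursor motion seen while one finger is down
        self.touches = {}        # touch id -> latest position

    def on_cursor_motion(self, x, y):
        # One finger down: the server reports only pointer motion.
        self.cursor_trail.append((x, y))

    def on_touch_begin(self, touch_id, x, y):
        if not self.touches and self.cursor_trail:
            # A second finger arrived: retroactively convert the buffered
            # cursor motion into the first touch's position.
            first_id = -1  # synthetic id for the promoted first contact
            self.touches[first_id] = self.cursor_trail[-1]
            self.cursor_trail = []
        self.touches[touch_id] = (x, y)
```

Every touch-aware client would have to carry some variant of this conversion logic, which is the burden the paragraph above objects to.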
In your last sentence you seem to imply there may be a way around this
without presenting the first touch as only cursor motion. I don't
believe there's a reasonable way to do this, so I challenge you to come
up with a method :). There are intricacies with where pointer events are
sent vs where touch events are sent that make everything in this area
difficult. To move forward, we need complete ideas that address how
events are generated and delivered for cursor motion and touch events,
including where events are sent when the cursor moves from one window to
another.
> As a general remedy, using a sort of enter/leave notion for touches
> was suggested during the development of the XI2.1 spec. What was the
> major objection to that idea?
I don't remember this. Can you refresh my memory?
Thanks,
-- Chase