
multi-touch-dev team mailing list archive

Re: Peter Hutterer's thoughts on MT in X

 

On Thu, 2010-10-14 at 08:00 +0200, Florian Echtler wrote:
> On Wed, 2010-10-13 at 12:03 -0400, Chase Douglas wrote:
> > On Wed, 2010-10-13 at 17:16 +0200, Henrik Rydberg wrote:
> > > On 10/13/2010 04:14 PM, Chase Douglas wrote:
> > > [...]
> > > This is under the assumption that we don't add regions to geis, or any other way
> > > of obtaining windowing information from the toolkits.
> > 
> > Correct. I'll try to answer the unstated question of whether we should
> > do this with a single geis daemon instance.
> > 
> > Imagine you've got a window with many widgets. One of them is a
> > scrollable region, so it subscribes to gestures and provides the region
> > in screen coordinates.
> > 
> > Now the window is moved on screen. Should the toolkit have to update
> > geis with the new location? If so, it sounds racy. The WM could infer
> > the new position. However, what happens when the window is resized? The
> > new area of the scrollable widget is toolkit and application specific,
> > so the toolkit will need to inform geis of the new region. Again, I see
> > things getting racy during a resize operation. It also adds a bunch of
> > round trip messages and potential for UI delay. There may also be
> > contention for the geis daemon between clients.
> >
> > In short, I think it could be done this way, but I don't think it's
> > optimal.
> This problem also occurred in libTISCH, as the whole "sensitive regions
> in client-server architecture" thing was unavoidable. My solution was
> that gestures can be flagged as "sticky", meaning "this gesture will
> transform its region". As a result, the gesture and associated touches
> will "stick" to the region, even if they leave the original boundaries.
> 
> Of course, this approach also has some race conditions, but at least
> it's quite fast.

This is actually the default behavior of XInput, called implicit passive
grabbing. As soon as a finger touches the screen over a window, the
clients receiving its events will continue to receive them even if the
touch moves outside the boundaries of the window.
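
For the curious, here's a rough sketch of what this looks like from the
client's side, written against the touch event names in the current XI
2.1 multitouch proposal (the names and version number may still change
before release, so treat the specifics as assumptions):

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int xi_opcode, evt_base, err_base;
    int major = 2, minor = 1;
    unsigned char bits[XIMaskLen(XI_LASTEVENT)] = { 0 };
    XIEventMask mask = { XIAllMasterDevices, sizeof(bits), bits };
    Window win;

    if (!dpy ||
        !XQueryExtension(dpy, "XInputExtension", &xi_opcode, &evt_base, &err_base) ||
        XIQueryVersion(dpy, &major, &minor) != Success)
        return 1;

    /* Select for touch events on our window only. */
    win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy), 0, 0, 400, 300, 0, 0, 0);
    XISetMask(bits, XI_TouchBegin);
    XISetMask(bits, XI_TouchUpdate);
    XISetMask(bits, XI_TouchEnd);
    XISelectEvents(dpy, win, &mask, 1);
    XMapWindow(dpy, win);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.xcookie.type == GenericEvent &&
            ev.xcookie.extension == xi_opcode &&
            XGetEventData(dpy, &ev.xcookie)) {
            XIDeviceEvent *de = ev.xcookie.data;
            /* Once TouchBegin has been delivered for a touch that started
             * over win, TouchUpdate/TouchEnd for the same touch id
             * (de->detail) keep arriving even after the touch point leaves
             * the window -- that's the implicit passive grab. */
            if (ev.xcookie.evtype == XI_TouchUpdate)
                printf("touch %d at %.0f,%.0f (possibly outside the window)\n",
                       de->detail, de->event_x, de->event_y);
            XFreeEventData(dpy, &ev.xcookie);
        }
    }
}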

My original thoughts were on the raciness of a client determining the
window hierarchy at any given moment in time. I'll follow up more in my
reply to Henrik.

> Some more thoughts:
> > There are a few context questions we need to ask first (I'm trying to
> > get at what you're asking; maybe I'm going off in the wrong direction,
> > though):
> > 
> > 1. Is a three finger pinch a system-wide gesture? I'll assume yes. Are
> > two finger gestures system-wide gestures? I'll assume no.
> I think decisions like this shouldn't be baked into the architecture. It
> sounds reasonable, but it's still a UI issue - maybe you'll want two
> fingers for global gestures at some point and only single-finger local
> gestures?

This decision isn't baked into how X works; I just made assumptions
about the UI design to generate an event sequence. Another system may
have different system-wide gesture definitions and handle things
differently.
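
To make that concrete, the routing policy could live in a table in the
UI layer rather than in the protocol. A toy illustration (entirely
hypothetical, not any existing geis interface):

/* Hypothetical policy: which gestures are system-wide is a UI decision,
 * expressed as data, not something fixed in X or the recognizer. */
enum scope { SCOPE_SYSTEM, SCOPE_LOCAL };

static enum scope gesture_scope(int num_fingers)
{
    /* Today's assumption: three or more fingers means system-wide. */
    if (num_fingers >= 3)
        return SCOPE_SYSTEM;
    /* A different UI could route two finger gestures globally instead,
     * or make single-finger gestures local-only, with no architectural
     * change. */
    return SCOPE_LOCAL;
}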

> > 2. Should a two-to-three finger transition that is not immediate trigger
> > a three finger gesture? (By "not immediate" I mean this isn't a three
> > finger movement from the start where the first couple of device touch
> > frames caught only two fingers.) I don't believe we've explicitly
> > answered this anywhere yet, but I think not. A three finger gesture
> > must be initiated with three fingers within a short period of time, on
> > the order of 10s of milliseconds (I'm leaving aside the case of gesture
> > continuation).
> I agree, this would roughly correspond to the "sticky" behaviour from above
> where some touchpoints are irrevocably paired with a
> region/window/whatever.
> 
> > 1. Two fingers touch down in scrollable region
> > 2. MT data passed to WM through XI 2.1 passive grab
> > 3. MT data also passed to app toolkit, but with "not-for-you-yet" flag
> > 4. WM and toolkit pass data to their own instances of geis/grail
> > 5. WM grail times out waiting for three fingers
> > 6. WM gets MT data back, realizes there's no gesture, "replays" touches
> > 7. In parallel, app toolkit gets two finger drag gesture from geis/grail
> > 8. App toolkit gets event saying MT events "are for you now"
> > 9. App toolkit starts scrolling
> > 10. Third finger goes down in another window
> > 11. Steps 2-6 and 8 repeated for this touch
> > 12. App toolkit for this other window handles touch as appropriate
> > 
> > Note that after the WM replays any touches, no more events generated by
> > those touches are sent to it. It will only get events from new touches
> > to the screen.
> If the WM does some kind of replay, wouldn't step 3 be superfluous?
> 
> > Also, during the "not-for-you-yet" phase, applications can provide UI
> > feedback or do anything else with the data, as long as they
> > appropriately handle both cases: the touches ending up either replayed
> > or consumed by the grabbing client.
> Ok, I see :-) But then, on the other hand, wouldn't the replay be
> unnecessary, and couldn't the specific geis instance just cache the data
> until it gets the OK from the WM?

There are two geis instances in the example: one for the WM and one for
the client. In Ubuntu 10.10, we have one gesture recognizer embedded in
the X server, but we're moving that to the client side as libraries in
the future. Thus, there's no communication between the geis of the WM
and the geis of the applications.

The logical next question is, would communication between them help? I
think the answer is no. The geis of the WM doesn't have the context to
know of gesture regions in the applications, so it can't properly
perform gesture recognition. We get back to the issues noted above with
trying to have a single gesture recognition daemon.

Thus, we need the WM to replay the events so that the application can
begin handling gestures or the raw multitouch events.
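
To tie this back to the sequence above, here's a rough sketch of the WM
side, again against the proposed XI 2.1 touch grab interface (the grab
calls, the timeout constant, and the helper names are assumptions for
illustration):

#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>

#define GESTURE_TIMEOUT_MS 60  /* the "10s of milliseconds" from above */

/* Step 2: a passive touch grab on the root window means the WM sees
 * every touch first, while clients receive the same events flagged as
 * not-yet-owned (step 3). */
static void wm_take_touch_grab(Display *dpy, Window root)
{
    unsigned char bits[XIMaskLen(XI_LASTEVENT)] = { 0 };
    XIEventMask mask = { XIAllMasterDevices, sizeof(bits), bits };
    XIGrabModifiers mods = { XIAnyModifier, 0 };

    XISetMask(bits, XI_TouchBegin);
    XISetMask(bits, XI_TouchUpdate);
    XISetMask(bits, XI_TouchEnd);
    XIGrabTouchBegin(dpy, XIAllMasterDevices, root, False, &mask, 1, &mods);
}

/* Steps 5-8: when grail times out after GESTURE_TIMEOUT_MS without a
 * third finger, the WM rejects the touch, which "replays" it to the
 * next client in the hierarchy; accepting it instead would consume the
 * touch for a system-wide gesture. */
static void wm_replay_touch(Display *dpy, int deviceid,
                            unsigned int touchid, Window root)
{
    XIAllowTouchEvents(dpy, deviceid, touchid, root, XIRejectTouch);
}

After the reject, the WM stops receiving events for that touch, matching
the note above that replayed touches no longer reach it.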

Thanks,

-- Chase



