elementary-dev-community team mailing list archive
Fwd: RFC: gesture management model
Forwarding this into the dev community. I haven't read through this (nor am
I in a position to contribute code to elementary for the next several
months), but this seems to be something we should definitely look into for
the L+1 cycle.
---------- Forwarded message ----------
From: Carlos Garnacho <carlosg@xxxxxxxxx>
Date: Sat, Aug 10, 2013 at 6:18 PM
Subject: RFC: gesture management model
To: gtk-devel-list <gtk-devel-list@xxxxxxxxx>
On the gestures GTK+ branch there is a collection of event
controllers that interpret events in order to trigger actions.
This branch never really got into how gestures get handled
throughout a hierarchy, as Matthias pointed out in preliminary
review.
So I've been thinking about how gesture management could happen
as a toolkit feature, and I'm now looking into putting this into
practice, but it would be great to get comments early in the
loop. It is worth mentioning that the gestures branch already
goes in the proposed direction, even though it currently focuses
on plain event handling.
In the blurb below I talk much about "touch sequences", but
controllers could manage pretty much any kind of event, so this
applies to any user interaction.
The gestures branch defines basic event handling. In order to
have these controllers work throughout a widget hierarchy, I've
tried to define further how they should behave:
* These handle GdkEvents and GtkWidget::grab-notify.
* Each subclass tries to recognize a concrete action.
* The number of touch sequences a controller handles depends
  on the gesture it recognizes.
* A touch sequence can be declined at any point, for external
  and internal reasons.
* Controllers must acknowledge/decline a touch sequence ASAP,
  usually based on timeouts or thresholds.
* Controllers keep track of touch sequences during all their
  lifetime, regardless of the point above. This acts as an
  "implicit grab" on a gesture, making any later touch sequence
  ineffective till the monitored one disappears.
Note that this pursuit of simplicity in event handling means
that multiple controllers might be interested in the same
events, even within the same widget. With that behavior defined,
a basic set of gestures could look like:
* Tap/Click: For simple clicks.
* Long press: For long, mostly stationary presses.
* Drag: Handles drags, reports dx/dy from initial coordinates.
* Swipe: Reports direction/velocity when a sequence finishes.
* Zoom/Pinch: Handles 2 touch sequences, reports distance
  changes between them.
* Rotate: Handles 2 touch sequences, reports angle changes
  between them.
If we go for this model, where separate touch sequences can be
accepted or declined by different controllers, and controllers
report early enough whether the handled action is being
triggered, the possible overall states a controller goes through
would be:
Idle > Capturing > Idle
Idle > Capturing > Declined > Idle
Idle > [ Capturing > Acknowledged ]+ > Idle
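These transitions can be captured in a small plain-C state machine; the enum and names below are illustrative, not the actual API on the gestures branch:

```c
#include <stdbool.h>

/* Controller states from the proposal; names are illustrative. */
typedef enum {
  STATE_IDLE,
  STATE_CAPTURING,
  STATE_ACKNOWLEDGED,
  STATE_DECLINED
} ControllerState;

/* Returns true for the legal transitions implied by the sequences
 * above: Idle > Capturing; Capturing > Acknowledged/Declined/Idle;
 * Acknowledged > Capturing (the + repetition) or Idle;
 * Declined > Idle. */
static bool
transition_valid (ControllerState from, ControllerState to)
{
  switch (from)
    {
    case STATE_IDLE:
      return to == STATE_CAPTURING;
    case STATE_CAPTURING:
      return to == STATE_ACKNOWLEDGED || to == STATE_DECLINED ||
             to == STATE_IDLE;
    case STATE_ACKNOWLEDGED:
      return to == STATE_CAPTURING || to == STATE_IDLE;
    case STATE_DECLINED:
      return to == STATE_IDLE;
    }
  return false;
}
```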
Handling through the hierarchy
Given the way event controllers make use of events, I think it
is best for them to take events from the
GtkWidget::captured-event handler, so events are effectively
handled from the topmost to the deepest widget, and the implicit
grab ensures the widget stack receiving events from a touch
sequence stays static (at least until active grabs come into
play, but all sequences should be declined in that case anyway).
Independently of the way events are delivered, the same event
could be fed to multiple event controllers throughout the
implicit-grab widget stack, and any of those can enter the
"Acknowledged" state at any time.
However, what "Acknowledged" means on a controller is widget
dependent, as is how multiple controllers work together within
the same widget. So any model to claim user actions for a single
purpose must happen at the widget level.
In such a model, there should be at least a way to make a widget
ACK a touch sequence, and a signal to have other widgets in the
implicit-grab widget stack decline that same sequence. This way,
a widget in the stack may claim control while the others back
out. So essentially:
void (* ownership_claimed)      (GtkWidget        *widget,
                                 GdkEventSequence *sequence);

void gtk_widget_claim_ownership (GtkWidget        *widget,
                                 GdkEventSequence *sequence);
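To make the intended semantics concrete, here is a toy plain-C model of claiming (all names hypothetical, no GTK+ involved): when one widget in the implicit-grab stack claims a sequence, every other widget declines it:

```c
#include <stddef.h>

typedef enum { SEQ_PENDING, SEQ_CLAIMED, SEQ_DECLINED } SeqState;

/* One entry per widget in the implicit-grab stack, tracking what
 * that widget decided about a single touch sequence. */
typedef struct {
  SeqState state;
} StackEntry;

/* When `claimer` claims the sequence, every other widget in the
 * stack is told to decline it, mirroring the ownership_claimed
 * notification. */
static void
claim_ownership (StackEntry *stack, size_t n, size_t claimer)
{
  for (size_t i = 0; i < n; i++)
    stack[i].state = (i == claimer) ? SEQ_CLAIMED : SEQ_DECLINED;
}
```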
From the usecases below, it is hard to find a fully
comprehensive high-level behavior, especially as to how
intra-widget controllers get to cooperate. I think a good
default for the most common cases would be:
* A widget can be set 0..n controllers (usually 1 or 2 at
  most).
* All controllers get fed all events by default.
* When a gesture enters the Acknowledged state, all touch
  sequences handled there are claimed for the widget.
* A touch sequence being claimed elsewhere makes all other
  widgets decline interaction with it.
A lower level interface should still be present so widgets may
implement different usage patterns or concatenate series of
gestures. This would consist mostly of
gtk_event_controller_handle_event() as implemented on the
gestures branch.
Note that this notification mechanism is somewhat out of band
with device grabs: when the implicit grab is broken in any way,
controllers should decline interaction on that device
altogether, so grabs cancel all controllers listening to
sequences from that device.
In this set of usecases I've tried to synthesize the
intricacies of gesture management: intra-widget behavior,
handling through a hierarchy, and handling through slightly
differing hierarchies. I think other, more complex usecases may
be considered combinations of these.

Some of these usecases are currently implemented in GTK+,
although differently, so the described way of working based on
the proposal above is still fictional.
Usecase 1. The scrolled window
Makes use of 2 controllers:

* A drag controller, in order to make content follow the
  touchpoint.
* A swipe controller, to initiate smooth scrolling at a
  given velocity when the sequence finishes.

User events are fed to both controllers; it's the same
touchpoint that triggers both, and each performs its own
action.
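As an illustration of the swipe controller's job, a release velocity could be estimated from the last two motion events roughly like this (hypothetical helper, not GtkScrolledWindow's actual code):

```c
#include <math.h>

/* Sketch of what a swipe controller might compute when the
 * sequence finishes: a release velocity (pixels per second) from
 * the last two motion events, used to seed kinetic scrolling. */
static double
swipe_velocity (double pos_prev_px, double pos_last_px,
                double t_prev_ms, double t_last_ms)
{
  double dt = (t_last_ms - t_prev_ms) / 1000.0;  /* ms -> s */

  return dt > 0.0 ? (pos_last_px - pos_prev_px) / dt : 0.0;
}
```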
Usecase 2. Long presses within a scrolled window
One example may be touch text selection in GTK+: after a long
press with little movement, scrolling backs off and text
selection takes place.

In this situation we would have a scrolled window like in
Usecase 1, and a textview with a long-press controller. If a
touch happens, all 3 controllers would attempt to recognize the
sequence at once, so that:
* If the user moves past a threshold in time: the scrolled
  window claims the touch sequence, the long press in the
  textview declines it. Scrolling happens.
* If the long press is triggered before moving too far: the
  textview claims the sequence, the 2 controllers on the
  scrolled window back out. Text selection happens.
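The arbitration between both outcomes can be sketched as a tiny plain-C decision function; the threshold and timeout values are illustrative, not GTK+'s actual defaults:

```c
#include <stdbool.h>

#define DRAG_THRESHOLD_PX   8.0   /* illustrative, not GTK+'s values */
#define LONG_PRESS_MS     500.0

typedef enum { WIN_NONE, WIN_SCROLL, WIN_LONG_PRESS } Winner;

/* Decide which controller claims the sequence, given how far the
 * touch has moved and how long it has been held: moving past the
 * threshold in time wins for scrolling; staying put until the
 * timeout wins for text selection. */
static Winner
arbitrate (double moved_px, double held_ms)
{
  if (moved_px > DRAG_THRESHOLD_PX)
    return WIN_SCROLL;
  if (held_ms >= LONG_PRESS_MS)
    return WIN_LONG_PRESS;
  return WIN_NONE;  /* still capturing, nobody has claimed yet */
}
```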
Usecase 3. Drags+swipes at multiple levels
As seen in any phone app doing lists+side panes, the swipe
left/right actions on the contact list in Android, etc...
Here we've got a scrolled window like in Usecase 1, and a widget
with at least a drag controller, and maybe a swipe controller if
inertia-like behavior is desired. All those controllers would be
consuming the same events, and each would have tweaked behavior
to acknowledge the sequence based on directionality.
* If the user moves predominantly vertically, the scrolled
window claims the sequence, the other widget backs off
so further events have no effect there.
* If the movement is mostly horizontal, the scrolled
window backs off and any scrolling stops.
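The directionality test both sides of Usecase 3 would apply boils down to comparing the axes of movement; a minimal sketch (hypothetical helper name):

```c
#include <math.h>
#include <stdbool.h>

/* Directionality check: the scrolled window only claims sequences
 * whose movement is predominantly vertical; the side-pane widget
 * claims the predominantly horizontal ones. */
static bool
is_predominantly_vertical (double dx, double dy)
{
  return fabs (dy) > fabs (dx);
}
```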
Usecase 4. Two scrolled windows side by side
Hardcore multitouch should indeed allow for multiple widgets to
be operated at the same time, and that already happens in GTK+
if widgets set GDK_TOUCH_MASK. With this proposal nothing would
change for this situation: each widget would operate on
different touch sequences, so these won't overlap in the first
place.

There's the question of what happens if either of the two
widget stacks triggers a device/GTK+ grab. Grabs at the device
level must trigger declines on every touch sequence from that
device, so gestures are essentially cancelled; it is still a
first come, first served situation though.
Usecase 5. Two scrolled windows in a multitouch GtkPaned
Late iterations of the multitouch GTK+ branch had this 2-finger
gesture on GtkPaned to resize it when you quasi-simultaneously
put one finger on each side of the separator, and I figured out
how this would work with this model. This usecase is maybe a
bit more funky, as there's no stock controller that immediately
implements this behavior.
Given a controller like this were implemented, there are two
possibilities:

* One touch sequence comes and triggers scrolling (past a
  threshold) before a second one happens on the other pane:

  The first sequence is claimed by the scrolled window, so
  it is declined on the 2-finger gesture. The second touch
  alone wouldn't trigger the 2-finger gesture, and it could
  eventually be acknowledged by the scrolledwindow in the
  second pane.

  Also, as event controllers do still keep minimal track of
  declined event sequences, the first touch would keep a
  "slot" busy, so no further extra fingers may trigger the
  2-finger gesture.

* If both touch sequences arrive in time, and didn't move
  past the thresholds that make the scrolledwindows claim
  their respective sequences:

  Both touch sequences are claimed by the 2-finger gesture,
  which makes both scrolled windows decline their respective
  sequences, as each touchpoint has a separate implicit-grab
  widget stack.
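A controller implementing this behavior would essentially apply a check like the following toy sketch (the time window and side encoding are made up for illustration):

```c
#include <math.h>
#include <stdbool.h>

/* Toy check for the GtkPaned 2-finger gesture: it triggers only
 * when the two touches arrive quasi-simultaneously and land on
 * opposite sides of the separator. Values are illustrative. */
#define PAIR_WINDOW_MS 250.0

static bool
paned_gesture_triggers (double t1_ms, double t2_ms,
                        int side1, int side2)  /* 0 = left, 1 = right */
{
  return fabs (t2_ms - t1_ms) <= PAIR_WINDOW_MS && side1 != side2;
}
```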
Usecase 6. Pinch/Zoom/Drag/whatnot
Or what Ephy, Evince and EOG could make use of.

This usecase honestly fits the proposed high-level model the
least, especially if you want to smoothly change between modes
of operation as touchpoints come and go, having drag possibly
flip to the other remaining touchpoint, etc...

In every previous usecase, having a sequence claimed by a
controller in a widget meant it was declined by every other
widget; for this usecase dragging and 2-finger gestures should
be handled within the same widget, so events can be just routed
and we don't get previous sequences blocking controllers. This
is the most obvious candidate for the lower level event
interface.

In practical terms, WebKitWebView would need as much as that,
plus pre-interpretation of GdkEventTouch in order to produce
DOM TouchEvents; there's no sane way to wrap a model around
that.
Regardless of intra-widget event delivery funkiness, having
external widgets claim a touch sequence would make all
controllers decline the sequence as usual, so externally to the
widget the behavior is consistent.
So... thanks for bearing with me this long. I think this model
is simple enough to grasp, offers a sane default behavior and
covers most situations well, besides those that are already
complex anyway and might need some extra glue code. Many pieces
are already there, so something similar could be done in time
for 3.12.
Constructive criticism is welcome, it'd also be great to know of
usecases that wouldn't fit this model well.