multi-touch-dev team mailing list archive

Re: Fwd: GEIS 2.0 specification

 

 On 10/08/2010 06:49 PM, ext Stephen M. Webb wrote:
On Fri, 2010-10-08 at 09:40 -0600, Duncan McGreggor wrote:
I believe Stephen already has some changes he wants to make to this
proposed spec, so expect updates :-)
I'm thinking of radical changes, in fact.  This came out of the XDS
decision to move gesture recognition into the client side.

Before I go too far, I want to give a brief description of what I think
the 2.0 API should be like, to see if this jibes with everyone.

Geis 2.0 will have two ends:  an input end and a recognition end.

The application or toolkit will receive input events from, well,
wherever.  XI2.1 in the case of X11, or native inputs in the case of a
framebuffer-based installation (geis should not depend in any way on X).
To that end, the application or toolkit will need to get an input cooker
from geis, one per input device I imagine, and feed the raw input events
into the cooker in its event loop.  The cooker transforms the raw input
events into internal geis input events, and may require further inputs
such as coordinate mappings depending on the requirements.
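
To make the cooker idea a bit more concrete, here is a rough sketch of what
feeding raw events through one might look like. Every geis_* name below is
invented purely for illustration; nothing here is settled API:

/* Illustrative sketch only -- all names are hypothetical, just showing the
 * shape of the proposed "input cooker" idea, not a real geis 2.0 API. */
#include <stdio.h>

typedef struct { int device_id; } GeisCooker;       /* one cooker per device */
typedef struct { int slot; double x, y; } RawEvent; /* XI2.1 or native event */

/* Hypothetical: translate a raw event into internal geis input events. */
static void geis_cooker_feed(GeisCooker *c, const RawEvent *ev)
{
    printf("device %d: cooked touch slot %d at (%.1f, %.1f)\n",
           c->device_id, ev->slot, ev->x, ev->y);
}

int main(void)
{
    GeisCooker cooker = { .device_id = 7 };   /* obtained from geis, per device */
    RawEvent raw = { .slot = 0, .x = 120.0, .y = 48.5 };

    /* The application's event loop hands each raw input event to the cooker;
     * the cooker may also need extra inputs such as coordinate mappings. */
    geis_cooker_feed(&cooker, &raw);
    return 0;
}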

That sounds optimal to me. It is exactly the way we would like to have it,
since we are also targeting systems without X.

The application or toolkit will then have to poll the geis event loop.
This drives outstanding cooked input events through the recognizer and
results in zero or more events being output.  The possible events
include (1) preview events -- the so-called tentative events, (2)
gesture events, just like in geis 1.0, (3) touch events, for touches
that are not recognized as part of a gesture, and (4) user-defined
events, discussed below.  The application or toolkit can do what it
likes with these events.
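
A correspondingly rough sketch of the polling and dispatch step on the
recognition end, again with invented names; only the four event classes are
taken from the description above:

/* Sketch only -- hypothetical names, not an actual geis 2.0 API. */
#include <stdio.h>

typedef enum {
    GEIS_EVENT_NONE,        /* nothing pending                           */
    GEIS_EVENT_PREVIEW,     /* (1) tentative/preview gesture             */
    GEIS_EVENT_GESTURE,     /* (2) recognized gesture, as in geis 1.0    */
    GEIS_EVENT_TOUCH,       /* (3) touch not part of any gesture         */
    GEIS_EVENT_USER         /* (4) user-defined, from custom recognizer  */
} GeisEventClass;

/* Hypothetical stand-in: drive cooked input through the recognizer and
 * return the next output event, or GEIS_EVENT_NONE when drained. */
static GeisEventClass geis_next_event(void)
{
    static const GeisEventClass queue[] = {
        GEIS_EVENT_PREVIEW, GEIS_EVENT_GESTURE, GEIS_EVENT_NONE
    };
    static int i = 0;
    return queue[i++];
}

int main(void)
{
    GeisEventClass ev;
    while ((ev = geis_next_event()) != GEIS_EVENT_NONE) {
        switch (ev) {
        case GEIS_EVENT_PREVIEW: printf("tentative gesture\n");   break;
        case GEIS_EVENT_GESTURE: printf("gesture event\n");       break;
        case GEIS_EVENT_TOUCH:   printf("plain touch\n");         break;
        case GEIS_EVENT_USER:    printf("user-defined event\n");  break;
        default: break;
        }
    }
    return 0;
}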

I'm not quite sure how exactly you imagine the "polling" working here.
I really liked the current approach with the file descriptor and the select() call.
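
For reference, the pattern being referred to is roughly the following;
geis_get_fd() is just a placeholder, the actual geis 1.0 entry points may be
named differently:

#include <sys/select.h>
#include <unistd.h>
#include <stdio.h>

static int geis_get_fd(void) { return STDIN_FILENO; }  /* placeholder fd */

int main(void)
{
    int fd = geis_get_fd();
    fd_set readfds;

    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);

    /* Block until geis has events ready, then dispatch them; this lets the
     * application multiplex geis with its other descriptors. */
    if (select(fd + 1, &readfds, NULL, NULL, NULL) > 0 && FD_ISSET(fd, &readfds)) {
        printf("geis events ready to dispatch\n");
        /* geis 1.0: call the event-dispatch entry point here */
    }
    return 0;
}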

I would create a default recognizer built on utouch-grail, but the next
step (geis 2.1?) would be to provide a programmable recognizer, somewhat
akin to the shaders available in 3D graphics rendering pipelines.
Applications can load programmable gesture recognizers dynamically, and
can have different recognizers for different screen regions or input
devices.  The programmable recognizers can emit the user-defined events
mentioned above.
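
To illustrate the shader analogy, a purely speculative sketch of what a
loadable recognizer might look like; none of these names exist in geis today:

#include <stdio.h>

typedef struct { int slot; double x, y; } GeisCookedEvent;

/* A programmable recognizer: a table of callbacks geis would invoke for
 * each cooked input event, allowed to emit user-defined events. */
typedef struct {
    const char *name;
    void (*begin)(void *state);
    void (*consume)(void *state, const GeisCookedEvent *ev);
    void (*end)(void *state);   /* may emit a user-defined event here */
} GeisRecognizer;

static void flick_begin(void *state)   { (void)state; }
static void flick_consume(void *state, const GeisCookedEvent *ev)
{ (void)state; printf("consume touch in slot %d\n", ev->slot); }
static void flick_end(void *state)
{ (void)state; printf("emit user-defined 'three-finger-flick' event\n"); }

int main(void)
{
    /* An application could register one recognizer per screen region or
     * per input device, analogous to binding a shader per draw call. */
    GeisRecognizer r = { "three-finger-flick",
                         flick_begin, flick_consume, flick_end };
    GeisCookedEvent ev = { 0, 10.0, 20.0 };
    r.begin(NULL);
    r.consume(NULL, &ev);
    r.end(NULL);
    return 0;
}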

There are still some details I need to work out before I can formalize
this more, but I would value some feedback on whether this is sane or
not.





