multi-touch-dev team mailing list archive

Re: Fwd: GEIS 2.0 specification


On 10/08/2010 06:49 PM, Stephen M. Webb wrote:
[...]

> Before I go too far, I want to give a brief description of what I think
> the 2.0 API should be like, to see if this jibes with everyone.
> 
> Geis 2.0 will have two ends:  an input end and a recognition end.
> 
> The application or toolkit will receive input events from, well,
> wherever.  XI2.1 in the case of X11, or native inputs in the case of a
> framebuffer-based installation (geis should not depend in any way on X).
> To that end, the application or toolkit will need to get an input cooker
> from geis, one per input device I imagine, and feed the raw input events
> into the cooker in its event loop.  The cooker transforms the raw input
> events into internal geis input events, and may require further inputs
> such as coordinate mappings depending on the requirements.
> 
> The application or toolkit will then have to poll the geis event loop.
> This drives outstanding cooked input events through the recognizer and
> results in zero or more events being output.  The possible events
> include (1) preview events -- the so-called tentative events, (2)
> gesture events, just like in geis 1.0, (3) touch events, for touches
> that are not recognized as part of a gesture, and (4) user-defined
> events, discussed below.  The application or toolkit can do what it
> likes with these events.


Sounds like you just reinvented XI2.1 and grail on top of it :-) The event part
of the above may fit nicely with the idea of moving things over to the client
side - provided all details can be resolved without global event propagation.
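
To make the proposed flow concrete, here is a rough sketch of one loop
iteration on the consumer side. Every name in it is invented -- there is no
geis 2.0 API yet -- so treat this purely as illustration:

/* Hypothetical sketch only: geis 2.0 has no published API, so every
 * name below is invented to illustrate the flow described above. */
#include <stdbool.h>

typedef struct geis_instance geis_instance;
typedef struct geis_cooker   geis_cooker;    /* one per input device */
typedef struct geis_event    geis_event;

geis_cooker *geis_cooker_new(geis_instance *g, int device_id);
void geis_cooker_feed(geis_cooker *c, const void *raw);
bool geis_poll(geis_instance *g, geis_event **ev_out);
int  geis_event_class(const geis_event *ev);

enum {                       /* the four output classes in the proposal */
    GEIS_EV_PREVIEW,         /* tentative, may later be rescinded      */
    GEIS_EV_GESTURE,         /* recognized gesture, as in geis 1.0     */
    GEIS_EV_TOUCH,           /* touch not claimed by any gesture       */
    GEIS_EV_USER             /* emitted by a user-defined recognizer   */
};

void one_iteration(geis_instance *g, geis_cooker *c,
                   const void *raw /* e.g. an XI2.1 touch event */)
{
    geis_event *ev;

    /* Raw input goes in... */
    geis_cooker_feed(c, raw);

    /* ...and cooked results are pulled back out and dispatched. */
    while (geis_poll(g, &ev)) {
        switch (geis_event_class(ev)) {
        case GEIS_EV_PREVIEW: /* show tentative feedback  */ break;
        case GEIS_EV_GESTURE: /* invoke gesture handler   */ break;
        case GEIS_EV_TOUCH:   /* deliver as a plain touch */ break;
        case GEIS_EV_USER:    /* application-defined      */ break;
        }
    }
}

The pull model is the important part: the application feeds raw events in and
polls cooked results out, so everything stays on the client side.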

> I would create a default recognizer built on utouch-grail, but the next
> step (geis 2.1?) would be to provide a programmable recognizer, somewhat
> akin to the shaders available in 3D graphics rendering pipelines.
> 
> Applications can load programmable gesture recognizers dynamically, and
> can have different recognizers for different screen regions or input
> devices.  The programmable recognizers can emit the user-defined events
> mentioned above.


I think the idea of multiple recognizers overlooks the fact that every single
multitouch example of the past ten years fits easily into a single 300-line
recognizer. Mouse gestures form a separate category. I simply do not see
multiple recognizers as a big deal - I would much rather have a nice, working
interface on my computer.
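
For concreteness, the shader analogy would presumably boil down to a loadable
interface along these lines. Again, all names are invented; nothing like this
exists in geis or utouch-grail today:

/* Equally hypothetical: one way a loadable recognizer "shader" could
 * look.  None of these names exist in geis or utouch-grail. */
typedef struct geis_touch_frame geis_touch_frame;  /* cooked touch state  */
typedef struct geis_emitter     geis_emitter;      /* sink for out-events */

typedef struct geis_recognizer_ops {
    void *(*create)(void);                  /* allocate per-instance state */
    void  (*destroy)(void *state);
    void  (*process)(void *state,           /* called per cooked frame;    */
                     const geis_touch_frame *frame,
                     geis_emitter *out);    /* may emit user-defined events */
} geis_recognizer_ops;

/* A plugin would export a well-known symbol, e.g.:
 *     const geis_recognizer_ops geis_recognizer_entry = { ... };
 * and the application would dlopen() it and bind it to a screen
 * region or input device. */

An application would dlopen() such a plugin and bind its ops table to a screen
region or input device, which is where the per-region, per-device flexibility
mentioned above would come from.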

> There are still some details I need to work out before I can formalize
> this more, but I would value some feedback on whether this is sane or
> not.


I think it is a bit on the insane side, but hey, it should be fun too. :-)

Henrik


