
multi-touch-dev team mailing list archive

Re: Fwd: GEIS 2.0 specification

Gentlemen,

I've gone through the specification for GEIS 2.0, and I think it is moving in a good direction. It seems, however, that the previously described "radical changes" will make a large part of this document obsolete once again. As I mentioned earlier in this thread, that direction sounds optimal to me. My only concern is that the whole thing might become a heavily over-engineered beast if you try to make it fit every possible use case right from the beginning.

I would suggest, for example, concentrating on touchscreen / touchpad input for the moment (unless you already have more specific internal needs, of course).

By treating GEIS as just a transformation pipe with a raw-event-input end and a gesture-event-output end, I think you might be able to avoid quite a lot of difficulties. The whole region subscription machinery might no longer be necessary, since a toolkit could deliver only the events from the region it is interested in. For us in Qt, that would mean forwarding to GEIS only the events we receive from within a certain region. That way GEIS might not need to know about regions at all, nor would we need to have an X window created before creating a GEIS instance, etc. Or am I missing something?

If we further presume that the toolkit (be it GTK, Qt, or anything else) manages all the desired input devices, GEIS could do pure event processing and possibly would not need to know about the input devices itself at all.
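
To make the idea concrete, here is a minimal sketch of that division of labour. Everything GEIS-facing in it is hypothetical: the region filtering lives in the toolkit, and geis_pipe_push() is an invented stand-in for the raw-event-input end of the pipe, not an existing GEIS call.

    /* Toolkit-side sketch: filter raw touches by region and forward
     * only the matching ones into the (hypothetical) raw-event-input
     * end of the GEIS pipe.  GEIS never learns about the region. */
    #include <stdio.h>

    typedef struct { float x, y, w, h; } region;
    typedef struct { int id; float x, y; } raw_touch;

    static int region_contains(const region *r, const raw_touch *t)
    {
        return t->x >= r->x && t->x < r->x + r->w
            && t->y >= r->y && t->y < r->y + r->h;
    }

    /* Invented stand-in for the raw-event-input end of the pipe. */
    static void geis_pipe_push(const raw_touch *t)
    {
        printf("forwarded touch %d at (%.1f, %.1f)\n", t->id, t->x, t->y);
    }

    int main(void)
    {
        region widget = { 100, 100, 200, 150 };
        raw_touch evts[] = { { 1, 150, 170 }, { 2, 10, 10 } };

        /* Toolkit event loop: only events inside the widget's
         * region ever reach GEIS. */
        for (int i = 0; i < 2; ++i)
            if (region_contains(&widget, &evts[i]))
                geis_pipe_push(&evts[i]);
        return 0;
    }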

I hope my input does not add too much confusion! ;-)

Best Regards,

Zeno Albisser



On 10/08/2010 06:49 PM, ext Stephen M. Webb wrote:
On Fri, 2010-10-08 at 09:40 -0600, Duncan McGreggor wrote:
> I believe Stephen already has some changes he wants to make to this
> proposed spec, so expect updates :-)

I'm thinking of radical changes, in fact.  This came out of the XDS
decision to move gesture recognition into the client side.

Before I go too far, I want to give a brief description of what I think
the 2.0 API should be like, to see if this jibes with everyone.

Geis 2.0 will have two ends:  an input end and a recognition end.

The application or toolkit will receive input events from, well,
wherever: XI2.1 in the case of X11, or native inputs in the case of a
framebuffer-based installation (geis should not depend in any way on X).
To that end, the application or toolkit will need to get an input cooker
from geis, one per input device I imagine, and feed the raw input events
into the cooker in its event loop.  The cooker transforms the raw input
events into internal geis input events, and may require further inputs
such as coordinate mappings depending on the requirements.
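
(For illustration only: a compilable sketch of that cooker flow. The
cooker names below are invented stand-ins; the real 2.0 calls do not
exist yet.)

    /* One cooker per input device; the toolkit pushes raw events in,
     * and the cooker turns them into internal geis input events. */
    #include <stdio.h>

    typedef struct { int device_id; } geis_cooker;   /* opaque in practice */
    typedef struct { int touch_id; float x, y; } raw_event;

    /* Hypothetical constructor: one cooker per input device. */
    static geis_cooker make_cooker(int device_id)
    {
        geis_cooker c = { device_id };
        return c;
    }

    /* Hypothetical feed call: raw event in, cooked geis input event
     * queued for recognition (here just printed). */
    static void cooker_feed(geis_cooker *c, const raw_event *ev)
    {
        printf("device %d: cooked touch %d at (%.1f, %.1f)\n",
               c->device_id, ev->touch_id, ev->x, ev->y);
    }

    int main(void)
    {
        geis_cooker pad = make_cooker(7);   /* e.g. a touchpad */
        raw_event ev = { 1, 12.0f, 34.0f };
        cooker_feed(&pad, &ev);             /* from the toolkit event loop */
        return 0;
    }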

The application or toolkit will then have to poll the geis event loop.
This drives outstanding cooked input events through the recognizer and
results in zero or more events being output.  The possible events
include (1) preview events -- the so-called tentative events, (2)
gesture events, just like in geis 1.0, (3) touch events, for touches
that are not recognized as part of a gesture, and (4) user-defined
events, discussed below.  The application or toolkit can do what it
likes with these events.
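
(A compilable sketch of what that polling loop might look like. The
type and function names are invented; only the four event categories
come from the description above.)

    /* Drive cooked input through the recognizer and dispatch
     * whatever comes out. */
    #include <stdio.h>

    typedef enum {
        GEIS_EVENT_PREVIEW,   /* tentative, may be withdrawn */
        GEIS_EVENT_GESTURE,   /* as in geis 1.0 */
        GEIS_EVENT_TOUCH,     /* touches not part of any gesture */
        GEIS_EVENT_USER       /* emitted by programmable recognizers */
    } geis_event_type;

    typedef struct { geis_event_type type; } geis_event;

    /* Stand-in for the real poll call; returns 0 when drained. */
    static int geis_poll(geis_event *out)
    {
        static int n = 0;
        if (n >= 2) return 0;
        out->type = (n++ == 0) ? GEIS_EVENT_PREVIEW : GEIS_EVENT_GESTURE;
        return 1;
    }

    int main(void)
    {
        geis_event ev;
        while (geis_poll(&ev)) {
            switch (ev.type) {
            case GEIS_EVENT_PREVIEW: puts("tentative gesture");  break;
            case GEIS_EVENT_GESTURE: puts("recognized gesture"); break;
            case GEIS_EVENT_TOUCH:   puts("plain touch");        break;
            case GEIS_EVENT_USER:    puts("user-defined event"); break;
            }
        }
        return 0;
    }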

I would create a default recognizer built on utouch-grail, but the next
step (geis 2.1?) would be to provide a programmable recognizer, somewhat
akin to the shaders available in 3D graphics rendering pipelines.
Applications can load programmable gesture recognizers dynamically, and
can have different recognizers for different screen regions or input
devices.  The programmable recognizers can emit the user-defined events
mentioned above.
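
(Again purely illustrative: one plausible mechanism for loading such a
recognizer at run time is a plain shared object, along these lines.
The module path and the geis_recognize entry-point name are
assumptions, not part of any spec.)

    /* Load a gesture recognizer module dynamically and look up its
     * (hypothetical) entry point.  Build with: cc demo.c -ldl */
    #include <dlfcn.h>
    #include <stdio.h>

    typedef int (*recognize_fn)(const void *cooked_events, int count);

    int main(void)
    {
        /* A recognizer compiled as a shared object, one per gesture
         * set; fails gracefully if the module is absent. */
        void *mod = dlopen("./three-finger-swipe.so", RTLD_NOW);
        if (!mod) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        recognize_fn recognize = (recognize_fn)dlsym(mod, "geis_recognize");
        if (recognize)
            puts("recognizer loaded; attach to a region or device here");

        dlclose(mod);
        return 0;
    }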

There are still some details I need to work out before I can formalize
this more, but I would value some feedback on whether this is sane or
not.
