
multi-touch-dev team mailing list archive

GEIS input restriction requirements

 

Hey multitouch folks,

I'm looking for input on input constraints for the new GEIS
specification.

OK, I'll start by describing what an input constraint is (and please, if
you have a better name speak up now).  A gesture input device by itself
just gives a stream of position data.  The various levels of the gesture
stack interpret these data and possibly project them into a display
space, such as a screen or a sub-region of the screen such as a window.
With future technology, this may be a 3-dimensional region of
space or something I can't even imagine.

When an application subscribes to receive gestures, it rarely cares
about gestures from all windows, screens, or spaces.  Generally, it
will only want to receive gestures that have been projected into the
feedback space (that is, display, window, volume, hacienda) over which
it has control.

So, a requirement of the gesture API is to be able to specify a space as
a constraint to which gestures will be received.  To satisfy portability
requirements and work towards future-proofing the API, I am looking for
your input on the best approach to specifying such constraints
programmatically.  Keep in mind that although we're using X11 on
2-dimensional displays right now, there is a very real possibility of
supporting non-X displays (like on portable devices using EGL) and
3-dimensional interaction like with a glove, Wiimote, or ultrasound
detector.

The current GEIS implementation uses an enumeration (with a single
value, meaning X11) indicating the type of input constraint and a
variant record containing type-specific data (the X11 window id).  Surely
there is a better way.
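For concreteness, the enum-plus-variant-record approach above can be sketched
in C roughly as follows.  The names and types here are illustrative only, not
the actual GEIS declarations:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical constraint-type enumeration; X11 is currently the only value. */
typedef enum {
    GEIS_CONSTRAINT_X11
} geis_constraint_type;

/* Hypothetical variant record: the tag selects which union member is valid. */
typedef struct {
    geis_constraint_type type;
    union {
        uint32_t x11_window_id;  /* valid only when type == GEIS_CONSTRAINT_X11 */
    } data;
} geis_input_constraint;
```

The obvious drawback is that every new display technology (EGL surfaces,
3-dimensional volumes) means adding both an enumerator and a union member, and
every consumer of the struct has to switch on the tag.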

Your assignment is to identify possible alternatives and suggest
them here.


-- 
Stephen M. Webb <stephen.webb@xxxxxxxxxxxxx>
Canonical Ltd.



