Re: GEIS input restriction requirements
On 09/13/2010 02:49 PM, Stephen M. Webb wrote:
> Hey multitouch folks
>
> I'm looking for input on input constraints for the new GEIS
> specification.
>
> OK, I'll start by describing what an input constraint is (and please, if
> you have a better name speak up now). A gesture input device by itself
> just gives a stream of position data. The various levels of the gesture
> stack interpret these data and possibly project them into a display
> space, such as a screen or a sub-region of the screen such as a window.
> Given future technology, this may possibly be a 3-dimensional region of
> space or something I can't even imagine.
I think this is too abstract, in the sense that it is not looking straight at
the problem.
> When an application subscribes to receive gestures, it rarely cares
> about gestures from all windows, screens, or spaces. Generally, it
> will only want to receive gestures that have been projected into the
> feedback space (that is, display, window, volume, hacienda) over which
> it has control.
>
> So, a requirement of the gesture API is to be able to specify a space as
> a constraint within which gestures will be received. To satisfy portability
> requirements and work towards future-proofing the API, I am looking for
> your input on the best approach to specifying such constraints
> programmatically. Keep in mind that although we're using X11 on
> 2-dimensional displays right now, there is a very real possibility of
> supporting non-X displays (like on portable devices using EGL) and
> 3-dimensional interaction like with a glove, Wiimote, or ultrasound
> detector.
Specifying a value space of interest to an application is one thing, but what
about the interplay between what one application wants and what another wants?
Who gets the events? Being able to specify enough in the API that conflicts
like "app A wants gestures with one and two fingers and app B wants gestures
with two and three fingers" can be resolved should be of high importance in
the design, IMO.
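
To make the overlap concrete, here is a minimal sketch in C; the
subscription type, its fields, and the window IDs are invented for
illustration and are not part of any existing GEIS API:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical subscription record -- not a real GEIS type. */
typedef struct subscription {
    const char *app;         /* subscribing application               */
    uint32_t    window;      /* region of interest (an X11 window ID) */
    int         min_touches; /* smallest touch count wanted           */
    int         max_touches; /* largest touch count wanted            */
} subscription;

/* Two subscriptions collide when they watch the same region and
 * their touch-count ranges intersect. */
static int collides(const subscription *a, const subscription *b)
{
    return a->window == b->window
        && a->min_touches <= b->max_touches
        && b->min_touches <= a->max_touches;
}

int main(void)
{
    /* App A wants one- and two-finger gestures, app B wants two- and
     * three-finger gestures, on the same window: a two-finger gesture
     * matches both subscriptions, so who receives it?                */
    subscription a = { "app-a", 0x2a00001, 1, 2 };
    subscription b = { "app-b", 0x2a00001, 2, 3 };

    printf("conflict: %s\n", collides(&a, &b) ? "yes" : "no");
    return 0;
}

Detecting the overlap is the easy part; the API also has to say, per
event, whether A, B, or both see the two-finger gesture.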
> The current GEIS implementation uses an enumeration (with a single
> value, meaning X11) indicating the type of input constraint and a
> variant record containing type-specific data (the X11 window ID). Surely
> there is a better way.
Most likely. I believe the right abstractions here depend on the details of
actually running useful gestures on a daily basis. Hence, this work really
depends on, for instance, ginn being implemented and tested.
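
For concreteness, my reading of the quoted scheme is roughly the
following; the names are invented for illustration, not the actual
GEIS declarations:

#include <stdint.h>

/* Sketch of the current approach: an enumeration tags the kind of
 * constraint, and a variant record carries the type-specific data. */
typedef enum {
    CONSTRAINT_X11              /* currently the only value */
} constraint_type;

typedef struct {
    constraint_type type;       /* selects the active union member   */
    union {
        uint32_t x11_window;    /* valid when type == CONSTRAINT_X11 */
        /* every future backend (an EGL surface, a 3-dimensional
         * volume, ...) would add another member here                */
    } u;
} input_constraint;

/* Example: constrain gesture delivery to one X11 window. */
static const input_constraint example = { CONSTRAINT_X11, { 0x2a00001 } };

Each new kind of display surface grows both the enumeration and the
union and touches every client that switches on the type, which is
presumably why a better way is wanted.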
Cheers,
Henrik