
multi-touch-dev team mailing list archive

Re: Peter Hutterer's thoughts on MT in X


From our Windows 7 application experience, I would like to echo the point that sophisticated applications need their own gesture recognition capability and access to raw touch data.

For example, SpaceClaim Engineer (a multi-touch CAD app on Windows) recognizes dozens, perhaps hundreds, of unique gestures. It also uses combinations of pen and touch in innovative ways, which motivates it to want raw HID data from both touch and pen.

There won't be many of these applications, but the ones there are will really show off the advantages of multi-touch (and pen), and they should be supported, IMHO.
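[Editorial sketch, not from the thread: to illustrate why an app might want raw touch data rather than a shared gesture engine, here is a minimal custom recognizer for one gesture, a two-finger pinch, computed from nothing more than raw per-contact coordinates. All names are invented for illustration.]

```python
import math

def pinch_scale(prev_touches, curr_touches):
    """Given two successive frames of raw touch points, each a list of
    two (x, y) contacts, return the ratio of finger separation:
    > 1.0 means the fingers moved apart (zoom in), < 1.0 together."""
    def spread(touches):
        (x1, y1), (x2, y2) = touches
        return math.hypot(x2 - x1, y2 - y1)
    return spread(curr_touches) / spread(prev_touches)

# Two fingers moving apart: separation goes from 100 to 200 pixels.
print(pinch_scale([(100, 100), (200, 100)], [(50, 100), (250, 100)]))  # 2.0
```

A CAD app would layer many such recognizers (rotate, three-finger orbit, pen-plus-touch combinations) over the same raw stream, which is exactly the capability a single system-wide gesture engine cannot anticipate.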

James Carrington
N-trig




From: multi-touch-dev-bounces+james.carrington=n-trig.com@xxxxxxxxxxxxxxxxxxx [mailto:multi-touch-dev-bounces+james.carrington=n-trig.com@xxxxxxxxxxxxxxxxxxx] On Behalf Of Mark Shuttleworth
Sent: Wednesday, October 06, 2010 7:51 AM
To: Peter Hutterer
Cc: multi-touch-dev
Subject: Re: [Multi-touch-dev] Peter Hutterer's thoughts on MT in X

On 06/10/10 02:45, Peter Hutterer wrote:


This smacks of the old X inability to make a decision and commit to a direction.



[citation needed]

From http://www.faqs.org/docs/Linux-HOWTO/XWindow-Overview-HOWTO.html

"One of X's fundamental tenets is "we provide mechanism, but not policy". So, while the X server provides a way (mechanism) for window manipulation, it doesn't actually say how this manipulation behaves (policy)."
;-)



But I predict that sooner or later, we'll see a second and third engine emerge, maybe for an app that needs really specialised gestures.

I agree, and I can think of use cases that support that, for example CAD applications.

Where I disagree with your speculation is the idea that it would be a good thing to support multiple gesture engines for different toolkits. By definition, a toolkit is general-purpose and maps to a whole portfolio of apps; XUL, Qt, and Gtk are examples. Having different engines there means that whole sets of apps will behave differently depending on their toolkit. And it means that improvements, such as latency and signal processing work, are diluted across all the toolkits - bad for the user.

That's quite different to the idea that a CAD app might invest in some highly specialised and unique gesture processing. As soon as it's "competing toolkit engines", you're in a world of user pain.

We've seen this before, and since this is a new area we can avoid it. We ship too many spelling checkers already :)




We can be very clear about this: Ubuntu won't support multiple simultaneous competing gesture engines.



How does this work out if an application decides to interpret raw touch data into gestures by itself? That would be a competing gesture engine then.

If the use is defensible, encourage it; if it's not, patch it out or deprecate the app.

Mark
