On 08/10/10 16:10, Chase Douglas wrote:
On Fri, 2010-10-08 at 13:37 +0200, Henrik Rydberg wrote:
On 10/05/2010 10:22 PM, Chase Douglas wrote:
[...]
The reason this is useful is that it lowers latency when there's both
system-wide and application-specific gesture recognition to be
performed. It requires trusting client applications not to handle the
events as though they owned them, but X trusts client applications
with lots of similar things today without issue.
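
To be concrete about the kind of trust involved, here is a minimal
sketch in plain C, with entirely made-up event kinds (not any actual
XInput types): the client is free to draw provisional feedback from
tentative events, but it only commits an action once it actually owns
the touch.

    #include <stdio.h>

    /* Hypothetical event kinds for illustration only; these are not
     * part of any existing XInput API. "Tentative" events arrive
     * before gesture ownership is resolved; "owned" and "rejected"
     * arrive once it is. */
    enum event_kind { EV_TENTATIVE_TOUCH, EV_OWNED_TOUCH, EV_REJECTED };

    struct touch_event {
        enum event_kind kind;
        int touch_id;
        double x, y;
    };

    /* A client may draw provisional feedback from tentative events,
     * but it must not commit an action until it owns the touch. */
    static void handle_event(const struct touch_event *ev)
    {
        switch (ev->kind) {
        case EV_TENTATIVE_TOUCH:
            printf("touch %d tentative at (%.1f, %.1f): feedback only\n",
                   ev->touch_id, ev->x, ev->y);
            break;
        case EV_OWNED_TOUCH:
            printf("touch %d owned: safe to commit the action\n",
                   ev->touch_id);
            break;
        case EV_REJECTED:
            printf("touch %d went elsewhere: undo any feedback\n",
                   ev->touch_id);
            break;
        }
    }

    int main(void)
    {
        struct touch_event stream[] = {
            { EV_TENTATIVE_TOUCH, 1, 10.0, 20.0 },
            { EV_TENTATIVE_TOUCH, 1, 14.0, 22.0 },
            { EV_OWNED_TOUCH,     1, 14.0, 22.0 },
        };
        size_t i;

        for (i = 0; i < sizeof stream / sizeof stream[0]; i++)
            handle_event(&stream[i]);
        return 0;
    }

The EV_REJECTED arm is what the trust requirement boils down to: a
client that ignores it would effectively act on events it never owned.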
It seems to me that the idea of tentative events is not so much about latency as
it is about implementing user feedback which helps resolve ambiguities.
It's the latency to the user feedback that's important. In my mind,
these two concepts are one and the same.
Let me give two examples on the latency front.
Say Sarah attempts a four-touch gesture, but only three fingers
register. We'd want to show a three-touch visual effect instantly, so
she can avoid launching into a compound gesture on the wrong footing,
so to speak.
Or say the app supports rotation and Fred places two fingers and starts
moving. We'd want to show, within 100 ms, that we're detecting some sort
of rotation. We'd show the extent of the rotation as it approaches a
critical threshold, at which point the 90-degree flip would occur
(sketched below).
In both cases, feedback is required in under 100 ms, before a real
gesture has kicked off.
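
For the rotation case, here is a rough sketch of the kind of tracking a
recognizer would need; the angle math is the real point, and everything
else (names, coordinates, thresholds) is made up for illustration. It
tracks the angle of the line between the two touch points, reports
partial progress on every frame as the feedback, and only commits the
flip once the threshold is crossed.

    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* Angle (in degrees) of the line between two touch points. */
    static double touch_angle(double x1, double y1, double x2, double y2)
    {
        return atan2(y2 - y1, x2 - x1) * 180.0 / M_PI;
    }

    int main(void)
    {
        const double flip_threshold = 90.0;  /* the "critical threshold" */
        /* Fred's initial two-finger placement: one finger at the
         * origin, the other 100 units to the right. */
        double start = touch_angle(0.0, 0.0, 100.0, 0.0);
        /* Simulated frames of the second finger sweeping around the
         * first. */
        double frames[][2] = {
            { 98.0, 17.0 }, { 87.0, 50.0 }, { 50.0, 87.0 }, { 0.0, 100.0 },
        };
        size_t i;

        for (i = 0; i < sizeof frames / sizeof frames[0]; i++) {
            double delta = touch_angle(0.0, 0.0,
                                       frames[i][0], frames[i][1]) - start;

            if (delta >= flip_threshold) {
                printf("rotated %.0f degrees: commit the flip\n", delta);
                break;
            }
            /* This is the sub-100ms feedback: how far along the
             * rotation is, as a fraction of the threshold. */
            printf("rotated %.0f degrees: show %.0f%% of the flip\n",
                   delta, 100.0 * delta / flip_threshold);
        }
        return 0;
    }

The partial-progress printouts stand in for exactly the feedback that
needs to arrive within 100 ms, well before the flip itself is committed.
(Compile with -lm for the math library.)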