unity-design team mailing list archive
Message #02009
Re: Improving single click for all GTK+ apps to make it suitable as a standard
On Sat, 2010-05-15 at 07:57 -0400, Freddie Unpenstein wrote:
> On the toolkit side, what I would like to see is more cooking of the
> input events, similar to how terminals and X itself allow access to
> everything from raw keystrokes, through processed/mapped input events,
> down to the final activation of a widget.
>
> In this regard, how about a consistent mechanism across all GTK widgets
> to intelligently process keyboard and mouse events, kind of like the
> three-stage cooking that goes on in commandline terminals:
>
> 1- the existing signals for the original raw mouse and keyboard
> events. (...)
> 2- recognition of the shift/click/drag operation that the user may be
> performing, (...)
> 3- looking up the emissions on stage two in a list of symbolic
> mappings, and re-emitting the resultant "action" as a final "fully
> cooked input" signal. (...)
> A widget, instead of implementing its own keyboard/mouse mapping code,
> could in most cases simply register a set of actions and their
> corresponding default mappings in the third stage processing, and let
> the default (or theme) mappings match those actions to their input
> sequences.
I would love to see a system-wide gesture-to-action mapping, and you seem
to be talking about roughly the same thing.
Gestures would be sequences of stuff that the user does using input
devices (hardware controllers) and actions would be what the software
does in response.
Gestures could include simple clicks and keyboard input, but also
multi-touch gestures and even Emacs-style sequences.
Covering all GTK+ apps would be a nice start, but a cross-toolkit
solution would be even better.
I wrote more about that back in 2007:
http://thorwil.wordpress.com/2007/04/10/event-to-action-mapping-1/
--
Thorsten Wilms
thorwil's design for free software:
http://thorwil.wordpress.com/