ubuntu-phone team mailing list archive

Re: [Design/UX/Apps] Ubuntu behaviour factors

On 17.03.2013 09:05, Robert Bruce Park wrote:
> On 13-03-16 01:46 AM, Michał Sawicz wrote:
>> Case in point - notifications - default behaviour for the desktop
>> is to have them non-interactive, click-through. That won't be good
>> enough for a phone, where that bubble could take a significant
>> portion of the screen, and there's no way to interact with what's
>> behind them. It should be possible to dismiss them. On a tablet,
>> however, this might be dependent on whether there's a pointer
>> device.
> 
> I think this is actually a perfect case *in favor* of my argument,
> though. Consider this:
> 
> The notifications as they currently exist on the desktop become
> transparent when moused over, and pass clicks through them. You can
> take that exact same widget and add a little bit of code that says
> "dismiss myself when a finger swipes over me in a certain direction".
> 
> If you run that widget on a normal phone, the user doesn't have a
> mouse, so they never see the click-through effect because they are
> physically incapable of performing a mouse click -- even though the
> code to pass mouse clicks through is still present. Phone users then
> see the notifications and can dismiss them with a finger swipe.
> 
> On a tablet, you get the same thing -- the notification can be swiped
> away with a finger, or clicked through with a mouse. If the tablet
> user happens to have a Bluetooth mouse, they can mouse over the
> notifications to dim them or click through them, or they can put their
> finger on the touchscreen to swipe them away.

Yeah, if we can pull it off like that (and UX design agrees), I'm all for it.
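
Something roughly along these lines is what I imagine - a quick sketch
in plain Qt 5 widgets, purely for illustration (the real component would
live in the SDK as QML, and I'm assuming the platform hands us actual
touch events instead of synthesised mouse clicks):

    // Illustrative only: one notification bubble that is click-through
    // for mouse users and swipe-to-dismiss for touch users, with no
    // form-factor checks anywhere.
    #include <QtWidgets>

    class NotificationBubble : public QLabel
    {
    public:
        explicit NotificationBubble(const QString &text,
                                    QWidget *parent = nullptr)
            : QLabel(text, parent)
        {
            // Mouse clicks fall through to whatever sits underneath.
            setAttribute(Qt::WA_TransparentForMouseEvents, true);
            // Touch events are still delivered to the bubble itself.
            setAttribute(Qt::WA_AcceptTouchEvents, true);
        }

    protected:
        bool event(QEvent *e) override
        {
            switch (e->type()) {
            case QEvent::TouchBegin:
                e->accept();        // keep receiving the touch sequence
                return true;
            case QEvent::TouchEnd: {
                const auto *touch = static_cast<QTouchEvent *>(e);
                if (!touch->touchPoints().isEmpty()) {
                    const auto &p = touch->touchPoints().first();
                    // Treat a sideways drag of more than ~60px as "dismiss".
                    if (qAbs(p.pos().x() - p.startPos().x()) > 60)
                        deleteLater();
                }
                return true;
            }
            default:
                return QLabel::event(e);
            }
        }
    };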

> You are probably right that there are going to be corner-cases, but
> the larger point I'm trying to make is that there are probably many
> fewer corner cases than you actually expect. Just write the widgets to
> respond to different input devices in the correct way, and everything
> else will fall into place.

That very well may be - I do hope there will only be a minimal set of
places where we differentiate; I just want those few not to rely solely
on the abstract / arbitrary notion of a form factor when the actual
cause is a lot simpler.
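
I.e. where we do need to branch, I'd rather we ask about the actual
capability - something like this rough Qt 5 sketch (the helper name is
mine, just to illustrate the idea):

    // Illustrative: branch on the input devices actually present,
    // not on a named form factor like "phone" or "tablet".
    #include <QTouchDevice>

    static bool hasTouchScreen()
    {
        const QList<const QTouchDevice *> devices = QTouchDevice::devices();
        for (const QTouchDevice *device : devices) {
            if (device->type() == QTouchDevice::TouchScreen)
                return true;
        }
        return false;
    }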

> I'll give an example of something that currently fails to do this --
> if we forget about the Touch image for a second and remember back to
> when we were running the desktop images on the Nexus tablets, there
> was a hilariously terrible situation: if you tried to scroll a window
> with touch gestures, Ubuntu would interpret this as a mouse click+drag
> and perform a text selection rather than a scroll action. That is a
> case of widgets not being written to understand the difference between
> a touchscreen and a mouse, instead mapping touchscreen input onto a
> set of assumptions about how a mouse is supposed to be used. Don't do
> that. All of the widgets in the SDK should fully understand what a
> touchscreen is and what a mouse is, and should be able to Do What I
> Mean regardless of what input device I'm actually using. That means
> they know to select text on a mouse click+drag and to scroll on a
> touch swipe. The same widget can accept both of those inputs, and then
> you don't need any special-casing code to figure out whether you're on
> a phone or a TV or anything else -- you simply do the right thing in
> all cases.

+1! You did bring up an important thing here - touch != pointer.
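
For that particular scroll-vs-select case, by the way, the toolkit can
already express the distinction - a rough sketch in plain Qt 5 widgets,
just to show the shape of it (the SDK components would do the
equivalent in QML):

    // Illustrative: the same text widget scrolls on a touch swipe while
    // a mouse drag keeps its text-selection meaning, because only the
    // touch gesture is grabbed for kinetic scrolling.
    #include <QtWidgets>

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);

        QTextEdit editor;
        editor.setPlainText(QString("some scrollable text\n").repeated(200));

        // Touch swipes over the viewport become scroll gestures; mouse
        // input is left alone and still selects text.
        QScroller::grabGesture(editor.viewport(), QScroller::TouchGesture);

        editor.show();
        return app.exec();
    }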

>>> Again, don't do things "only when there's a means". Provide all
>>> input options simultaneously and then the user will simply choose
>>> whichever one is the easiest to use given the input devices that
>>> they have access to. I think this can be done in a very seamless
>>> and transparent way -- e.g., a button will look identical whether
>>> you are expecting it to be clicked on, touch-tapped on,
>>> keyboard-navigated to, or TV remote selected... and regardless of
>>> which input device is used to activate the button, the button
>>> activation will be identical anyway.
> 
>> Sure, that probably is one example where you can have that
>> implemented and it simply won't be used. But the notion of focus is
>> limited to text entry fields.
> 
> That's a shame, because there was a day when that wasn't true. That
> made keyboard-only use of a desktop pretty easy: you could simply tab
> between the different widgets and activate buttons and other controls
> without having to know all kinds of cryptic shortcuts.

Wait, I never said we won't do keyboard navigation :D. This is still a
must, for accessibility purposes if nothing else.
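
And the nice thing is the widgets barely have to care - e.g. a rough
Qt 5 sketch (illustrative only) where the same buttons answer to a
mouse click, a touch tap, or Tab from the keyboard with no
device-specific code:

    // Illustrative: three buttons reachable with Tab and activated with
    // Space, mouse clicks, or touch taps -- same code path for all.
    #include <QtWidgets>
    #include <QDebug>

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);

        QWidget window;
        auto *layout = new QVBoxLayout(&window);

        const QStringList labels = { "Reply", "Archive", "Delete" };
        for (const QString &label : labels) {
            auto *button = new QPushButton(label, &window);
            button->setFocusPolicy(Qt::StrongFocus);  // reachable via Tab
            QObject::connect(button, &QPushButton::clicked,
                             [label] { qDebug() << label << "activated"; });
            layout->addWidget(button);
        }

        window.show();
        return app.exec();
    }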

> Thanks for reading my rant, apologies for the length ;-)

Don't apologise :) - that was kind of the point of this thread :)

Thanks,
-- 
Michał Sawicz <michal.sawicz@xxxxxxxxxxxxx>
Canonical Services Ltd.
