Designing for a Touch World
In this series I plan to explore some of the issues of touch UIs; this is as much a documentation of my learning experiences as anything else. The content will be based primarily on observation and conjecture, so while I find the information helpful, it is up to you to verify its suitability to your situation.
If you have contradictory viewpoints, additional ideas/information, or real ‘study results’, please chime in in the user content section – the more we talk about these things, the better our UIs will get.
Ok, on to the first installment…
Smartphone touch
When you think of modern smartphones you probably expect a touch screen. It seems that touch would be an easy medium to design for; after all, your user can see and interact directly with the content.
As it turns out this isn’t actually the case…
Fingers are big, fat, clumsy pointing devices, and the point of contact between your finger and the screen is not at all easy to determine. Worse, once you touch the screen you completely obscure whatever it was you were trying to interact with. People tend to judder and bounce (especially on the train or tram) and it is very easy to end up with accidental touches. Add to this the seriously limited screen space, the potential for unwanted palm contact, and the extremely limited range of motion available (try using a mobile device one-handed) and it begins to seem that creating a good experience is almost impossible.
Thankfully this extreme is also not the case.
I think mostly because they are obscuring the screen (or, more importantly, desperately trying not to obscure the screen), most people seem to touch the screen slightly below their point of attention. Android devices seem to struggle a lot with taking this into account; Apple’s iPhone, on the other hand, excels at it. At times the iPhone seems almost psychic in its ability to correctly identify the part you were trying to touch. So this variation is something to take into account when developing UIs on these mobile platforms… on Android, aim to have a bigger touch target and require less precision.
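To make the “bigger touch target” idea concrete, here is a minimal Kotlin sketch using Android’s TouchDelegate to extend a view’s hit area beyond its visible bounds. The expandTouchTarget helper and the extraPx padding are my own illustration, not part of any platform API.

```kotlin
import android.graphics.Rect
import android.view.TouchDelegate
import android.view.View

// Hypothetical helper: grow a view's touch target beyond its visible bounds.
// extraPx is how many extra pixels of hit area to add on every side.
fun expandTouchTarget(child: View, extraPx: Int) {
    val parent = child.parent as? View ?: return
    parent.post {
        val hitRect = Rect()
        child.getHitRect(hitRect)          // child's bounds, relative to its parent
        hitRect.inset(-extraPx, -extraPx)  // negative inset = a larger rectangle
        parent.touchDelegate = TouchDelegate(hitRect, child)
    }
}
```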
Side Note: In some ways it is a shame that all the excitement over capacitive touch screens and finger input has completely sidelined the stylus… sure, the stylus isn’t ideal for everything, but for some tasks it beats the hell out of fingers and has the added benefit of mostly leaving the screen visible. I for one am hoping to see the stylus make a return at some point in the near future.
The availability of touch
There are only a limited number of touch gestures available. There are more with multiple fingers, obviously, but the more fingers you are trying to interpret, the greater the chance you will get incorrect inputs.
What types of touch input can you reasonably expect from a user who is using one finger while holding the phone in their hand?
Tap
This one is pretty obvious to most users: touch and release without movement (or at least very little movement – no one can hold perfectly still). Contact times vary depending on the user – some people press firmly and hope for some feedback, others stab at it quickly and with force hoping for a better result, and some touch it as if it is fragile. Your UI probably shouldn’t discriminate between these actions.
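As a rough illustration of a tap handler that doesn’t care how long the contact lasted, here is a small Kotlin sketch. TapDetector, touchSlopPx, and the onTap callback are hypothetical names; on Android the slop value would usually come from ViewConfiguration.

```kotlin
import android.view.MotionEvent
import kotlin.math.abs

// Sketch of a tap detector that deliberately ignores contact time: any press-and-release
// with little movement counts, whether it was a quick stab or a slow, firm press.
// touchSlopPx would typically come from ViewConfiguration.get(context).scaledTouchSlop.
class TapDetector(private val touchSlopPx: Int, private val onTap: () -> Unit) {
    private var downX = 0f
    private var downY = 0f

    fun onTouchEvent(event: MotionEvent) {
        when (event.actionMasked) {
            MotionEvent.ACTION_DOWN -> { downX = event.x; downY = event.y }
            MotionEvent.ACTION_UP ->
                // No check on how long the finger stayed down – only on how far it moved.
                if (abs(event.x - downX) <= touchSlopPx && abs(event.y - downY) <= touchSlopPx) onTap()
        }
    }
}
```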
Swipe
Touch and drag the finger across the screen in a more or less constant line. Also pretty much an expectation.
The easiest swipes are, in order: a sweep through an approximate 1/4 arc (for right-handers, from mid-right to top-left or vice versa), top-to-bottom, left-to-right, right-to-left, and bottom-to-top.
Other swipes such as on the diagonal are possible but markedly harder to perform.
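If you only need the four straight swipes, classifying the direction can be as simple as comparing the horizontal and vertical distance travelled between touch-down and touch-up. Here is a small Kotlin sketch; classifySwipe and the 100px minimum distance are assumptions of mine.

```kotlin
import kotlin.math.abs

// Hypothetical classifier for the four straight swipes, given the total finger
// movement between touch-down and touch-up.
enum class SwipeDirection { LEFT, RIGHT, UP, DOWN, NONE }

fun classifySwipe(dx: Float, dy: Float, minDistancePx: Float = 100f): SwipeDirection = when {
    abs(dx) < minDistancePx && abs(dy) < minDistancePx -> SwipeDirection.NONE   // too short: probably a tap
    abs(dx) >= abs(dy) -> if (dx > 0) SwipeDirection.RIGHT else SwipeDirection.LEFT
    else -> if (dy > 0) SwipeDirection.DOWN else SwipeDirection.UP              // screen y grows downwards
}
```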
Drag
Touch and drag the finger across the screen in a variable line… potentially with many changes in direction. Obviousness depends on the context but with appropriate cues most people will get it.
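Here is a minimal Kotlin sketch of a drag handler that simply moves a view with the finger on every move event; makeDraggable is a hypothetical helper, and it assumes the view’s parent allows it to be translated freely.

```kotlin
import android.annotation.SuppressLint
import android.view.MotionEvent
import android.view.View

// Minimal drag sketch: the view follows the finger on every ACTION_MOVE, so changes
// of direction are picked up continuously rather than judged at the end of the gesture.
@SuppressLint("ClickableViewAccessibility")
fun makeDraggable(view: View) {
    var lastX = 0f
    var lastY = 0f
    view.setOnTouchListener { v, event ->
        when (event.actionMasked) {
            MotionEvent.ACTION_DOWN -> { lastX = event.rawX; lastY = event.rawY; true }
            MotionEvent.ACTION_MOVE -> {
                v.translationX += event.rawX - lastX
                v.translationY += event.rawY - lastY
                lastX = event.rawX
                lastY = event.rawY
                true
            }
            else -> false
        }
    }
}
```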
Flick
The ‘flick’ is also fairly intuitive; however, it seems that when a user is about to flick the screen they subtly change their grip on the device, leading to a difference between the Flick and the Swipe.
Ease of flicks, in order: top-to-bottom, bottom-to-top, left-to-right, right-to-left (harder). Other flicks are possible but non-obvious and require much more dexterity and conscious attention.
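One way to separate a flick from a swipe in code is to look at how fast the finger was moving when it left the screen. Here is a rough Kotlin sketch using Android’s VelocityTracker; the FlickDetector wrapper and the 2000 px/s threshold are assumptions of mine.

```kotlin
import android.view.MotionEvent
import android.view.VelocityTracker
import kotlin.math.abs

// Sketch: use the release velocity to tell a flick apart from a slower swipe.
// The 2000 px/s threshold is an assumption; the platform's usual source is
// ViewConfiguration.get(context).scaledMinimumFlingVelocity.
class FlickDetector(private val onFlick: (vx: Float, vy: Float) -> Unit) {
    private var tracker: VelocityTracker? = null

    fun onTouchEvent(event: MotionEvent) {
        when (event.actionMasked) {
            MotionEvent.ACTION_DOWN -> tracker = VelocityTracker.obtain().also { it.addMovement(event) }
            MotionEvent.ACTION_MOVE -> tracker?.addMovement(event)
            MotionEvent.ACTION_UP -> {
                tracker?.let {
                    it.addMovement(event)
                    it.computeCurrentVelocity(1000)          // pixels per second
                    if (abs(it.xVelocity) > 2000f || abs(it.yVelocity) > 2000f) {
                        onFlick(it.xVelocity, it.yVelocity)  // fast release: treat as a flick
                    }
                    it.recycle()
                }
                tracker = null
            }
            MotionEvent.ACTION_CANCEL -> { tracker?.recycle(); tracker = null }
        }
    }
}
```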
Long Press
Less obvious, but once learned it becomes second nature: press, hold, and release without moving the finger to get a long press. Expect to have to explain to users how this one works if you are overloading the Tap gesture on the same UI element.
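For what it’s worth, here is a small Kotlin sketch of Tap and Long Press overloaded on the same element using Android’s GestureDetector; buildTapAndHoldDetector and its callbacks are hypothetical names for this example.

```kotlin
import android.content.Context
import android.view.GestureDetector
import android.view.MotionEvent

// Sketch: Tap and Long Press overloaded on the same element with one platform GestureDetector.
fun buildTapAndHoldDetector(context: Context, onTap: () -> Unit, onLongHold: () -> Unit): GestureDetector =
    GestureDetector(context, object : GestureDetector.SimpleOnGestureListener() {
        override fun onSingleTapUp(e: MotionEvent): Boolean { onTap(); return true }
        override fun onLongPress(e: MotionEvent) { onLongHold() }   // fires after the system long-press timeout
    })

// Usage idea: feed it events from your view,
// e.g. view.setOnTouchListener { _, ev -> detector.onTouchEvent(ev) }
```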
Circulate
If you figure that most people holding a device one-handed will be interacting with it using their thumb, another relatively easy gesture is to touch and circulate your finger as if rotating around a clock face. This one doesn’t see much use and is less obvious than the above. For right-handers, anti-clockwise motion is marginally easier. Expect to have to explain to users how this one works.
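As far as I know there is no standard single-finger circulate gesture in the common mobile SDKs, so here is one possible Kotlin sketch that accumulates the angle of the touch point around a centre. CirculateDetector, the fixed centre, and the half-turn threshold are all assumptions for illustration.

```kotlin
import kotlin.math.PI
import kotlin.math.abs
import kotlin.math.atan2

// One possible way to recognise a 'circulate' gesture: accumulate the change in angle
// of the touch point around a centre.
class CirculateDetector(private val centerX: Float, private val centerY: Float) {
    private var lastAngle: Double? = null
    var totalRotation = 0.0   // radians; positive means clockwise on screen (y grows downwards)
        private set

    fun onMove(x: Float, y: Float) {
        val angle = atan2((y - centerY).toDouble(), (x - centerX).toDouble())
        lastAngle?.let { prev ->
            var delta = angle - prev
            // Keep each step within (-π, π] so crossing the ±π boundary isn't seen as a huge jump.
            if (delta > PI) delta -= 2 * PI
            if (delta < -PI) delta += 2 * PI
            totalRotation += delta
        }
        lastAngle = angle
    }

    fun hasCirculated(): Boolean = abs(totalRotation) >= PI   // at least half a turn
}
```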
Others
You might imagine other things such as double-tap, but with the screen-obscuring issue getting in the way of user feedback, such gestures are less reliable/obvious. You can of course use them, but think long and hard first.
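If you do decide to use double-tap, be aware of one cost: a plain tap can then only be confirmed after the double-tap timeout, which the user feels as a small delay. Here is a Kotlin sketch of that trade-off using Android’s GestureDetector; buildDoubleTapDetector and its callbacks are hypothetical.

```kotlin
import android.content.Context
import android.view.GestureDetector
import android.view.MotionEvent

// Sketch of the trade-off: with a double-tap listener in play, a single tap is only
// confirmed once the system is sure a second tap is not coming.
fun buildDoubleTapDetector(context: Context, onSingleTap: () -> Unit, onDouble: () -> Unit): GestureDetector =
    GestureDetector(context, object : GestureDetector.SimpleOnGestureListener() {
        override fun onSingleTapConfirmed(e: MotionEvent): Boolean { onSingleTap(); return true }
        override fun onDoubleTap(e: MotionEvent): Boolean { onDouble(); return true }
    })
```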
Next time I’ll look at multi-touch gestures, and then follow up with some information about using these gestures and what the user means when they touch your screen.
Computing is personal… touch even more so 🙂