In the print world, paper choice is critical to understanding what kinds of design can be supported. (For example, try putting an embossed foil stamp on vellum.) On desktops, designers know they have a pristine canvas (the monitor) to work on, one that is typically viewed in well-lit conditions. But monitors render color differently from one another, which means subtle details can be lost. Tablets bring their own hurdles.
Tablet screens are made of sturdy glass, which causes glare in bright light (such as outdoors) and can make screens difficult to read. Plus, with all that touching, fingerprints and smudges affect how the screen looks. Dark backgrounds drastically amplify both the glare and the fingerprints, making the screen hard to read; a dark screen can even turn into a mirror, reflecting back the user’s face more often than the text.
Additionally, most people have difficulty reading small white text against a dark background. Many designers don’t take into account the eye strain, or the jarring adjustments the human eye makes, when it transitions between these contrast patterns. Mark Boulton explains what to do when designing large amounts of copy to be readable:
…Make sure you increase the leading, tracking and decrease your font-weight. This applies to all widths of Measure. White text on a black background is a higher contrast to the opposite, so the letterforms need to be wider apart, lighter in weight and have more space between the lines.
From the trackball to the optical mouse, the accuracy of pointing devices has continued to grow, making complex user interfaces easier to navigate. With tablets, our (sometimes fat) fingers come into play and eliminate the level of precision we’ve come to love.
The tap area of an average finger is 45 to 57 pixels in any direction. This should be the minimum dimension for any target area, and it’s larger than even Apple’s Human Interface Guidelines suggest. And that’s just fingers; thumbs are even bigger, and we actually use them more than fingers. The average thumb is about 1 inch across, which equates to 72 pixels for the target area.
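The finger and thumb minimums above can be expressed as a simple validation check. This is a minimal sketch; the names and the exact thresholds (45px for fingers, 72px for thumbs) are taken from the figures quoted above and are not part of any platform API.

```typescript
// Tap-target size check, using the pixel ranges discussed above.
// FINGER_MIN_PX is the lower bound of the quoted 45-57px finger range;
// THUMB_MIN_PX is ~1 inch at 72ppi. Both are illustrative constants.

interface TapTarget {
  width: number;  // pixels
  height: number; // pixels
}

const FINGER_MIN_PX = 45;
const THUMB_MIN_PX = 72;

// True when the target meets the given minimum in both dimensions.
function meetsMinimum(target: TapTarget, minPx: number): boolean {
  return target.width >= minPx && target.height >= minPx;
}
```

A design review script could run this over every interactive element’s rendered bounds and flag targets that fall below the finger minimum, or below the thumb minimum for bottom-of-screen controls.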
On mobile phones we mostly use one hand, so it’s critical to keep navigation near the bottom of the screen, within the thumb’s radius of motion. Tablets have larger screens, so people tend to hold the device in one hand and tap with a forefinger, or grip it by both sides.
Some complications arise, though.
Josh Clark explains the ergonomics within the reach of the user’s thumbs:
- For Apple iOS, navigation items should be placed along the bottom rather than the top of the screen.
- For Android, the inverse is true: navigation should be on top, because these devices already have control buttons along the bottom. However, there is a slight problem.
- With the main navigation along the top, the user’s hand and arm will obscure content as it changes. This is the lesser of two evils because stacking control buttons in this case is worse due to “fat finger” problems (i.e., tap targets being too small).
- Placing navigation along the sides of the screen is likely to cause similar problems, depending on whether the person is left- or right-handed.
- For tablets, the upper corners and sides are the ideal location for navigation.
The sheer number of device screen sizes, operating systems, and hardware considerations complicate this to the point that there is no general rule of thumb (pardon the pun) for a single layout solution used for both mobile devices and tablets. Mobile and tablet layouts should be tackled independently so the usability is optimized per device type, and arguably even per operating system.
Creating the visual layer
If your visual aesthetic takes a minimalist approach, you’ll be considered part of the “flat” camp. If your design mimics real-world objects, you’re in the “skeuomorphic” (or “realism”) camp.
It’s the Microsoft vs. Apple debate all over again, and both approaches are hotly contested. There is no right answer beyond: it depends. The problem with faux-realistic design is the “uncanny valley” it creates. The problem with flat design is the lack of visual cues indicating that an element is tappable.
Sacha Greif’s essay covers the topic well and reveals a marriage of the two from an unlikely source: Google. Google’s design was never associated with quality, but its product redesigns prove that’s no longer the case. Google has created a visual layer that works because of strong typography and efficient, simple layouts with tasteful gradients and shadows, taking the best of both the flat and skeuomorphic styles with none of their downsides.
The visual layer is an expression of the designer or company’s brand and everyone wants to put their best foot forward. No matter how gorgeous your design may be, if it’s not usable, it won’t last.
Going beyond visuals
Multi-touch gestures open Pandora’s box, forcing the design to accommodate functions that go beyond single-point touch. LukeW’s Touch Gesture Reference Guide shows us seven pages of input methods: one-handed pinch-to-zoom, two-handed tap-and-drag, two-finger scrolling, device shake, flick, swipe, and more.
But how can the design illustrate functionality that is invisible to the user?
The core of optimized touch design is delivering feedback in response to the user’s input: screen content should respond to the user’s actions, validating whether the interaction is possible. Users often experiment with gestures that “could work,” so if the feedback matches their expectations, the interface wins and the user has been educated. Response time is critical; feedback should arrive within milliseconds of the gesture.
Other techniques include:
Including teaser visuals lets users know there’s more to what they’re looking at. For example, if you’re presenting an image within a photo gallery, show a small portion of the previous and next images adjacent to the primary photo so the user has visual cues to access more content and stays engaged.
To help users find more content or perform the correct gesture, simple animations can do the trick. Just look at the iPhone: to unlock, the user slides the switch from left to right. The arrow points in the direction of motion, the text within the channel explains what to do, and the subtle shimmer that animates across it is a great visual indicator supporting the entire control.
Also notice that the text dims as the switch slides, giving users feedback that they’re doing it right.
Sometimes it’s necessary to give users helpful information at a critical moment in their interaction. These contextual help bubbles appear at just the right time so engagement isn’t disrupted. After the user acknowledges the message by tapping or performing the correct gesture, that tip should no longer be shown, since the user has now been educated.
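The show-once behavior described above reduces to a small piece of state: a tip is shown until the user acknowledges it, then suppressed. This is a minimal sketch with invented names; a real app would persist the acknowledged set across sessions rather than keep it in memory.

```typescript
// Show-once contextual tips: each tip is identified by a string id,
// shown until acknowledged, and suppressed afterward.

class TipManager {
  // Ids of tips the user has already acknowledged. In-memory only
  // here; persistence is out of scope for this sketch.
  private acknowledged = new Set<string>();

  // True while the tip should still be displayed.
  shouldShow(tipId: string): boolean {
    return !this.acknowledged.has(tipId);
  }

  // Call when the user taps the tip or performs the gesture it teaches.
  acknowledge(tipId: string): void {
    this.acknowledged.add(tipId);
  }
}
```

The key design choice is that acknowledgment is tied to the user’s action (a tap or the correct gesture), not a timer, so the tip disappears exactly when it has done its job.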
Splash screens kill kittens
Avoid splash screens with instructions on how to use the interface; everyone looks for the skip button. If the initial screen requires that much explanation, it’s a strong indicator the layout and/or design isn’t effective. In this app, the splash screen tries to explain how to interact with a magazine (something 99% of people already know how to do). With arrows pointing in every direction, the viewer doesn’t know where to start or which direction to go next. Epic fail.
Touch design and gestures are an exciting new frontier. Touch interfaces rely on touch targets being discoverable, clear, and easy to follow. The visual design layer needs to support usability, since the two are so tightly bound in this space. The more gestures align with natural, physical motions, the more likely people’s initial guesses will work.