Hardware Specs vs. User Experience

by Jakob Nielsen on November 5, 2012

Summary: Product quality has to be judged in the context of human tasks, and reviews should emphasize real use—not raw numbers.


I've been disappointed by several recent product reviews, which focused mainly on hardware specifications without assessing how those hardware elements affect the total user experience.

In many cases, the actual hardware is less important than the software and the quality of the hardware–software integration.

As an analogy, let's say that the only information you're given about two football players is how much weight they can bench press. If you had to bet on who was the better football player, you'd surely pick the guy who could lift the most. After all, it's better to be a strong athlete than a weak one. Going solely by bench press numbers, however, is actually a terrible way to pick quality players. You'd also want to know the players' speed, agility, and endurance. And, most important, you'd want to know how good they are at playing the game.

Hardware (bench-pressing strength) is less important than software (game-playing ability).

Let's look at two recent tablet launches: the Microsoft Surface and the Apple iPad Mini.

Microsoft Surface Screen Quality

Which has the better screen: Microsoft's new Surface RT tablet or Apple's 4th generation iPad? Most reviewers did a simple feature comparison of pixel counts and pixel density; Apple won on both of these specs.

Wired was the one honorable exception to this rule: its reviewers conducted a simple usability experiment comparing the actual user experience of the two displays. After using cardboard to block off logos and other identifying information, the reviewers let 9 users try both tablets in a within-subjects experiment. The results were:

  • 100% preferred reading a Web page on the iPad
  • 67% preferred watching a video on the MS Surface

Sadly, the reviewers didn't measure reading speeds or other human factors metrics. For sure, it's much more expensive to conduct quantitative usability studies. As an example, we tested 32 users in our study of tablet reading and still couldn't reliably conclude whether iPad or Kindle offered the fastest reading speed. Considering how many billions are at stake in the tablet race, it would be nice if somebody would invest the (comparatively small) amount needed to get true data on what works best for users in real use.
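To see why such small samples limit what we can claim, here's a back-of-the-envelope sketch (my own illustration, not something from the Wired review) that computes 95% Wilson score confidence intervals for the two preference proportions above, using only the Python standard library:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half_width, center + half_width

# Wired's 9-user preference data
print(wilson_interval(9, 9))  # reading on the iPad:   roughly (0.70, 1.00)
print(wilson_interval(6, 9))  # video on the Surface:  roughly (0.35, 0.88)
```

With only 9 users, the 67% who preferred the Surface for video is not clearly distinguishable from a 50/50 coin flip; the unanimous reading preference is more convincing, but still comes with a wide interval. That's exactly why better-funded quantitative studies would be worth the money.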

Interestingly, based on Wired's limited data, we can conclude that the iPad is best for reading, whereas the Surface is best for video.

Even though I'm downplaying hardware specs in this article, I must admit that the iPad's win for reading is exactly what I would have predicted given the specs. We know that higher pixel densities support faster and more pleasant reading. And that's what happened here.

That said, I wouldn't have predicted that Surface would be better for video. Given the available human-factors research on video quality, the higher-density display should have won. And yet the iPad lost this part of the study. Of course, the fact that surprising results happen is exactly why we conduct actual research. Microsoft's marketing claims that the Surface screen has superior color reproduction; if so, that could explain the study outcome.

Bottom line: specs like PPI (pixels per inch) and color fidelity must be judged in the context of real user tasks.

iPad Mini: Touch Usability

The iPad Mini has the same number of pixels as the iPad 1, but concentrated in a smaller 7.9-inch screen. Reviews have expended considerable word count discussing the implications of the Mini's 163 PPI vs. the iPad 4's 264 PPI.
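For readers who want to check the arithmetic, pixel density is simply the pixel diagonal divided by the physical diagonal. A quick sketch (my own illustration, using the published resolutions and nominal screen diagonals):

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch: pixel diagonal divided by physical diagonal."""
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(ppi(1024, 768, 7.9)))   # iPad Mini: ~162 (published spec: 163 PPI)
print(round(ppi(2048, 1536, 9.7)))  # iPad 4:    ~264 PPI
```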

I would certainly expect reading to be slower and less pleasant on the lower-density screen, just as theory predicts. (Though again, it would be nice to have real reading-speed data to go by.)

But other usability issues have been mostly overlooked in the reviews I've seen. The problem with pretending that the iPad Mini is simply a reduced iPad 1 is that touch-screen interactions become harder when the same user interface elements become physically smaller. Our fingers don't shrink and our movements don't become more precise just because the screen does.

iPad applications must be redesigned for the smaller screen or they'll be harder to use. It's that simple.

Similar issues arise in Web browsing: a design that works fine on a full-sized iPad might be difficult to use on a smaller screen if the presentation simply shrinks everything down.

Our usability studies of the Kindle Fire documented a host of problems associated with this Incredible Shrinking Man theory of user interface design. It failed for the Fire, and it will fail for the iPad Mini.

(While I didn't include The Incredible Shrinking Man in my list of top-10 movie bloopers, that film — and similar works like Honey, I Shrunk the Kids — is based on a fallacy: you can't get a viable biological creature by rescaling a living being. After all, an elephant is not a mouse that's been scaled to 6,300% in Photoshop. It needs thicker legs, different lungs, etc. Just like you need to redesign the content and the interactions when you move between big and small screens.)

The usability guideline for buttons and other touchable elements is that they should be at least 1 × 1 cm. We don't talk about the pixel counts of buttons for a reason: the question is not how touchable items look, but rather how big they are relative to a human finger.
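To make the guideline concrete, here's a small sketch (my own illustration, assuming the original iPad's published 132 PPI alongside the Mini's 163 PPI) that converts the 1 cm rule into pixels and shows how a button sized for the original iPad physically shrinks on the Mini:

```python
CM_PER_INCH = 2.54

def cm_to_px(cm, ppi):
    """Pixels needed to render a given physical length at a given density."""
    return round(cm / CM_PER_INCH * ppi)

def px_to_cm(px, ppi):
    """Physical length of a given number of pixels at a given density."""
    return px / ppi * CM_PER_INCH

# How many pixels is a 1 cm touch target?
print(cm_to_px(1.0, 132))  # iPad 1 (132 PPI):    ~52 px
print(cm_to_px(1.0, 163))  # iPad Mini (163 PPI): ~64 px

# The same 52-pixel button, left unchanged, on the Mini's denser screen:
print(round(px_to_cm(52, 163), 2))  # ~0.81 cm -- below the 1 cm guideline
```

In other words, a layout that just meets the guideline on the original iPad falls under it on the Mini unless the touch targets are redesigned.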

Real Use, Not Numbers

It's easy to write reviews that focus on specifications. It's easy to create comparison tables when all you're listing is numbers.

But what's important is how a design supports real use cases. Typically, component integration is more important than the raw power of each individual component. And often, the software user interface impacts users more than the underlying hardware. The ultimate test of a product comes when humans confront it, not from a listing of its specs.

