First Rule of Usability? Don't Listen to Users

by Jakob Nielsen on August 5, 2001

Summary: To design the best UX, pay attention to what users do, not what they say. Self-reported claims are unreliable, as are user speculations about future behavior. Users do not know what they want.


In past years, the greatest usability barrier was the preponderance of cool design. Most projects were ruled by usability opponents who preferred complexity over simplicity. As a result, billions of dollars were wasted on flashy designs that were difficult to use.

One of the main advantages of the "dot-bomb" downturn is that cool design has suffered a severe setback. Companies are now focused on the bottom line:

  • Public websites, which formerly focused on building awareness, now aim at making it easy for customers to do business.
  • Intranets are similarly refocused on improving employee productivity. Many companies are attempting to create order, impose design standards, and enhance navigation on previously chaotic intranets.

Happily, glamour-based design has lost and usability advocates have won the first and hardest victory: Companies are now paying attention to usability needs.

Unfortunately, winning a battle with usability opponents doesn't win the war with complexity. It simply moves us to a new front line: The battle is now to get companies to do usability right.

Watch Users Work

Too frequently, I hear about companies basing their designs on user input obtained through misguided methods. A typical example? Create a few alternative designs, show them to a group of users, and ask which one they prefer. Wrong. If the users have not actually tried to use the designs, they'll base their comments on surface features. Such input often contrasts strongly with feedback based on real use.

For example: A spinning logo might look pretty cool if you don't need to accomplish anything on the page. Another example is the drop-down menu. Users always love the idea: finally a standard user interface widget that they understand and that stays the same on every page. However, while they offer users a sense of power over the design, drop-down menus often have low usability and either confuse users or lead them to unintended parts of the site.

To discover which designs work best, watch users as they attempt to perform tasks with the user interface. This method is so simple that many people overlook it, assuming that there must be something more to usability testing. Of course, there are many ways to watch and many tricks to running an optimal user test or field study. But ultimately, the way to get user data boils down to the basic rules of usability:

  • Watch what people actually do.
  • Do not believe what people say they do.
  • Definitely don't believe what people predict they may do in the future.

Say, for example, that 50% of survey respondents claim they would buy more from e-commerce sites that offer 3D product views. Does this mean you should rush to implement 3D on your site? No. It means that 3D sounds cool. The world is littered with failed businesses that banked on people's attitudes toward hypothetical products and services. In speculative surveys, people are simply guessing how they might act or which features they'll like; it doesn't mean they'll actually use or like those features in real life.

When and How to Listen

When should you collect preference data from users? Only after they have used a design and have a real feeling for how well it supports them. Jonathan Levy and I analyzed data from 113 pairwise comparisons of user interfaces designed to support the same task and found a 0.44 correlation between users' measured performance and their stated preference. The more a design supports users in easily and efficiently doing what they want to do, the more they like the design. Very understandable.

(Update: in newer research, I found a correlation of r = .53 between users' performance and their preferences for websites. That's higher than the .44 correlation for PC applications, but still low: it means you can predict only about a quarter of how well a design works from knowing how much users say they like it.)
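The "about a quarter" figure comes from squaring the correlation coefficient: r² (the coefficient of determination) is the share of variance in measured performance that stated preference can predict. A quick illustrative check of the arithmetic, using the two correlations reported above:

```python
# r squared = proportion of variance in performance predictable from preference
r_apps = 0.44  # PC applications (Nielsen & Levy, 1994)
r_web = 0.53   # websites (newer study)

print(f"apps: r^2 = {r_apps**2:.2f}")  # 0.19 -- under a fifth
print(f"web:  r^2 = {r_web**2:.2f}")   # 0.28 -- roughly a quarter
```

So even the stronger website correlation leaves about three quarters of a design's performance unexplained by how much users say they like it.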

However, when collecting preference data, you must take human nature into account. When talking about past behavior, users' self-reported data is typically three steps removed from the truth:

  • In answering questions (particularly in a focus group), people bend the truth to be closer to what they think you want to hear or what's socially acceptable.
  • In telling you what they do, people are really telling you what they remember doing. Human memory is very fallible, especially regarding the small details that are crucial for interface design. Users cannot remember some details at all, such as interface elements that they didn't see.
  • In reporting what they do remember, people rationalize their behavior. Countless times I have heard statements like "I would have seen the button if it had been bigger." Maybe. All we know is that the user didn't see the button.

So, do users know what they want? No, no, and no. Three times no.

Finally, you must consider how and when to solicit feedback. Although it might be tempting to simply post a survey online, you're unlikely to get reliable input (if you get any at all). Users who see the survey and fill it out before they've used the site will offer irrelevant answers. Users who see the survey after they've used the site will most likely leave without answering the questions. One question that does work well in a website survey is "Why are you visiting our site today?" This question goes to users' motivation and they can answer it as soon as they arrive.

Your best bet in soliciting reliable feedback is to have a captive audience: Conduct formal testing and ask users to fill out a survey at the end. With techniques like paper prototyping, you can test designs and question users without implementing a thing. Following these basic usability rules and methods will help you ensure that your design is truly as cool as it looks.

Reference

Nielsen, J., and Levy, J. (1994). Measuring usability — preference vs. performance. Communications of the ACM 37, 4 (April), 66-75.
