I'm seeing a disturbing trend in which user studies are geared to entertain rather than to reveal knowledge about the target designs. The people running such studies have the best of intentions: they want to
show their clients well-known usability problems in a captivating way
, and thereby gain support for design improvements.
Their philosophy holds that most Web managers are clueless about users, so the real goal of usability testing is simply to educate management.
However, if you stack your test to demonstrate something you already know, you
degrade your study
in two ways:
It's intellectually dishonest to run a study in order to generate specific findings. It's very easy to bias
the study so that the findings are not even real.
When you know what you want to find, it's easy to
overlook other issues
that might turn out to be the design's biggest problems. In my experience, when clients ask you to study one aspect of their website (say, the navigation), the real problem is often something different (say, the content). It's therefore essential to keep an open mind during user testing and
expect the unexpected.
Some common mistakes of entertainment-focused user testing include:
Screening for outgoing, articulate test participants
so that you can get more elaborate quotes and comments. While it's certainly more engaging for observers to watch users who talk a mile a minute, it's also potentially biasing. Out of the 233
guidelines for recruiting test participants
, the number one guideline is to get a
representative sample of customers
. If you're trying to sell, say, networked storage to enterprise customers, you need to test a broad range of system administrators. You can't just screen out introverted nerds, since they could be the key influencers and might well behave differently than more easygoing sysadmins.
Focusing the test on the site's sexy parts
, while skipping parts that require users to read detailed descriptions of products and services. But such boring behaviors often close the sale, particularly for business-to-business sites,
where detailed product specs and whitepapers are a key part of the user experience.
Prompting users to talk
in great detail about the problem your design team is currently debating. Yes, it's tempting to get input for your next decision. Such feedback isn't reliable, however, if you've turned users' attention away from their tasks to grill them about multiple, alternative design ideas. You have to watch users
, not listen to them talk
about hypothetical designs they aren't using. It's fine to test two or three alternative designs
that you've quickly mocked up. It's not fine to ask users to speculate on "how would you like it if the site did this...," because what people say has no relation to what they'd do if they actually used such a user interface.
If you find yourself running tests with an eye on how they might play in the observation room, you're probably doing something wrong. You're definitely not getting the data that your team depends on you for (or that your client is paying you for).
Usability As Showbiz
All that said, there are aspects of usability that require a more popular approach. Irving Berlin was wrong in saying that
"There's No Business Like Show Business,"
because all business is show business. In a business environment suffocated by information overload, you must wow and persuade people to keep their attention on your message.
When you're selecting
highlights from usability study videos
, you shouldn't include a 10-minute clip of a user visiting 20 erroneous pages. Show the first one or two wrong clicks, then throw up a transition title that says "nine minutes later...," and conclude with the clip where the user says, "This is a horrible website. I can't find anything. I'm never coming back."
Two entertaining minutes beat ten boring ones in a usability presentation. Not only will your team members and executives pay more attention to your findings, they'll also be more likely to attend your next usability briefing.
To heighten drama, cut out the boring parts. Yes, those many accumulated mistakes are what really teach you why
the user couldn't find anything, but there are key differences in how you discover, present, and document problems.
To present findings
, it's okay to emphasize material that engages and motivates your audience. For example, don't show a bunch of clips of your most introverted participants.
To discover findings
, however, you can't bias your study and skip directly to the good parts for three reasons.
First, if you don't let users go through those 20 wrong page views, you won't learn as much about the flaws in navigation design.
Second, if you guide users past "boring" pages, you won't learn how users interpret those parts of the site.
Third, if you take users directly to the site's "interesting" parts, they'll arrive at those areas with a substantially different mindset
and attitude than people who've had to struggle through page after irrelevant page to get there. Users are more inclined to quickly leave a section if they've struggled to reach it. In contrast, users easily guided there will have a more positive attitude and will dig harder; the section will (falsely) appear to be more successful as a result.
To document findings, your report should focus on the importance of the usability issues for the overall business, not on which issues generated the most colorful quotes.
Ultimately, if you run your research as an entertainment franchise, you'll weaken your findings and compromise product improvement. All the usability problems you missed due to poor testing methodology will remain in the interface, and your executives will conclude that usability doesn't have as
high an ROI
as you've been promising them.
In the long run, money talks, and you optimize ROI by emphasizing unbiased research and by saving the showmanship for your findings presentations. Usability's role in a design project is to be the
source of truth
about what really works in the world, as opposed to the team's hopes for what might work. The less thoroughly you uncover the truth, the less your company will need you in the long run.