Involving Stakeholders in User Testing

by Jakob Nielsen on May 24, 2010

Summary: Besides usability specialists, all design team members should observe usability sessions. It's also good to invite executives. Although biased conclusions are possible, they're far outweighed by the benefits of increased buy-in and empathy.

For decades, I've recommended inviting all design team members and several levels of management above them to attend usability sessions. In fact, one of the main reasons to invest in a usability lab as a company progresses up the UX process maturity scale is the ability to host more observers without intimidating test participants. (But with only a few observers, any room where you can close the door can serve as your test facility.)

In our course on how to do your own user testing, we spend a fair amount of time discussing how to manage these observers, given that they don't know much about usability. That's indeed an important question. At one of my recent conferences, however, an audience member asked an even more important question: Why invite these "outsiders" to the test in the first place? Wouldn't it be more efficient to just run the test yourself without having to worry about developers, executives, and the like?

Most of the time, it probably would be more efficient to leave a single person in charge of all usability activities and let everybody else stay in their offices.

The main exception is the common case of testing buggy pre-release software. In such cases, it's handy to have a few developers around to help you overcome the inevitable crashes. (Remember, it's important to test the early builds: the more primitive the better, because early usability feedback has a much higher impact on the final product than findings that arrive too late to be implemented.)

Building Team Awareness

In real-world organizations, usability process efficiency is not measured in terms of staff-hours spent per amount of data collected. It's measured by staff-hours spent relative to the degree of product improvement.

Collecting buckets of user data will do no good if the resulting recommendations are not followed.

The main reason to invite team members to observe usability sessions is that it vastly increases the acceptance of the usability findings. Seeing is believing. And seeing it live is even more powerful.

True, we can record videos of customers having problems with a website and show clips from these videos in meetings. Doing so does have an effect, and it's definitely something we do in our seminars to increase the impact and memorability of our findings.

However, it's vastly more powerful when team members have seen those same usability problems for themselves during live test sessions with users. It's not really that they suspect us of faking the test videos. (Which you should never do, of course; once caught with phony data, your future "findings" will lack all credibility.) Actually being there while something happens has a bigger impact than seeing a recording after the fact, which itself has higher impact than simply reading about it.

Having team members present during usability sessions has many benefits:

  • Credibility. Because they've seen how you derive insights, they'll believe your usability findings and reports (vs. thinking you made them up or are just offering your personal opinions or preferences).
  • Buy-in. In addition to inviting team members to observe, you should also invite them to a debriefing to discuss what happened in the test sessions and to help draw the early conclusions. When people participate in the analysis, they're more likely to accept and act on the recommendations.
    • Note: This is not just a gimmick to enforce your design advice. The actual findings will be better when a group with broader expertise helps you analyze the observations. Plus, each additional pair of eyes will observe something extra.
  • Memorability. It's hard to remember findings that you've only seen presented in bullet points or read in a long report. It's easier to remember findings when you can relate them to your personal experience of observing some of the user sessions that generated the findings.
  • Empathy. Seeing nice people suffer under your design is a powerful motivator to make it right. Also, the excuse that "only stupid users would get this wrong" isn't used (even subconsciously) by team members who've heard those users make articulate and perfectly reasonable requests for a design that suits their needs.
  • Fewer design mistakes. When designers and developers have seen their actual customers, they're less likely to go overboard with design ideas that aren't going to work for users. The better the raw UI, the fewer fixes will be needed after the next round of user testing.

Inviting executives to attend user testing has many of the same benefits. They're more likely to prioritize user experience after experiencing users. And they're less likely to believe bogus ad-agency claims that souped-up designs "promote the brand" when they've heard how harshly their paying customers curse such designs. Finally, when it comes time to allocate next year's budget, you can't help but get more if management understands what you do — which they will because live user sessions lodge themselves so firmly in the mind.

Risks of Partial Observations

It's your job to do the user testing. Other members of the design team have plenty of other things to do, and executives are even busier.

You can often count on the actual UI designers to attend the entire usability study when testing their own designs, but even they might come for only a session or two during competitive tests. On the other hand, the marketing manager might come for all of the competitive sessions, but bow out before the fifth iterative test of a feature that's proving hard to nail.

Most team members won't have time to attend all of the user sessions. That's okay. Be sure to note in the invitation that people are welcome to come and observe for one or two sessions if that's all they have time for. However, when people haven't attended the complete study, there are some problems to look out for:

  • Premature conclusions from a partial sample. It's enough to test 5 users to get a good idea of the main usability insights. But it's not enough to test 1 or 2 users if you want to identify true trends and patterns. Anyone who sees only a few users might misidentify the main usability issues and won't have enough insight to analyze the solutions correctly.
  • Memory biases. As discussed further in our seminar on The Human Mind and Usability, our memory is subject to many biases and is pretty poor at remembering abstractions. The beauty of observing user sessions is that usability issues become more memorable because you see them happen first-hand. The danger is that you'll remember the usability problems you actually saw yourself much better than those you only read about in the report from the full study.
  • Entertainment bias. A variant of memory bias: it's easier to remember particularly striking or outspoken users than quieter ones. To enhance the business value of your site or product, test representative customers, not entertaining ones. And emphasize to observers that user testing is not show business, so "boring" users may be just as important.
  • Empathy biases. Similarly, designers, developers, and managers will subconsciously feel that it's more important to cater to those users they've seen than to meet the needs of other — equally important — user segments that they've only heard about.

These problems are inevitable, but you can alleviate them once you're aware of them, as well as warn the other stakeholders about them. In any case, the downsides are far outweighed by the benefits of inviting colleagues and executives to observe as many user tests as they can. For most of them, it's the only chance they ever get to see a live customer in the flesh, so they'll be grateful to you, which is one more reason to do it.
