Summary: Enemies of usability claim that because 'the experts disagree,' they can safely ignore user advocates' expertise and run with whatever design they personally prefer.
A reader recently sent me the following message, seeking my advice on a common quandary:
I read your article today ("Guesses vs. Data as Basis for Design Recommendations") after an awful workplace bicker with subject matter experts about how their content failed to meet the Web readers' needs. As a Web manager for a small state government agency, I am constantly frustrated by content owners' subjective opinions, which they fling at me while stubbornly refusing my suggestions:
- "Yeah, see, I don't like that."
- "I wouldn't click there, and so neither will they."
- "Oh, they'll know what that means, even if you don't."
When I recently cited one of your articles to support my stance, I was told "there's always evidence to support any opinion." Giving them data didn't seem to impress them one iota; they seemed annoyed that I had found data to support my argument and were not swayed.
Do you have any other suggestions on how to get colleagues — who are not usability experts — to trust me with their content? How do I convey that what I do is a profession and skill set? How do I gain my colleagues' respect as their Web content strategist?
Sadly, this situation is common not just in government agencies, but also in most big companies. There are several different issues at play here; each requires a specific remedy.
Differences Between Users and Project Members
The first malady here is that content owners are relying on their own opinions and preferences. The primary cure is to point out that these subject-matter experts are completely unrepresentative of the target audience on almost every possible dimension:
- Because they're in charge of a particular topic, they obviously know much more than the target audience about all aspects of it, from background knowledge to specialized terminology.
- As insiders working in your organization, they're also aware of how you structure the domain and each department's responsibilities. (Similarly, product managers know not just about their own products, but also how they relate to the company's overall product line.) This is the root cause of many of the IA failures we see in testing websites — and intranets.
- As professionals — usually with college degrees — they might be smarter and better educated than many people in the target audience. This certainly varies across websites, but it's typically true for sites that target a broad consumer audience, senior citizens, and many recipients of government services, such as welfare benefits.
- Sometimes, content owners are also more tech savvy than the audience, with a better understanding of computers and Internet concepts. This is analogous to application design, where developers often have far greater technology skills than users.
- Finally, because it's their project, content owners are much more motivated to care about the content than users are. And the less motivated people are, the more likely they are to skim text in an attempt to extract only the most useful information.
For all of these reasons, it doesn't matter what content owners themselves like or understand; the behavior of real users is likely to be completely different.
Luckily, these differences are fairly easy to explain to anybody who's willing to be objective. Better yet, each difference flatters subject-matter experts by emphasizing their superior knowledge.
Seeing Is the Only Way to Convert Unbelievers
Once you've successfully argued that content owners can't project their own preferences onto the target audience, you're left with a question: How should you judge usability?
Although it's good to cite external research, the sad fact is that nothing is as persuasive as testing your own users. Even if numerous outside studies have identified a certain phenomenon, many people won't be convinced until they've seen it for themselves.
That's one of the main reasons I always recommend running your own user tests, even if you're designing a fairly simple website and virtually all of your findings will replicate the published literature. Seeing is believing, and most skeptics will leave the lab highly motivated to change their ways (and, more importantly, to change the site so that it finally works).
This is also why you should move heaven and earth (or at least serve free pizza) to get all stakeholders to observe a few user sessions. You can also flatter them some more by explaining that it will be difficult to correctly interpret the study results without them. (That's not a lie, but the main reason to invite them is so that they'll believe the study findings.)
Simple user studies are cheap, and it's almost always worthwhile to spend a few days observing a handful of representative users working their way through your content.
Which Data to Trust
It's true that "there's always evidence to support any opinion," but that doesn't mean you should ignore data. After all, some data is clearly better than other data.
The main facts about how people read on the Web are extremely well established, and literally hundreds of studies have reproduced our original findings over the past 12 years.
The same is true for all of our usability guidelines: most have been confirmed by other independent studies. Anyone who bothers to run a study will discover the same thing, because there are no usability secrets — it's simply a matter of looking.
Still, while most usability evidence strongly aligns, there are deviant results to be found. People who don't know any better will stumble across such findings in a Web search and proclaim that "the experts disagree." Even when that's technically true, it's not a license to ignore usability data and follow any random path.
Instead, you should weigh the evidence. On one scale, you have hundreds of studies from experts across industries and countries; they all agree on the big picture, and often document their findings with substantial reports. On the other scale, you have a few deviant postings (plus many guesses, but as previously discussed, you should disregard pundits who don't test their theories with real people). This simple weighing exercise usually tips the scales in favor of the consensus.
Deviant usability findings are typically caused by one of the following:
- Weak study methodology. Unless you're an expert in usability methodology, this can be hard to assess, but common problems include:
- Unrepresentative users. Academic studies, for example, tend to test students instead of people who better represent the target audience. (See guidelines for recruiting representative study participants.)
- Unrealistic tasks that are overly narrow compared to the free-form roaming that characterizes real user navigation. Such bogus tasks are particularly common in eyetracking studies, which are easier to conduct when you take users directly to the pages that you want to heatmap. But people behave completely differently when they're dropped directly onto a page than when they arrive at it through their own navigation. (To do eyetracking on a shopping cart, for example, you should first ask users to shop.)
- Biased study facilitation, where the facilitator talks too much and guides users in a way that changes their behavior.
- Statistical flukes. If you rely on a 5% significance level, then 1 in 20 findings will be due to random fluctuations and won't represent a real effect (see the quick calculation after this list). Unfortunately, the 19 studies that correctly confirm existing wisdom are boring and don't get much coverage. The 20th — invalid — outcome, however, is easily found in a search for hotly debated blog posts.
- Second-tier usability consultants who try to attract attention by being different.
- Unusual circumstances. I designate something as a usability guideline if it holds true for about 90% of designs. So, in 10% of cases, something different happens because that design addresses a problem that's extremely different from the typical case. People always think that their projects are unique. However, 9 times out of 10, their project isn't different enough for the common usability guidelines to fail.
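To make the arithmetic behind the statistical-flukes point concrete, here's a minimal illustrative sketch (the count of 20 studies is hypothetical, not a reference to any particular body of research): at a 5% significance level, a test of a non-existent effect still has a 5% chance of coming out "significant," so among 20 such tests you should expect roughly one fluke, and the chance of at least one is about 64%.

```python
# Illustrative sketch only: rough arithmetic behind "1 in 20 findings are flukes"
# at a 5% significance level. The number of studies is hypothetical.

alpha = 0.05      # significance threshold: chance a null-effect test looks "significant"
n_studies = 20    # hypothetical number of independent tests of a non-existent effect

expected_flukes = alpha * n_studies            # about 1 spurious "finding"
p_at_least_one = 1 - (1 - alpha) ** n_studies  # about 0.64

print(f"Expected spurious findings: {expected_flukes:.1f}")
print(f"Chance of at least one spurious finding: {p_at_least_one:.0%}")
```

That one spurious result is exactly the kind of outlier that gets amplified online, which is why weighing it against the mass of consistent findings matters.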
There's a big reward for claiming to find something completely new that contradicts all established wisdom. Seminars sell better when they claim to reveal "secrets" or "all-new, all-different" results. I admit that I always hope for new findings in our own research, because I know that they'd make us more money. But year after year, the usability findings remain fairly robust and steady, and I'd rather report the truth than increase revenue. Luckily, there are enough honorable usability experts in the world that the findings of most other reports are similar to our own.
When judging which data to trust, look at the economic incentives. For example, studies of Internet advertising's effectiveness conducted by advertising agencies are inherently suspect compared to those conducted by people who don't care whether you spend more or less money on ads. Similarly, the rewards from being "new and different" mean that those studies that confirm existing knowledge are inherently more likely to be trustworthy.
Building Respect
So far, I've presented all the logical arguments for why people should follow your advice. However, logic will take you only so far. Ultimately, your colleagues must respect your professional expertise so that you don't need to bury them in an avalanche of external research data for every decision.
Respect comes only from proven performance. Once content owners see how much better customers react to websites that are written and designed according to established usability guidelines, they'll start respecting you more. Sadly, this is a chicken-and-egg situation: you get to demonstrate the value of your advice only if it's being implemented.
This is why it takes some time to build respect. There are two ways to incrementally improve the situation:
- Do use logic to get some of your arguments accepted. Logic won't win the day, but in most organizations, it's not completely ignored.
- Run user studies and do whatever it takes to get others to observe some sessions. When they see first-hand that you're right this time, they'll believe you a bit more next time.
If you have the budget, a third approach can help as well: bring in an external consultant, or prod your colleagues to attend a usability seminar (say, our UX Basic Training :-). When they hear internationally recognized authorities say the same things you've been saying, there's a better chance that they'll listen to you in the future.
This is a hill-climbing process. You can't go from contempt to respect in a day, but you can gradually build respect by continuously doing your job well. This is very similar to the way an organization as a whole builds usability maturity: one step at a time.