Summary: Better to accept a wider margin of error in usability metrics than to spend the entire budget learning too few things with extreme precision.
Last week, I made a slide for the new User Experience (UX) Basic Training course with the recommended number of test users for different types of studies. I like teaching foundational courses because they afford me just this kind of opportunity — to distill 25 years of usability process research into a single table. Patterns crystallize when complex topics are condensed to the essence.
For example, why do we recommend testing more users for card sorting than for usability studies? Because the usual rule, "we're testing the system, not you," doesn't apply to card sorting. When eliciting mental models, we're actually testing the individual users instead of a predefined artifact, and the variability is thus larger.
The thing that surprised me most about my own table: I recommend doing most quantitative user testing with a sample size that typically entails a 19% margin of error.
19% sounds sloppy. Why does a fairly low level of accuracy usually suffice when estimating usability metrics?
- A 19% margin of error pretty much represents the worst-case outcome. Usually, the actual error is much smaller.
- The average usability difference between websites is 64%, so even in those few cases where we get a 19% measurement error, we'd usually pick the correct winner anyway.
These mathematical points suffice to defend the idea of saving budget and limiting quantitative studies to mid-sized samples.
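To see where a number like 19% comes from, the relative margin of error for a measured mean is roughly t · CV / √n, where CV is the coefficient of variation (standard deviation divided by the mean). The sketch below is illustrative only: the sample size of 20 users and a CV of about 0.49 are assumptions chosen to reproduce a ±19% margin at 90% confidence, not figures taken from the article.

```python
import math

def relative_margin_of_error(n, cv, t_crit):
    """Half-width of the confidence interval, expressed as a fraction of the mean."""
    return t_crit * cv / math.sqrt(n)

# Assumed inputs (hypothetical, for illustration):
#   n = 20 test users
#   cv = 0.49, i.e. the standard deviation is about half the mean
#   t_crit = 1.729, the t critical value for a 90% two-sided interval
#            with n - 1 = 19 degrees of freedom
margin = relative_margin_of_error(n=20, cv=0.49, t_crit=1.729)
print(f"margin of error: \u00b1{margin:.0%}")  # margin of error: ±19%
```

Because the margin shrinks only with the square root of n, halving a 19% margin would require roughly four times as many participants, which is exactly the budget trade-off the article argues against.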
But there are two deeper arguments that are even more important.
Focus on Big Problems
You shouldn't care about small usability issues. At this stage, we still have bigger fish to fry. When redesigning a website for usability, the average improvement in key performance indicators (KPIs) is 83%. Clearly, most websites still contain horrible usability problems. Intranets and mobile sites/apps are often even worse.
Your focus should thus be on the really big design problems, where your user experience is failing to meet customer needs. Typically, there are only a few issues with immense bottom-line impact. Better to invest heavily in those crucial improvements than mess around with changes that'll gain you only a percent or two.
Wasting your budget on overly precise measurements can easily sidetrack you from the important issues, and it certainly leaves you less budget to address them.
Maybe in 20 years, user interfaces will be good enough that our only remaining goal will be to fine-tune them for the last few percent of quality gain. That's definitely not the case today.