How Big is the Difference Between Websites?

by Jakob Nielsen on January 19, 2004

Summary: The average difference in measured usability between competing websites is 68%. This is smaller than expected, but makes sense given the dynamics of design within individual industries.


I recently read through a huge pile of usability reports to compute statistics about usability project outcomes for my course on usability in practice. The goal was to help frame participants' expectations for the studies they'll run when they return home. One of those statistics is likely of wider interest, even for readers who don't run usability projects.

Competitive testing is a special type of usability study that compares designs from multiple companies in the same industry. Across the studies I analyzed, the average difference in measured usability was 68% when comparing two competing companies.

(Because the raw data was in the form of ratios, I computed the average using the geometric mean. One data point, for example, might represent a comparison of two online brokers, where buying shares took users two minutes on one site, and three minutes on the other. The ratio between the two sites would be 3/2 = 1.50, corresponding to a difference of 50% in measured usability. Thus, when I say something like "A is 50% better than B" I really mean "the ratio between A and B is 1.50" but that's too clumsy to use in expository writing.)
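For readers who want to recompute this kind of average, here's a minimal Python sketch. The ratios are made-up illustrations of competitive comparisons, not my actual data.

    import math

    # Each value is the ratio of the worse site's score to the better
    # site's score for one competitive comparison. For example, buying
    # shares in 3 minutes on one site vs. 2 minutes on the other gives
    # 3/2 = 1.50. Illustrative numbers only.
    ratios = [1.50, 2.10, 1.25, 1.80, 1.40]

    # The geometric mean is the n-th root of the product of the ratios;
    # averaging the logarithms and exponentiating computes it without
    # overflow for long lists.
    mean_log = sum(math.log(r) for r in ratios) / len(ratios)
    geometric_mean = math.exp(mean_log)

    print(f"Average ratio: {geometric_mean:.2f}")            # 1.58
    print(f"Average difference: {geometric_mean - 1:.0%}")   # 58%

The geometric mean is the right average for ratios: an arithmetic mean would overweight the large ratios and inflate the "average difference."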

How to Use Competitive Scores

If you test your own site against a major competitor's, you'll likely find a difference in measured usability of around 68% (that's the average across studies; individual results vary considerably). What does that mean for your project?

If it turns out that you are 68% better than your competitor, break out a bottle of champagne and toast your Web team. Gloat. But don't slack. Unless your competition is utterly incompetent, they'll be running their own competitive studies and will shortly discover their site's shortcomings and fix them. On the Web, all advantages are temporary, and you must keep innovating to stay ahead.

Half the time, of course, you'll get the sad message that your competitor is 68% better than you. This verdict has one redeeming quality: it's very motivating data to show to your upper management. After seeing it, they're much more likely to support a major change in your Web design's direction, with more emphasis on usability and customer service.

(Unfortunately, intranets cannot directly collect these motivating statistics. As a surrogate, measure your own intranet and compare the results with other intranets' published usability measurements.)

Some competitive studies reveal measured usability differences of only a few percent. You might think that such studies are a waste, but they can still provide substantial value by offering insights into specific design elements. Even when two sites get the same overall score, there are usually big differences in their component scores. Some tasks might be much easier on your competitor's site, and those tasks would indicate areas where you can learn from the competition and improve your design.
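As a concrete (and entirely hypothetical) illustration, here's a small Python sketch of two sites whose overall scores match while their per-task performance diverges sharply; the task names and times are invented for the example.

    # Hypothetical task times in minutes for two sites whose overall
    # (geometric-mean) scores are identical: 2 x 4 x 3 = 4 x 2 x 3.
    task_times = {
        "find product": {"ours": 2.0, "theirs": 4.0},
        "check out":    {"ours": 4.0, "theirs": 2.0},
        "track order":  {"ours": 3.0, "theirs": 3.0},
    }

    for task, t in task_times.items():
        ratio = t["ours"] / t["theirs"]  # > 1 means our users are slower
        if ratio > 1.2:
            print(f"{task}: competitor leads by {ratio - 1:.0%} -- learn from their design")
        elif ratio < 1 / 1.2:
            print(f"{task}: we lead by {1 / ratio - 1:.0%} -- expect them to copy us")
        else:
            print(f"{task}: roughly even")

Despite the identical overall scores, the task-level breakdown points straight at where each side should look for improvements.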

Whether the measured difference is big or small, positive or negative, it's a statistic that's well worth tracking over time. Currently, few companies are mature enough in their strategic usability thinking to track these numbers, but one of the greatest benefits of metrics comes from analyzing long-term trends. The goal is to improve your meta-design — that is, the design of your design processes — so that your capability for producing good results follows an ever-increasing curve.

Why Competitive Differences Are Relatively Small

You might think that an average difference of 68% is impressive. It's certainly big enough to excite executives and make them pay attention to usability.

But a 68% average difference between competing sites is fairly small when you consider that the average improvement from redesigning a website for usability is 83%. Several explanations might account for this discrepancy.

A given industry sector typically has fairly established best practices for Web design that all the major players emulate. The best practices for e-commerce usability, for example, are well documented and widely known among e-commerce designers.

My data suggests that the variability between website sectors might be almost as large as the variability within a given sector. This data is not as firm as I would like, so I won't report specific statistics, but between-sector standard deviations seem to be about 4/5 of within-sector standard deviations. Thus, benchmarking two sites from the same sector often yields a smaller-than-expected difference in measured usability, given usability's distribution across the Web as a whole: a substantial part of the variability is eliminated as soon as you specify a type of website.
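To see why fixing the sector shrinks the expected difference, consider a back-of-the-envelope variance decomposition. The 4/5 ratio comes from the observation above; the assumption that the within- and between-sector components are independent is mine, purely for the sake of the sketch.

    # Units are arbitrary; only the 4:5 ratio of between- to
    # within-sector standard deviations is taken from the article.
    within_sd = 5.0
    between_sd = 4.0  # about 4/5 of the within-sector figure

    # For independent components, variances (not standard deviations) add.
    total_var = within_sd**2 + between_sd**2   # 41.0
    total_sd = total_var**0.5                  # ~6.40

    share_removed = between_sd**2 / total_var  # ~0.39
    print(f"Web-wide standard deviation: {total_sd:.2f}")
    print(f"Variance removed by fixing the sector: {share_removed:.0%}")

Under these assumptions, a same-sector comparison draws from a distribution whose standard deviation is about 22% narrower than the Web-wide one (5.0 vs. roughly 6.4), so smaller measured differences are exactly what you'd expect.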

Typically, big e-commerce sites have pretty good usability because their managers obsess over the smallest usability metrics, knowing that any usability improvement translates directly into hundreds of thousands of dollars in increased sales. In contrast, big established companies and government agencies frequently have sites that completely ignore customer needs because they don't know how often their users leave in disgust.

There is something of a "rising tide lifts most boats" phenomenon for each website genre: If you're lucky enough to be in a usability-conscious sector, you can leverage much of what's known. You're also likely to get management support for implementing usability findings, rather than watching them get swept under the carpet in favor of glamour design.

Competitive studies tend to measure very narrow site niches: a company selling, say, re-grind fixtures for deep-hole drilling will typically commission a comparison with the biggest seller of re-grind fixtures, not with sites that sell nanofibers. Such companies also tend not to test against very small sites in the drilling industry, even though doing so might reveal much bigger differences than a test of two similarly sized sites.

Within narrowly defined niches, usability enhancements percolate quickly. Indeed, one of the main reasons to run regular competitive studies is so that you won't be left behind by a competing site's improvements. The tendency to copy the best designs explains why two sites in the same narrow sector will have smaller differences in measured usability than one might expect.

Can Usability Offer Sustained Competitive Advantage?

When you run a thorough usability project in which you discover customers' needs and design your site accordingly, you can expect your site to improve by 83% on average. Your competitors, however, will quickly "borrow" your best ideas, reducing your lead to 68% on average.

So why should you invest in usability? Why not just copy the competition's usability results? Three reasons:

  • Unless you run your own competitive usability studies, you won't know which design ideas are worth implementing on your own site. An idea that sounds good in theory might not produce many benefits in practice. (Individual customization on intranet portals springs to mind; role-based personalization turned out to work better when we looked at a large set of portals.)
  • You can patent usability innovations to keep the competition from stealing them. Most Web projects are managed by marketing departments that have no experience with the patent system. Web design innovations, however, are inventions and should be protected when you invest in developing something new. Talk to people in your legal department. They might know of a patent attorney who doesn't bite.
  • Finally, a 68% competitive advantage is still worth having. Much of usability's long-term value comes not from easily replicated design elements, but from integrating these elements into a unified user experience based upon an understanding of human behavior and what customers need when.
