Aspects of Design Quality

by Jakob Nielsen on November 3, 2008

Summary: Usability scores for 51 websites show some correlation among navigation, content, and feature quality, but no connection to other usability areas.


Some user interfaces are good, some are bad; we all know that. But why do designs differ in usability?

The easy answer is that some design teams have good designers, listen to their usability specialists, and comply with documented guidelines; other teams have bad designers, either do no usability work or ignore the findings, and prefer their own pet theories to established best practices.

But this easy answer just raises a second question: Why are some teams more focused on the quality of the user experience than others? Thanks to a new dataset I just collected, we can now analyze this question statistically.

It's extremely rare to have data about the usability outcomes of numerous design projects that are all focused on the same problem. Even when we do competitive usability research for our consulting clients, we test only 3–4 competing sites in the same industry because that's all we need to derive strategic usability recommendations. Testing additional competing sites would result in severely diminishing returns; it's better to spend that money testing additional iterations of the company's own design.

But we now have usability scores for 51 similar websites, thanks to a study sponsored by the Pew Charitable Trusts in which we evaluated the usability of the voter information websites from all 50 states and the District of Columbia.

Because voting laws differ between states, the sites are not literally identical. However, they are similar enough that it's fair to compare them. For example, states have different deadlines for requesting absentee ballots, but all states must inform their residents about absentee ballot rules — including the relevant deadlines — and offer voters a way to request the ballots.

Distribution of Usability Scores

The following histogram shows the distribution of usability scores for the 51 voter sites. The possible score range was from 0% to 100%, with higher numbers being better.

Note that a perfect score wouldn't indicate a site with perfect usability. A score of 100% would simply indicate that the site got full marks with respect to the current state of the art in all the usability aspects we evaluated. In actuality, the highest-scoring site in our study got a rating of only 77%, showing how far voter sites have to go relative to the best commercial websites. (E-commerce sites tend to have particularly good usability because they go out of business if people can't shop there. Although government sites also benefit from usability, it's rarely a matter of organizational survival.)

Histogram of usability scores for the 51 voter information websites (number of state voting sites at each score level).

The histogram shows a fairly normal distribution of usability: most states have middling usability. A few states have decent usability, with 3 sites scoring above 70%. Sadly, more states have low usability, with 8 sites scoring below 40%. But at least no sites have horrendous usability. The lowest-scoring site came in at 29%, which, while truly bad, would still allow the most determined and skilled users to complete tasks on the site.

Core Usability Aspects

You'd think that if a design team were good at one aspect of usability, it would be good at all the other aspects as well. Not so.

Our data indicates that there's rarely a relationship between the quality levels a site achieves on different usability aspects. Statistically, such relationships are measured as correlations, and many of the correlations we found are so close to zero as to be completely insignificant.

In other words, if a design team is good at one area of usability, its strength or weakness in another area is completely random. Maybe it's good; maybe it's not.

There are two exceptions to this finding:

  • There is a positive correlation of .54 between the quality of the navigation and information architecture and the quality of content usability.
  • There is a positive correlation of .40 between the quality of the navigation and information architecture and the quality of the site tools.

(As a reminder, correlations range from -1 to +1. Zero indicates no relationship between two variables. Positive correlations mean that the two variables move in tandem — the more so, the closer the correlation is to one. Conversely, negative correlations mean that the two variables move in opposite directions, so that one gets bigger as the other gets smaller.)
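
For readers who want to see the mechanics, here's a minimal Python sketch of how such a correlation is computed; the score arrays below are invented for illustration and are not the study's data:

    # Minimal sketch: Pearson correlation between two usability aspects
    # across a set of sites. Scores are hypothetical, not the study's data.
    import numpy as np

    nav_scores = np.array([0.45, 0.62, 0.71, 0.38, 0.55, 0.66])
    content_scores = np.array([0.50, 0.58, 0.75, 0.41, 0.49, 0.70])

    r = np.corrcoef(nav_scores, content_scores)[0, 1]
    print(f"r = {r:+.2f}")  # +1: move in lockstep; 0: unrelated; -1: opposite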

The following scatterplot shows the relationship between navigation quality and content quality:

Scatterplot of usability scores: navigation (x-axis) vs. content (y-axis); each dot is one of the state voting sites.

In the above chart, each dot represents one website. (Because some sites got identical scores on the two usability aspects, those dots are plotted in the exact same spot and only one dot is visible. That's why the diagram looks like it has fewer than 51 dots. However, I computed the trendline from all 51 dots, visible or not.)

This correlation is highly significant at p < 0.001. (The correlation between navigation and site tools is also significant at p < 0.01. This second scatterplot is not shown.)
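
If you want to check significance levels like these yourself, they follow from just the correlation and the sample size via the standard t-test for a Pearson correlation. A sketch in Python, assuming n = 51 sites as in this study:

    # Sketch: two-tailed p-value for a Pearson correlation r over n samples,
    # using the standard t-test with n - 2 degrees of freedom.
    from math import sqrt
    from scipy.stats import t as t_dist

    def correlation_p_value(r, n):
        t_stat = r * sqrt(n - 2) / sqrt(1 - r ** 2)
        return 2 * t_dist.sf(abs(t_stat), df=n - 2)

    print(correlation_p_value(0.54, 51))  # navigation vs. content: ~0.00004
    print(correlation_p_value(0.40, 51))  # navigation vs. site tools: ~0.004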

This finding indicates that there are core usability areas that sites do try to cover: how users get around, the content they'll find, and the features offered. All very good; these are indeed important issues. Some teams have a clue about these core areas, whereas others seem to be throwing darts at Dreamweaver, constructing randomly organized websites with poor information and useless features.

Still, even the correlations between the core usability aspects aren't that strong: a correlation around .5 means that only about 1/4 of the variability in one aspect is explained by the other, leaving roughly 3/4 to other factors. So being good at one thing still leaves most of the other core usability issues unaddressed.
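
To make those fractions concrete: squaring a correlation gives the share of variance in one measure that the other accounts for. Applied to the two correlations reported above:

    # Sketch: variance explained (r squared) for the observed correlations.
    for label, r in [("navigation vs. content", 0.54),
                     ("navigation vs. site tools", 0.40)]:
        explained = r ** 2
        print(f"{label}: r = {r:.2f}, {explained:.0%} explained, "
              f"{1 - explained:.0%} unexplained")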

Neglected Usability Aspects

According to our statistical analysis, many usability aspects are completely disconnected from the usability core. These aspects include:

  • Homepage usability
  • Search
  • Accessibility
  • Web presence (that is, how users get to content from outside the site, or "usability-in-the-large")

The correlations between these important quality aspects are low (and sometimes even negative), and they're not highly correlated with the core areas either.

For example, there's a negative correlation of r = -.1 between homepage usability and accessibility.

This negative correlation certainly doesn't mean that accessibility stands in opposition to a good homepage design. Remember: We evaluated the homepages' usability, not whether they had a particularly glamorous appearance or featured intricate Flash animations. (The latter usually reduce both usability and accessibility, because sites typically use them wrong.)

Rather, the negative correlation indicates that designers aren't treating accessibility as a component of user experience quality. Most likely, government agencies are focused on complying with legalistic accessibility regulations instead of trying to make the sites easy for people with disabilities to use.

My interpretation of the low or negative correlations is that many important usability areas are inadequately prioritized in design projects. Whether a site ends up good or bad for areas beyond the usability core is therefore random.

By "random," I don't mean that the design is determined by a coin toss. Rather, I mean that it's a coincidence when a design team includes a person who happens to know the key guidelines for delivering a quality user experience in areas outside the bounds of core usability.

A site that does well on the basics might therefore completely neglect many other aspects of Web usability. And, conversely, a site that's horrible on the basics might be lucky enough to have a designer who understands some other usability aspect and boosts the site in that one area.

The following histogram of homepage quality across the 51 websites shows a much broader distribution than we saw for overall usability. Some sites have completely miserable homepages, whereas others are close to achieving all of the current best practices.

Histogram of homepage usability scores for the 51 voter information websites (number of state voting sites at each score level).

As the correlations show, doing well or poorly on this score is disconnected from a site's quality in other ways. This is why overall usability ratings are typically middle-of-the-road: they average great scores and horrible scores across the various usability aspects.

So, most sites have something they do very well and some area where they totally let users down. Objectively, this often averages out to a mid-level usability performance. For users, however, the mix of good and bad design on the same site feels sloppy, as if the site isn't trying hard enough to serve them.
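
As a toy illustration of this averaging effect, consider one hypothetical site scored on five aspects; the numbers and the equal weighting are my assumptions for illustration, not the study's actual scoring scheme:

    # Toy illustration: extreme per-aspect scores average out to a middling
    # overall rating. Scores and equal weighting are hypothetical.
    aspects = {
        "navigation": 0.75,
        "content": 0.70,
        "homepage": 0.20,
        "search": 0.35,
        "accessibility": 0.80,
    }
    overall = sum(aspects.values()) / len(aspects)
    print(f"overall score: {overall:.0%}")  # 56%: middling despite the extremes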

Good User Experience Requires an Integrated View

There's a reason that we have a "total user experience" concept to encompass everything that users encounter. It's not enough to have a great design for part of the user interface. Good navigation, say, is certainly a necessity for a great user experience, but it's not sufficient. Offer a bad homepage, and users might turn away before they even start navigating.

We can liken a website's user experience to the metaphorical chain that's no stronger than its weakest link. If any one usability attribute fails, the overall user experience is compromised and many users will fail.

How can you ensure the quality of the total user experience? You need an integrated view of usability, from the user's perspective, and you need to develop the site through user-centered design.

We often have prospective clients call us up to request help with one element of their site or intranet. For example, they might want to improve their IA or navigation. This is indeed a worthy cause and would help most sites. But is it the weakness causing the most lost business value? Maybe. Or maybe not. The only way to find out is to start from first principles and assess the overall user experience.

Don't overly focus on one aspect of user experience, no matter how important. Take the integrated view rather than let key aspects drift in the wind.

