The 3Cs of Critical Web Use: Collect, Compare, Choose

by Jakob Nielsen on April 15, 2001

Summary: According to a recent critical incident analysis, users' most important Web tasks involve collecting and comparing multiple pieces of information, usually so they can make a choice.


Traditionally, critical incident analysis has been a great tool for collecting user feedback about existing user interfaces. To do it, you basically ask the user to recall a prominent case where the interface was uncommonly helpful or particularly disappointing. I usually ask users for both positive and negative examples, and the responses always help me understand how they're using the system and how I can improve it by making certain aspects more or less prominent.

Unfortunately, critical incident analysis is less useful for many Web projects for two main reasons:

  • The site or new feature may not exist yet, so users have no real-life experience using it.
  • Websites often fail in ways that are critical to the company but not to users, who simply leave and go to another site. Users rarely recall why they left a site after a minute or two. Abandoning a shopping cart because you can't find the shipping costs is not an incident that burns itself into your memory cells. This is why there is little value to those surveys on "why users abandoned their shopping carts" that some analysts publish.

Xerox PARC Study of Critical Incidents on the Web

Researchers from Xerox PARC recently presented the mother of all critical incident studies. The big question: What are the important things people do on the Web as a whole? Although individual websites may not generate good critical incidents, the totality of users' online experience surely does.

Julie Morrison, Peter Pirolli, and Stuart Card collected responses to the following statement from 2,188 people:

Please try to recall a recent instance where you found important information on the World Wide Web, information that led to a significant action or decision.

The obvious weakness of this request (and the entire critical incident method) is that it does not address average Web use; it looks only at important use. For example, only 2% of the respondents referred to reading news when describing a critical incident, whereas a separate survey of what these same users do on the Web found that 24% of them read news regularly.

However, we can turn this bug into a feature. Looking at what users find important on the Web provides several advantages:

  • Critical tasks are more likely than average tasks to lead to value-added services that users will pay for.
  • If you support important tasks, users are likely to turn to you for everyday tasks.
  • By understanding what's critical to users, you might gain insight into what's different and exciting about the Web; this can inspire you to innovate.

Main Method: Goal-Driven Collection

The PARC researchers analyzed the methods users described for arriving at the information they needed for their critical tasks.

  • Collect: 71%. Users searching for multiple pieces of information. They are driven by a specific goal, but are not looking for one particular answer.
  • Find: 25%. Users searching for something specific.
  • Explore: 2%. Users looking around without a specific goal.
  • Monitor: 2%. Users repeatedly visiting the same website to update information. Visits are triggered by routine behavior rather than a particular goal.

The most obvious conclusion is that, when it comes to critical Web use, users are almost always goal-driven: 96% of the time in the PARC study. Although this has been common knowledge for some time, the magnitude of the percentage surprised even me.

It's also interesting that users need to collect multiple pieces of information almost three times as often as they need to locate a single specific item. Yet the entire browsing paradigm is optimized for accessing individual locations; users are typically on their own when they want to collect more than one answer.

Main Task: Compare and Choose

In the study, the primary reasons for the respondents' important use of the Web were classified as follows:

  • Compare/Choose: 51%. Evaluate multiple products or answers to make a decision.
  • Acquire: 25%. Get a fact, get a document, find out about a product, download something. (Note: Morrison et al. use the term "find" to refer to these tasks, but I prefer the term "acquire" to differentiate the goal from the method, as discussed above.)
  • Understand: 24%. Gain understanding of some topic; this generally includes locating facts or documents.

The important tasks are thus divided almost equally between cases where the user is trying to decide between multiple options and cases where the user is pursuing a single option.

Implication for Usability: 3C Testing

The three Cs (collect, compare, and choose) describe most of the Web's critical use. As a result, when we plan usability studies of websites, we should make sure to include test tasks that address these activities.

Of course, usability studies should also test simpler tasks. We should not overlook the less-critical aspects of using the Web, since they account for more of users' time. But, considering how poorly the Web currently supports the 3Cs, we do need to give them more focus to help users succeed at their most important tasks.

Reference

Morrison, J.B., Pirolli, P., and Card, S.K. (2001): "A Taxonomic Analysis of What World Wide Web Activities Significantly Impact People's Decisions and Actions." Interactive poster, presented at the Association for Computing Machinery's Conference on Human Factors in Computing Systems, Seattle, March 31 - April 5, 2001.

 

