Details in Study Methodology Can Give Misleading Results

by Jakob Nielsen on February 21, 1999

A recent study by IPSOS-ASI purported to show that Web advertising is as good for branding as television advertising: users remembered 40% of Web banners and 41% of TV commercials. Because of this striking finding, the study was widely reported in the press.

The finding was indeed so striking as to conflict with most other evidence:

  • qualitative observations of Web users show that they are very goal-driven and ignore the ads while focusing completely on their task
  • eye-tracking studies by Will Schroeder and others quantitatively confirm the existence of banner blindness, where the user's gaze never rests on the region of the screen occupied by advertising
  • click-through rates are dropping like a stone and recently reached 0.5% (half of the 1% rate a year ago), causing savvy advertisers like P&G to pay no more than 1/7 of the normal cost of banners

When new data contradicts existing evidence and established interaction theory, the first reaction should be to suspect the new data, not to overthrow the old insights. Sure, sometimes an Einstein will have discovered that a Newtonian paradigm doesn't hold, but most paradoxical results turn out to be cold fusion.

In this particular case, a close inspection of the research methodology reveals a small but important detail in the way users were treated that makes the results unrealistic as a prediction of real-world Web usage.

The television viewers were watching a show that they had been asked to view rather than one they had selected themselves. This is a slightly unnatural situation, but probably one that doesn't much impact the extent to which they pay attention to the commercials.

In an attempt to use a similar approach to studying the online users, they were asked to go to a section of America Online "to evaluate the content of that area." Evaluating something is a very different user experience than using something for a self-imposed goal (the normal way people use the Web). When asked to evaluate some pages that they have no reason to use, people are likely to look around each page and check out every design element on the page. In contrast, when people actually use the Web, they go straight for the most likely solution to their problem and ignore every other part of the page: the second users see a link that leads to their goal, they click it and are off the page. (As an aside, this phenomenon also explains why it is invalid to measure usability by asking a survey panel to check out a site and rate it on a questionnaire.)

A smaller difference between the measures of TV and online advertising is that the TV viewers were responding to verbal questions read over the telephone whereas the computer users were responding to a visual questionnaire on their screen. It is possible that the visual representation of the questions was better at triggering the respondents' memory than the auditory questions. I am not sure how much this latter issue influences the outcome of the study, but the first issue is sufficient to make the conclusions irrelevant for anybody interested in real Web users.

It is admittedly hard to design a perfect study to compare TV and online advertising because user behavior is so different in the two media. In the study I am discussing here, the goal was to treat people identically, but that's exactly why the outcome is unrealistic for the Web. As an analogy, think of comparing bicycles with cars by asking riders and drivers to move at the same speed. You would find that car owners do not use their vehicles to travel very far.

A better study would be more naturalistic to allow for the differences in user behavior: have people watch TV for an hour (selecting their own preferred show) and see how many commercials they remember, and have users browse the Web for an hour (while performing real tasks like booking airline tickets, researching what scanner to buy, or tracking down the address of a long-lost friend) and see what banners they remember. Even this study is not fully realistic because people often use the Web for very quick in-and-out access to specific data.

This case study shows the importance of meticulous attention to detail when planning quantitative studies. The smallest problem in the methodology can significantly impact the outcome and give you results that are irrelevant for the real-world problem you are trying to solve.

Luckily, most Web usability studies are more robust, since they don't involve numeric comparisons between different concepts. The most common Web study looks at the way your customers use your site and where they have difficulties. As long as you have representative users and don't bias their actions, you will discover the major usability problems in your design. So don't despair: most real projects will have considerably fewer weaknesses than the study I criticized here.


See comments by Marianne Foley (from the organization that ran the study)

