Test users had high-speed connections to the Internet: typically T-1 lines running at around 1.5 Mbps. Such fast connections are rare among home users, and even business users usually share bandwidth with colleagues, so their effective speeds fall well below T-1.
Thus, studies intended to guide current Web design should have the users suffer at lower speeds in order to generate more realistic results.
For studies intended to explore future directions and probe what one might want to do in five to ten years, it is obviously better to use high speeds, as the Poynter study did. The same holds for studies of high-end users, though it is wise to remember that even high-end users often access sites from their laptops in hotel rooms during business trips: in many places, you are lucky to get 28.8 kbps under those conditions.
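To put those speeds in perspective, here is a rough back-of-the-envelope calculation; the 100 KB page weight is an assumed figure for a graphics-heavy news page, not a number from the study:

```python
# Rough download-time estimate at different connection speeds.
# Assumption: a graphics-heavy news page weighing 100 KB; real-world
# throughput is usually lower than the nominal line speed.

PAGE_SIZE_KB = 100  # assumed page weight, for illustration only

for name, kbps in [("T-1 (1.5 Mbps)", 1500), ("28.8 modem", 28.8)]:
    seconds = PAGE_SIZE_KB * 8 / kbps  # kilobytes -> kilobits, over line speed
    print(f"{name:>16}: {seconds:4.1f} s")
```

At T-1 such a page arrives in about half a second; over a hotel-room modem it takes nearly half a minute, which is why graphics appear so much more slowly under realistic conditions.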
As it turns out, one of the main findings from the study was that users look at text first and partly ignore the graphics on Web pages. Under more realistic usage conditions, this finding might have been even stronger, since the graphics would have been slower to appear. Given the fast speeds in the study, this finding serves as a good counter-argument to those clueless "analysts" who claim that we just need to wait for faster bandwidth and the Web will turn into television.
- Subjects were recruited through promo pieces on newspaper websites (Chicago Sun-Times and St. Petersburg Times)
- Reading online news at least three times a week was the chief criterion for selecting subjects from those who responded to the promos
With this approach, it is no wonder that the test users were interested in reading online newspapers.
This selection bias makes it impossible to generalize the study's findings that users spent an average of 34 minutes per news-reading session and that they spent most of that time at traditional newspaper sites. People who prefer non-traditional news sites would have been unlikely to be recruited for the study, and people who spend very little time reading news would have been equally unlikely to stumble across the announcement and become respondents.
In a rather surprising finding, users looked at 45% of the banner ads, even though they spent an average of only one second per ad doing so.
Banner blindness has been documented in many studies, including several other eyetracking studies in both the United States and Europe. The most credible evidence of all is the dramatic year-over-year decline in click-through rates over the last five years, from about 2% in 1995 to 0.2% now: the accumulated behavior of hundreds of millions of Web users clearly indicates that they ignore advertising on the Web.
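To see how steep that decline is, here is a quick sketch that derives the implied annual drop from the two endpoint figures cited above, assuming a steady geometric decline between them:

```python
# Implied year-over-year decline in banner click-through rates, assuming
# a steady geometric decline between the two cited endpoints:
# roughly 2% in 1995 and 0.2% five years later.

start_ctr, end_ctr, years = 0.02, 0.002, 5

annual_factor = (end_ctr / start_ctr) ** (1 / years)
print(f"CTR retained each year: {annual_factor:.2f}")      # ~0.63
print(f"Implied annual decline: {1 - annual_factor:.0%}")  # ~37%
```

In other words, click-through rates would have to fall by roughly a third every year to travel from 2% to 0.2% in five years.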
Why did the current study contradict common sense and most previous studies? Some possible reasons:
- Test subjects were recruited through notices on newspaper sites, so they may have been a self-selected sample of the few people who do look at banners: such people are probably more likely to notice a small promo and react to it.
- The test task may have predisposed users to look around the Web pages more than they normally would (even when told to "behave normally", people want to be careful when they are being tested and have a camera strapped to their forehead).
- The ads were displayed to the users in the first place because the test computers did not have ad-blocking software like WebWasher installed. Since there are still only a few million users of such products, it is currently the correct decision to run Web user tests with standard browsers that do show ads.
- One second may be just enough to recognize something as an ad and decide to ignore it. It is certainly not enough to follow the incessant animations that are common in banners, which take longer to cycle through their entire message.
Update: Correcting Study Bias
One great advantage of doing newer research is the ability to avoid the mistakes of previous rounds. We aimed at strong real-world validity in our new eyetracking research. Current eyetracking software makes it tempting to conduct less-valid studies in which users are restrained from browsing freely, but we didn't succumb to the temptation to run a bad study simply because it's easier.
In one respect we did make the same "mistake" as the Poynter study: we also used a high-speed Internet connection, so we didn't get to test the dial-up user experience. (Of course, we have done so in many other studies.) Doing eyetracking solely with high-speed users is not a big problem these days, however, since most users in the United States and other rich countries have such connections.