This essay was written February 24, 2007.
A poorly designed ballot hit Florida again in the 2006 elections, this time in Sarasota County. About 13% of voters failed to cast a vote for the House of Representatives, even though they voted for the Senate. Voting was done on a touch-screen computer, where users had to make their way through 21 screens to cast all their votes for the various offices.
The first screen listed all the candidates for the Senate, and required one touch before the user proceeded to the next page.
Most of the second screen was taken up by the list of candidates for Governor, and it was clear enough for users how to vote for their preferred candidate and proceed to the third page.
The problem was that the two candidates for the House were also listed on this second page, in a tiny sliver at the top of the page, above a headline for the state elections that was highlighted in color. 13% of users overlooked this top area and looked only at the main body of the page.
Anybody who has ever worked in Web usability knows what caused this problem: banner blindness. People have a tendency to never look at a slim rectangular area above the page's main headline. Banner blindness has been documented since 1997 and has been confirmed in recent eyetracking studies.
A second usability issue contributed to the problem, though not as strongly: consistency. The first page trained users to expect to cast one vote per screen, and so that's what they were inclined to do on the second screen as well.
When you superimpose the first two pages of the ballot, it's clear that the area people looked at to cast their vote for senator was almost the same as the area they looked at to cast their vote for governor.
If the first page had contained two choices, people would probably have looked around the second page more carefully. As it was, 13% looked only at the main part of the page and ignored the small strip at the top.
In fact, it's surprising that only 13% of voters ignored the House election. Banner blindness usually impacts many more people. But of course this was not a Web page, and the "banner" was textual and not a flashing graphic.
The New York Times quoted a Sarasota resident as being "insulted" by the implication that voters are "too stupid to know how to vote." Of course, as with any usability problem, the issue is not stupid users. The problem is that the designer was stupid and violated two well-known usability principles.
In fact, tech-savvy voters are more likely to be hit by banner blindness than people who never use the Internet. Less-skilled users would likely move slowly and hesitantly through the screens and would thus be more likely to spot the area at the top of the second page. So clever voters (or at least computer-using voters) were more likely to have this problem than stupid voters.
A third usability issue impacted the outcome as well: the fact that users had to make their way through 21 screens to cast the full ballot. By the end of such a long interaction sequence, people can't remember what they did in the beginning. This is why people didn't notice that they had not voted for the House.
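One common safeguard against exactly this failure — not part of the Sarasota design as described here, but standard practice in later voting-system guidelines — is a final review screen that flags any contest left blank before the voter confirms the ballot. A minimal sketch in Python; the data structure and contest names are illustrative, not taken from the actual ballot software:

```python
# Hypothetical ballot data: each contest maps to the voter's selection,
# or None if no choice was recorded (an "undervote").
def find_undervotes(ballot: dict) -> list:
    """Return the contests the voter left blank, for a final review screen."""
    return [contest for contest, choice in ballot.items() if choice is None]

ballot = {
    "U.S. Senate": "Candidate A",
    "U.S. House": None,          # overlooked in the sliver at the top of page 2
    "Governor": "Candidate B",
}
print(find_undervotes(ballot))   # prints ['U.S. House']
```

Surfacing the blank contest at confirmation time would have given voters one last chance to catch what banner blindness made them miss 20 screens earlier.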
Also, having to work your way through 21 pages (as nicely indicated by appropriate use of a progress indicator at the bottom of the screen) means that you are likely to rush through each page a bit faster than you perhaps ought to.
Sadly, long ballots with many choices are an issue beyond the power of the interaction designer. But the bigger the ballot, the more the reason to design its components for optimal usability.
There are plenty of other usability problems with these screen designs. For example, the checkboxes are too far away from the names of the candidates. Also, there is no reason to abbreviate the party names and write them in ALL CAPS. Finally, of course, the visual design leaves much to be desired -- software shouldn't look like DOS these days. (See our two-day course on usability guidelines for application design for more about these types of issues.)
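The checkbox-proximity problem has a well-known fix in screen design: tie each candidate's name directly to its selection control, so the name and the box form one unit and the name itself becomes a touch target. A small sketch in Python that generates such markup — the function, identifiers, and candidate data are hypothetical, shown only to illustrate the principle:

```python
def candidate_row(contest: str, name: str, party: str) -> str:
    """Render one candidate as a checkbox whose label sits directly beside it.

    Using an HTML <label> tied to the input by id makes the candidate's
    name part of the touch target, and the party name is spelled out in
    normal case rather than an ALL-CAPS abbreviation.
    """
    input_id = f"{contest}-{name}".lower().replace(" ", "-")
    return (f'<input type="checkbox" id="{input_id}" name="{contest}">'
            f'<label for="{input_id}">{name} ({party})</label>')

print(candidate_row("house", "Jane Smith", "Republican"))
```

On a touch screen, widening the target this way matters even more than on the Web, since fingers are far less precise than mouse pointers.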
Florida also had a usability problem that impacted a major election in 2000: the infamous "butterfly ballot."
You would imagine that Florida would recognize by now the need for usability when designing these walk-up-and-use user interfaces. But no. The 8-person committee of academics the Secretary of State hired to investigate the 2006 ballot didn't contain a single expert on human-computer interaction.
This committee went to great lengths investigating the possibility of voting fraud and software manipulation. But there's no reason to suspect anything this complicated when a simple explanation is readily available to anybody who understands human behavior in interactive systems.