The Macintosh was introduced January 24, 1984. In fact, the Mac was originally manufactured in the Fremont, California building that now houses Nielsen Norman Group.
The Mac didn't pioneer any individual user-interface innovation. Its most prominent feature, the mouse, had been invented by Doug Engelbart in 1968. That the mouse took 16 years to move from the lab to popular use is a striking example of how slowly things move in the tech business, particularly when it comes to bringing divergent designs into widespread use.
(Admittedly, the original mouse was not especially appealing: As I have experienced first-hand, the initial model was a heavy brick with an awkward-to-push button.)
The Mac's graphical user interface — characterized by windows, icons, menus, and a user-controlled pointer (that is, WIMP) — was also not new.
Before the Mac, my GUI projects used a PERQ workstation from Three Rivers Computer Corporation. Among other things, we conducted user testing to find the best mental model for controlling the display when there was more information than a single screen could hold. Our findings? To view additional content in a long document, people think of a "down" operation, so a downward-pointing arrow is the best choice. This is unlikely to surprise today's scrollbar users, but because the screen image actually moves up when users scroll toward the end of a document, the study's outcome wasn't obvious in advance.
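The viewport mental model behind that finding can be sketched in a few lines of code. This is a hypothetical illustration, not from the study: the document is a list of lines, and pressing "down" advances the viewport's offset toward the end, which makes the topmost line disappear off the top of the screen even though the content physically moves up.

```python
# Minimal sketch of the scrolling mental model (illustrative, not from
# the study): a viewport is an offset into a list of document lines.

document = [f"line {i}" for i in range(10)]
VIEWPORT_HEIGHT = 3

def visible(offset: int) -> list[str]:
    """Return the lines on screen when the viewport starts at `offset`."""
    return document[offset:offset + VIEWPORT_HEIGHT]

# The user presses the down arrow: the offset increases, so "line 0"
# scrolls off the TOP of the screen even though the user moved "down".
before = visible(0)  # ["line 0", "line 1", "line 2"]
after = visible(1)   # ["line 1", "line 2", "line 3"]
```

The "down" label matches the user's goal (seeing content further down in the document) rather than the direction the pixels move, which is why the downward-pointing arrow tested best.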
GUI guidelines are now well established, and modern application designers can simply follow existing best practices. But all these guidelines had to be discovered through early experiments with graphical interactions. This early UI research happened at PARC and other places; some of it even at Apple, in the Lisa project.
Going beyond such research, the Mac offered 3 breakthroughs:
The features were integrated: Users got them all in one package, rather than having to accumulate far-flung innovations. This was a case where the whole was much greater than the sum of its previously scattered parts.
The GUI was the platform's expected foundation, rather than an optional add-on. In fact, early Macs didn't even have cursor keys, so applications had to be mouse-driven — and a mouse shipped as standard with every Mac. Although users could buy mice for many other computers (Microsoft's mouse was launched the year before the Mac), most of their apps remained character-based for years because the GUI wasn't the expected UI and designers couldn't rely on users having a mouse.
It created a human-interface standard that independent software vendors had to follow in order to have their applications deemed "Mac-like." Because the resulting consistency reduced the learning burden for new applications, users were willing to buy more software. And indeed, Mac users purchased about two applications more per computer than DOS users did.
As is often the case, pure innovation was less important than making the new stuff work well.
Triumph or Defeat for Usability?
During its first decade, the Mac offered clearly superior usability compared to competing personal computer platforms (DOS, Windows, OS/2). Not until Windows 95 did the PC start to approximate Mac-level usability.
Despite this Mac advantage, PCs have outsold Macs in every single year since 1984, and the Mac has never exceeded a single-digit market share.
The Mac's miserable marketplace performance seems to pose a strong argument against usability. Why bother, if it doesn't sell?
The counter-argument is that usability is the only reason Mac survived. Compared to the PC, it was much more expensive, had only a fraction of the specialized applications, and was cursed by Apple's business-hostile attitude.
So why would anyone pay more for less? Because Macs were easier to use.
Even so, the Mac's modest commercial success emphasizes the importance of the total user experience: The PC had more specialized applications and a broader support ecosystem. Price also matters tremendously. In the 1980s, Macs were more expensive (and thus sold fewer units), largely because of their fancy high-resolution display.
Today, there's no conflict between cost and usability. For websites, it's often cheaper to design for higher usability, because doing so emphasizes simplicity and interaction standards over bloat, made-up menu items, and dialog elements. For software applications, it often costs the same to design something good as something bad. (And usability's ROI is high, especially for websites, which users simply leave if the UI is difficult.)
For sure, given usability research's laughably low cost relative to any serious development budget, there's no excuse not to find out what works for your customers. Once you know, it usually doesn't cost any more to implement usability findings than it does to invent something that won't work as well. And, because usability integrates just fine with Agile development methods, it won't delay your launch, either.
In 1995, Don Gentner and I developed the Anti-Mac user interface by reversing every one of the main Apple Human Interface Guidelines. Although we were both Mac fans, we didn't think its early '80s design would scale to meet the demands of the Internet era.
The basic Anti-Mac principles focus on:
The central role of language
A richer internal representation of objects
A more expressive interface
Expert users
Shared control
Certainly, with today's heavily search-dominant users, the central role of language has started to happen. Indeed, the mobile usability studies we're running right now seem to indicate even more search dominance when users access websites through their phones.
A more expressive interface is slowly evolving through the use of thumbnail miniatures in both Vista and OS X and the ribbon UI introduced by Office 2007 and used by several recent applications.
Shared control is a feature of many social networking sites, where a user's personalized page is constructed as a patchwork of other users' contributions. Still, we've yet to see the significant computer-driven-agent contributions that Anti-Mac envisioned.
Richer internal representations of objects might be a dream of the semantic Web movement, but that movement still hasn't gained much traction in the real world. The same goes for the Anti-Mac emphasis on expert users. In fact, the Web has strengthened the importance of the initial user experience, since most people visit any given Web page only once.
So far, the Mac is holding up better than the Anti-Mac, though I still think many of the Anti-Mac ideas will have their day.
(As an aside, the Anti-Mac project showcases a nice vision engineering technique: As a thought experiment, construct a system that does the opposite of the norm. For example, a newspaper site without news or an e-commerce site that doesn't ship any products.)
iPhone = the Mac of Mobile?
History is now repeating itself. Just as Apple popularized the GUI on the desktop through the Mac, it's popularizing the GUI on mobile devices through the iPhone.
The mouse let users own the cursor, giving them direct influence over the UI by acting as their personal on-screen representative. Similarly, the touch screen lets users directly manipulate UI elements on a mobile device. Our current testing of how mobile users access websites shows how unpleasant it is for people to repeatedly press buttons to move around the screen. Featurephones, and even otherwise nice smartphones operated through buttons, offer an indirect user experience that feels less empowering than touchphones do.
I just hope we won't repeat all of history: Let's not wait 11 years to embrace better usability for mobile the way we did for PCs. And you shouldn't just copy the Apple design's surface manifestation (the touch screen now, the mouse then). You should also offer:
a smooth GUI,
an integrated user experience (including a clipboard or other cut/copy/paste mechanism, which Apple paradoxically doesn't offer on the iPhone even though it was one of the Mac's most important features),
a platform that uses direct manipulation to give users ubiquitous control, and
compliance with usability guidelines.