Monterey, CA, May 3-7, 1992
About 2,600 software and user interface professionals attended the ACM CHI'92 conference (computer-human interaction) in Monterey, CA, May 3-7, 1992, making it the largest user interface conference ever. A majority of the attendees were practitioners, though most researchers in the field were also there. The first CHI conference was held in 1983, when user interfaces were beginning to become more than an afterthought in system design. Since 1985, the CHI conferences have been annual, and they are considered the major event in the user interface field.
The CHI conferences are always an incredibly intense experience, and I put in about 80 hours during the five days of the conference. Luckily, most of these hours were extremely fun. Even so, it is impossible to report on all CHI events in a short article, especially since there were about twelve parallel events going on at any given time, making it physically impossible to experience the entire conference no matter how much time one spent.
This year, my impressions from the CHI'92 conference can be summarized under the slogans "inventing the future" and "moving the interface off the flat screen." One of the many information technology companies advertising user interface job openings at the conference even did so under a heading stating that the best way to predict the future is to invent it by coming to work for them. Actually, the job bulletin boards formed the basis for an interesting observation. Shortly before the conference, Sun Microsystems had headhunted most of a highly respected user interface group from Apple Computer, and both companies advertised for additional usability staff at the conference. Furthermore, several other vendors of advanced or futuristic computers (such as 3-D graphics workstations and pen computers) were recruiting heavily. This leads me to conclude that the days are over when advanced computers could be sold simply on the basis of having more cycles; until recently, most of these vendors got by without separate user interface staff. Instead, the future of computing will be defined by improved usability, as both hardware and most basic software functionality become mere commodities. Of course, cynics might add that elaborate user interfaces are good for some vendors exactly because they soak up cycles. But I prefer to believe that all these new user interface groups will result in increased usability in the years to come.
Other indications that user interface people are inventing the future of computing came from the heavy presence of art, entertainment, and consumer products at the conference. Certainly, a major futuristic element was a panel of science fiction authors discussing the potential future of computing as represented by their novels. The panel was actually somewhat disappointing in that most of the panelists seemed to be better writers than speakers. Large parts of the discussion centered on the potential benefits and problems of having a direct brain-to-computer link. Even though this might be seen as the ultimate user interface, Rudy Rucker was hesitant to let a "crazed hacker" drill a hole in his skull, given the consequences of a system crash as long as the brain cannot accept a cold reboot. The moderator, Aaron Marcus, made a good point in his opening statement by describing science fiction stories as prototypes of advanced user interfaces, mocked up using the simple virtual worlds technology of words on paper. One important issue implicit in Vernor Vinge's panel statement was the problem of dealing with an ever-accelerating pace of technological change, where user skills may become obsolete almost as soon as they are learned.
Another heavy dose of art was found in the interactive experience part of the conference, where attendees could get hands-on experience with far-out interfaces. Some were almost too far out, like the "It's a Scream" interface, which users controlled by screaming at the top of their lungs. The louder you screamed, the more the system responded. This is obviously an overly simplistic, one-dimensional interaction technique, but from an artistic perspective, it did result in giving the users a strong bodily feeling of being physically engaged in the interface, thus increasing the sense of personal involvement. Other interactive experiences involved three-dimensional interfaces: One system scanned the user's head and generated a 3-D graphics model that could be manipulated on the screen. The designers had the audacity to ask the users to sign copyright releases allowing potential future use of their heads, but there were still long lines of people wanting to try the experience. Another system coupled "traditional" virtual reality eye-goggles with virtual acoustics played through headphones. The 3-D sound effects allowed users to locate objects in space even when they were behind them.
The main entertainment element of the conference was the interactive performance, where dancers, musicians, a storyteller, and a clown performed while using computers. The three dancers illustrated a trend, also seen in mainstream interfaces, towards disconnecting the user interface from the flat screen and having it relate to the physical world. All three dancers had computers generate music as a result of their movements, which thus served as "input devices" to the system. Leslie-Ann Coles was observed by a video camera feeding gesture recognition, Chris Van Raalte was wired with electrodes on his skin to sense muscle contractions, and Derique McGee wore a data-suit that could sense when he slapped his body. The most impressive moment of the performance was perhaps the contrast between the quiet gesture of Coles blowing a lock of hair away from her forehead and the almost aggressive music generated by that movement. The CHI interactive performance probably represents the first time a computer conference was reviewed by the theater critic of a major newspaper (the San Francisco Examiner).
A further example of the movement of the user interface away from the screen was a paper on the digital desk presented by William Newman and Pierre Wellner. The user's regular desk is observed by the computer through a camera mounted in the ceiling as the user works with regular pieces of paper. When the user gestures to characters on some paper in a special way, the computer performs optical character recognition on the camera image of the paper and acts on the information. Output can be displayed on the same paper from a projector mounted next to the camera. One example interaction had the user gesture at a column of numbers in an expense report to have the system calculate the sum and project the result at the bottom of the column. As another example, a user reading a foreign language text could point to a word and get the dictionary definition displayed, thus making any printed text into a kind of hypertext.
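The sum-a-column interaction can be sketched in a few lines of code. This is a purely illustrative sketch: the camera and OCR stage of the DigitalDesk is stubbed out with already-recognized strings, and the function name and noise handling are my own assumptions, not from the Newman and Wellner paper.

```python
# Hypothetical sketch of the DigitalDesk "sum a column" interaction.
# In the real system, a ceiling camera captures the paper and OCR
# produces the text; here that stage is simulated by a list of strings.

def sum_column(ocr_lines):
    """Parse the OCR'd text of a column of numbers and return their sum,
    formatted as it might be projected back onto the paper."""
    total = 0.0
    for line in ocr_lines:
        # Tolerate common artifacts such as currency signs and commas.
        cleaned = line.strip().lstrip("$").replace(",", "")
        if cleaned:
            total += float(cleaned)
    return f"{total:.2f}"

# Example: the user gestures at this expense column with a pen...
recognized = ["12.50", "$3.75", "1,020.00"]
result = sum_column(recognized)  # projected at the bottom of the column
```

The interesting design point is that the paper itself remains the interface: input is the user's gesture over physical ink, and output is projected onto the same sheet.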
Other systems kept the screen but changed the feeling of the interaction to that of everyday physical objects. Hiroshi Ishii and Minoru Kobayashi presented the ClearBoard where two users can collaborate through screens displaying overlaid images of computer-generated graphics, drawings made by either user, and a video image of the other user. In contrast to other collaborative systems where the remote user is shown in a separate video window, eye contact is not lost with the ClearBoard, and a user can even recognize what the other user is looking at on the screen during a conversation. Extending the trend towards pen-based computing, and also moving the computer away from the traditional desk-bound screen, Scott Elrod and eleven co-authors presented the Liveboard, which is a computerized whiteboard with a back-projected large screen. Users could write on the Liveboard with markers that could also be used for popping up menus and making selections from projected text and graphics.
At first sight, some of these systems, and especially the interfaces for the clown and the dancers, may seem irrelevant to practical applications of computers. I still believe that they indicate important trends in user interfaces that will have major economic pay-offs in the entertainment and home computing sectors. Even more traditional computer uses may evolve to include some of these novel interface techniques. The conference did present several examples of bridging interface designs that included some elements of the avant garde interfaces and also had fairly immediate applicability. For example, Will Hill and Jim Hollan showed a video of the use of innovative pointing techniques in the visualization of telephone company data. During a very complex animation of a large dataset, a conceptually simple technique of overlaying the data with a semi-transparent plane caused datapoints that protruded from the plane to attract attention, allowing users to investigate the events causing the exceptional data. Traditional filtering would have been an easy way to show only the exceptional data, but by keeping the total dataset visible in muted colors behind the semi-transparent plane, the users could better understand the exceptional events in the context of the other events and an animation of changes in the total underlying system.
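The contrast with traditional filtering can be made concrete: instead of discarding unexceptional points, every point stays visible, but only those protruding above the plane are rendered at full intensity. The function and style names below are illustrative assumptions, not taken from the Hill and Hollan system.

```python
# Hypothetical sketch of the semi-transparent plane idea: classify every
# datapoint rather than filtering, so exceptional points stand out while
# the rest remain visible in muted colors as context.

def overlay_plane(points, plane_height):
    """Return (point, style) pairs for points given as (x, y, value)
    triples: points protruding above the plane are highlighted, all
    others are kept but drawn muted."""
    return [
        (p, "highlight" if p[2] > plane_height else "muted")
        for p in points
    ]

data = [(0, 0, 1.2), (1, 0, 8.5), (2, 1, 0.4)]
styled = overlay_plane(data, plane_height=5.0)
# Only (1, 0, 8.5) protrudes and gets the "highlight" style.
```

A filter would return only the one exceptional point; the overlay keeps all three, which is exactly what lets users read the exceptions in context.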
Another example of a concept bridging advanced and current interface techniques was a paper on the pile metaphor presented by Richard Mander, Gitta Salomon, and Yin Yin Wong. The basic idea was to replace the orderly file folders in current graphical user interfaces with something more closely reflecting the way many people actually deal with paper in their office. Given the way my desk looks right now, I certainly sympathize with the idea of allowing users to collect documents in piles rather than file folders. In their prototype interface, Mander and colleagues represented documents by miniatures of their first page rather than by standardized icons, and users could gather many such miniatures in a pile. Users could access the contents of a pile by several methods, including edge browsing (looking at the edges of each miniature document when they were piled up), temporarily spreading out the pile on the screen to make each miniature fully visible, and finally selecting a single miniature for a closer look. These browsing mechanisms were designed after a study of how people use piles of documents in their physical offices, so the paper was also an example of the use of appropriate field studies of previous technology.
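The three browsing methods map naturally onto a small data structure. The sketch below is my own minimal rendering of the concept, with hypothetical names throughout; the actual prototype by Mander, Salomon, and Wong was of course a full graphical interface with first-page miniatures.

```python
# Illustrative sketch of the pile metaphor: a pile is an unordered-feeling
# stack of documents supporting the three browsing methods described in
# the paper. All class and method names are hypothetical.

class Pile:
    def __init__(self):
        self.documents = []  # bottom of the pile first

    def add(self, title):
        """Toss a document onto the top of the pile."""
        self.documents.append(title)

    def edge_browse(self):
        """Peek at the stacked edges, from the top of the pile down."""
        return list(reversed(self.documents))

    def spread_out(self):
        """Temporarily lay out every miniature fully visible."""
        return list(self.documents)

    def select(self, title):
        """Pull a single document out for a closer look."""
        return title if title in self.documents else None

inbox = Pile()
for doc in ["memo", "draft", "receipt"]:
    inbox.add(doc)
```

The key contrast with a file folder is that a pile requires no naming or filing decision at storage time; organization is deferred to browsing time, which matches how people use physical piles.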
A panel organized by Bob Mulligan presented two very different hypothetical development projects (space station software and a project planner) and asked several usability engineering specialists what they would do about them. Even though the panelists (Mary Dieli, Dan Rosenberg, Carrie Rudman, and myself, as well as Steve Poltrock as the discussant) had different backgrounds and came from quite different companies, they agreed on most of the fundamental issues for the two projects. This basic agreement may reflect a maturing of usability engineering, even though there are still many details that remain to be resolved, and where the panelists disagreed. One of the interesting statements made by one panelist was that the usability budget basically did not matter for a personal computer product like the project planner, since it was dwarfed by the advertising budgets in any case. What mattered more was for a software supplier to maintain a reputation for delivering high-quality products.
When writing a conference review, it is tempting to single out specific papers as being the best of the conference. Unfortunately, due to the many parallel events, I attended too few of the papers for this to be really fair. As a general observation, the quantity and especially the quality of Japanese contributions was up significantly, indicating the increased emphasis on user interfaces in that country.
Two papers should be mentioned for providing useful results to back up folklore in the field. Brad Myers and Mary Beth Rosson had surveyed 74 software projects with a mean code length of 132,000 lines and found that an average of 48% of the code was devoted to the user interface. With respect to implementation time, applications using a bitmapped display spent 56% of their implementation time on the user interface, whereas applications using character displays spent only 33%. This difference again indicates how the value added in newer software is increasingly in the user interface. It also demonstrates the need for better software tools for the implementation of graphical user interfaces, and there were several papers proposing such tools at the conference.
The other paper investigating usability folklore was by Erik Nilsen, HeeSen Jong, Judith Olson, and Peter Polson. They had studied the design issue of whether to provide multiple interaction methods for achieving the same goal. Some interface designers believe in the "less is more" philosophy and prefer simple interfaces to avoid confusing the user. Others believe that more features and more user choice lead to more powerful interfaces. The study mostly supported the first school of thought, and showed that even though a large number of alternative interaction techniques can provide optimized methods for special cases, users spend so much time choosing between the methods that the total time to perform a task ends up being longer. Also, users do not always choose the optimal method for their specific circumstances. As a rough guideline, Nilsen and colleagues recommend that designers only add alternative interaction techniques if they can save at least four keystrokes (or similar interaction units in graphical interfaces), since just having to decide between alternatives can cost the user a second or more. They also provide a more detailed cost-benefit model for calculating such trade-offs.
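The core of the guideline is a simple break-even calculation: an alternative method pays off only when the keystrokes it saves outweigh the decision cost it adds. The sketch below uses illustrative timing constants of my own choosing; the paper's actual cost-benefit model is more detailed and calibrated from their data.

```python
# Rough sketch of the Nilsen et al. trade-off. The timing constants are
# assumptions chosen so the break-even point lands near the paper's
# rough guideline of four keystrokes; they are not the paper's values.

SECONDS_PER_KEYSTROKE = 0.28   # assumed time per keystroke
DECISION_COST_SECONDS = 1.0    # "deciding can cost a second or more"

def worth_adding(keystrokes_saved):
    """Does the time the alternative method saves exceed the extra
    time spent deciding which method to use?"""
    time_saved = keystrokes_saved * SECONDS_PER_KEYSTROKE
    return time_saved > DECISION_COST_SECONDS

# With these constants the break-even matches the four-keystroke rule:
worth_adding(3)  # False: saves 0.84 s, costs 1.0 s to decide
worth_adding(4)  # True: saves 1.12 s, beats the decision cost
```

The striking implication is that a "feature" can have negative value even when it is strictly faster for some task, because its cost is paid on every decision, not just when it is used.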