CHI'90 Trip Report

by Jakob Nielsen on June 1, 1990

The conference was held in Seattle, WA, 1-5 April 1990. The conference proceedings can be bought online.

HCI as a Profession

CHI'90 was the largest CHI conference yet with 2,300 participants: up 39% from 1,650 the year before. As a matter of fact, the growth of human-computer interaction as a profession was an important theme in itself this year. SIGCHI has been the fastest growing ACM special interest group for several years and continues to retain this distinction.

Traditionally, HCI specialists have graduated with "regular" computer science or psychology degrees and have acquired the necessary HCI-specific skills gradually on the job. Now, several universities have established special HCI programs aimed at our new profession. For example, I visited the University of Toronto after CHI and found that the computer science department had just had a B.S. HCI degree approved. As another example, the Scottish HCI Centre at Heriot-Watt University in Edinburgh will initiate an M.S. program in HCI on 1 October 1990. Also, Michael Dertouzos from MIT took advantage of his keynote talk (see below) to announce that MIT has endowed a user interface professorship with two million dollars from the X Windows consortium. This development of HCI from a job performed by graduates of other fields to an independent profession is historically very similar to the development of computer science as a discipline. Computer science also started as an offshoot of mathematics, electrical engineering, physics, and similar disciplines.

Terry Winograd from Stanford University gave an invited talk discussing how to teach human-computer interaction. This should not just include the user interface itself but also the work structure and how it is changed by the use of computers. Winograd saw HCI as a design discipline where the focus is on what we can build. This is in contrast to traditional engineering where one more or less knows what one wants to build (say, a bridge), and the focus is on how to build it using underlying systematic principles. Therefore, we should teach HCI in similar ways to those used in other design disciplines. Winograd's new course is based on having small groups of students actually design prototype interfaces that serve the same purpose as studio models serve for architects.

Another indication of the growth of HCI in relation to its constituent disciplines of computer science, psychology, and human factors was that the Ergonomics Society has decided to allow their members to choose between the traditional Ergonomics journal and the HCI-oriented Behaviour and Information Technology journal as their membership journal. This was announced at an interesting panel on HCI journals chaired by Jack Carroll from the IBM Watson Center. The panel showed that we now have five HCI journals (Behaviour and Information Technology, the Human-Computer Interaction journal, Interacting with Computers (the British Computer Society's journal), the International Journal of Human-Computer Interaction, and the International Journal of Man-Machine Studies) plus even a secondary publication with abstracts from the other journals. The older journals typically have slightly more than a thousand subscribers whereas the newer journals only have a few hundred at the moment. Because of these low subscription figures, it seems that the CHI conference proceedings (distributed to about 6,000 people) have taken over the role of archival publication normally served by journals in other professions. As an example, I have started seeing newsnet messages of the type "please give me some good references about such-and-such HCI topic" where the posters state that they have already checked the collected CHI proceedings themselves.

As a further example of the change of publishing focus from the original disciplines to the new one, the softcover edition of Don Norman's entertaining book The Psychology of Everyday Things has been retitled The Design of Everyday Things, even though the new acronym is less catchy than POET. The rise of user interface design as a new discipline is seen by some as reducing the "purity" of the work presented at the CHI conference, and some grumbling could be heard from traditionalist psychology and computer science proponents.

My final example of the view of HCI as a separate profession comes from the panel on technology transfer chaired by Keith Butler from Boeing. Chuck Price from Boeing Computer Services described his need as a line manager to know what skills to look for when hiring HCI staff and complained that these skills had not been defined well enough. He came to the CHI conference to get some skills and methods to use in his project, and he emphasized that his focus was on the development process and people development rather than on necessarily getting the latest widget. He felt the need to have people trained in the multiple necessary HCI skills as a single discipline rather than having to put together teams of many diverse specialists.

The panel also discussed the bottom line impact of using usability engineering. John Thomas from NYNEX mentioned one case where they shaved a few seconds off the interaction time for a certain transaction. Because of the huge number of users of their product, this productivity improvement can be shown to translate into hundreds of millions of dollars, but unfortunately that money is never collected in one place and seen by management. So it is hard to really prove that the use of our methods is actually worth something to the company. Also, their users (telephone operators and residential customers) are very different from the college sophomores used in most usability research, so he was not certain that the external validity was good enough to transfer the state of the art in user interface design with no changes from the academic environment to NYNEX.

The other side of technology transfer is to transfer practical considerations and experiences to the academic research environment. Doing so can be just as hard as the transfer of research results to development projects. For example, David Kieras from the University of Michigan was "aching" for access to a real design process under appropriate conditions. Mostly, development projects are so focused on shipping their product that they have no time to collaborate with researchers wanting to study them. John Bennett from IBM pointed out that the very difference in time scales between the two environments is a major inhibitor for collaboration. The academic focus is on writing papers that may be published years later, and the time scale is often that of a Ph.D. thesis project. Even so, the panel did provide hope for technology transfer enthusiasts: Kieras emphasized the thrill he gets every time somebody calls him up and wants to use some element from one of his papers. And Price's company was willing to commit some resources to long-range projects with some risk.

Non-Command-Based Interaction Paradigms

In spite of Chuck Price's lack of interest in the latest widgets, it still seems to me that this year's conference theme was symbolized by a collection of somewhat peculiar widgets. For the first time in several years, the CHI conference was permeated with breakthroughs in the underlying technology, and I felt that the main conference theme was the emergence of the next-generation interaction paradigm. And this was about time too, given that the last several conferences had been mostly dominated by current-generation WIMP interfaces. (WIMP = Windows, Icons, Menus, and a Pointing device. The acronym was probably invented by a real hacker who does not eat quiche.)

Anyway, the fifth-generation user interface paradigm seems to be centered around non-command-based dialogues. This term is a somewhat negative way of characterizing new forms of interaction, but so far, the unifying concept does seem to be exactly the abandonment of the principle underlying all earlier interaction paradigms: That a dialogue has to be controlled by specific and precise commands issued by the user and processed and replied to by the computer. The new interfaces are often not even dialogues in the traditional meaning of the word, even though they obviously can be analyzed as having some dialogue content at some level since they do involve the exchange of information between a user and a computer.

The principles shown at CHI'90 which I am summarizing as being based on non-command-based interaction paradigms are eye tracking interfaces, artificial realities, play-along music accompaniment, and agents.

Eye Tracking

Eye tracking has long been an esoteric and very expensive technique but this year, a quite practical system was part of the conference demos. I tried it for regular eye tracking applications such as "typing" by looking at a picture of a keyboard, but I have to admit that I did not find it very pleasant to type in this way (even though the technique is of course great for many handicapped users). A more convincing demonstration of eye tracking was a paddleball video game where I controlled the paddle by eye tracking. I could just look at the spot on the screen where the ball was going to hit, and presto! The paddle was right under the ball. I am normally not all that good at video games but I kept this paddleball game going for a long time. I would call this a non-command-based interface because I was not consciously controlling the paddle; I was looking at the ball and the paddle automatically did what I wanted it to do. In contrast, even a direct manipulation video game would involve some kind of command to move the paddle as such (for example by moving the mouse). The difference is one of the level of the dialogue: In the eye tracking paddleball you look at the ball and the paddle keeps up by itself, whereas you have to tell the computer to move the paddle left and right in the direct manipulation paddleball game. Therefore, your focus of attention remains on a higher level in the eye tracking version of the game.
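
As a minimal sketch of this distinction (my own illustration with assumed function and variable names, not code from any of the demos), compare a gaze-driven paddle with a mouse-driven one:

# Sketch of the difference between gaze-driven (non-command) and
# mouse-driven (command-based) paddle control. The sampling values
# passed in are hypothetical stand-ins for whatever the real systems used.

class Paddle:
    def __init__(self):
        self.x = 0

def update_paddle_gaze(paddle, gaze_x):
    # Non-command version: the paddle simply follows wherever the
    # player is already looking; no deliberate "move" action exists.
    paddle.x = gaze_x

def update_paddle_mouse(paddle, mouse_dx):
    # Command version: every paddle movement is an explicit user
    # action (moving the mouse), i.e. a low-level command.
    paddle.x += mouse_dx

if __name__ == "__main__":
    p = Paddle()
    update_paddle_gaze(p, gaze_x=120)     # player just looks where the ball will land
    update_paddle_mouse(p, mouse_dx=-15)  # player issues a "move left" command
    print(p.x)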

Possibly the most interesting paper presentation I attended at CHI'90 was given by Robert Jacob from the Naval Research Laboratory. He was developing special interaction techniques for the new input medium provided by eye tracking but mostly did not want to use eye tracking as the only or main input device. Users do not have full control over their eye movements, and the eyes "run all the time," even when you do not intend to have the computer do anything. Since it is impossible to distinguish times when users mean something by a look from times when they are just looking around or are resting their gaze, Jacob needed to develop suitable interaction techniques for special cases only. For example, it proved possible to move an icon on the screen by selecting it by looking at it and pressing a selection button (to prevent accidental selection) and then looking where it had to go.
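
A rough sketch of this select-then-look technique might work as follows (my own reconstruction with assumed names and a made-up hit radius, not Jacob's actual implementation):

# Sketch of the gaze-plus-button technique for moving an icon: a button press
# confirms the icon currently fixated, and a second press drops it at the
# currently fixated location. All names and thresholds are assumptions.

def icon_under_gaze(icons, gaze_point, radius=30):
    # Return an icon whose position is within the hit radius of the fixation, if any.
    best = None
    for icon in icons:
        dx = icon["x"] - gaze_point[0]
        dy = icon["y"] - gaze_point[1]
        if dx * dx + dy * dy <= radius * radius:
            best = icon
    return best

def move_icon_by_gaze(icons, fixation_at_press, fixation_at_drop):
    # The explicit button press prevents accidental selection by a stray look.
    icon = icon_under_gaze(icons, fixation_at_press)
    if icon is not None:
        icon["x"], icon["y"] = fixation_at_drop
    return icon

if __name__ == "__main__":
    icons = [{"name": "report", "x": 100, "y": 100}]
    move_icon_by_gaze(icons, fixation_at_press=(105, 98), fixation_at_drop=(400, 250))
    print(icons)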

One interesting application was a naval display of ships on a map. The screen also contained a window with more detailed information about the ships, and whenever the user looked from the map window to the information window, the information window contained information about the last ship the user had looked at on the map. This interaction technique is appropriate because no harm is done by updating the information window as the user looks around on the map. Therefore, it does not matter whether a look at a ship is intentional or not. The non-command-based nature of this interface comes from the usage situation: The user goes back and forth between looking at the overview map and the detailed information, and always finds the relevant information without ever having to issue any explicit selection or retrieval commands. Of course, looking at a ship on the map does constitute a command, but it does not feel like a command, since the action can be performed many times without any apparent result. It is only when the user looks at the information window that the result of the information retrieval is made salient to the user.
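
The gaze-contingent update could be sketched roughly like this (again my own reconstruction; the class and method names are assumptions, not details from the paper):

# Sketch of the naval-display technique: looking around the map silently
# updates a "last ship looked at" variable, and the detail window simply
# shows that ship whenever the user glances over to it.

class ShipDisplay:
    def __init__(self, ships):
        self.ships = ships          # {name: details}
        self.last_ship = None       # updated by every fixation on the map

    def on_map_fixation(self, ship_name):
        # No harm is done by updating on every look, intentional or not.
        if ship_name in self.ships:
            self.last_ship = ship_name

    def on_info_window_fixation(self):
        # Only here does the earlier "selection" become visible to the user.
        if self.last_ship is None:
            return "No ship selected yet."
        return f"{self.last_ship}: {self.ships[self.last_ship]}"

if __name__ == "__main__":
    d = ShipDisplay({"USS Example": "destroyer, heading 270, 18 knots"})
    d.on_map_fixation("USS Example")    # the user merely looks at the ship
    print(d.on_info_window_fixation())  # details appear without any command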

Another non-command-based eye tracking system was presented by Richard Bolt from the MIT Media Lab and was based on the patterns of the user's eye movements instead of the individual fixations. The application was a children's story based on the book The Little Prince. The computer screen shows a 3-D graphic model of the miniature planet where the Little Prince lives, and synthesized speech gives a continuous narration about the planet. As long as the user's pattern of eye movements indicates that the user is glancing about the screen in general, the story will be about the planet as a whole, but if the user starts to pay special attention to certain features on the planet, the story will go into more detail about those features. For example, if the user gazes back and forth between several staircases, the system will infer that the user is interested in staircases as a group and will talk about staircases. And if the user mostly looks at a particular staircase, the system will provide a story about that one staircase.

Using this simple interaction technique, the user controls the flow of the narration, thus achieving some kind of interactive fiction effect without any explicit commands.
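
A toy version of such pattern-based topic selection might look like the following (the feature categories, window size, and thresholds are my own assumptions, not details from Bolt's paper):

# Sketch of choosing a narration topic from recent fixation patterns, in the
# spirit of the Little Prince demo: glancing around keeps the story general,
# a cluster of looks at similar features narrows it to that group, and
# dwelling on one feature narrows it further still.

from collections import Counter, deque

class GazeNarrator:
    # Each fixation is tagged with (category, instance), e.g. ("staircase", "north staircase").
    def __init__(self, window=20):
        self.recent = deque(maxlen=window)

    def add_fixation(self, category, instance):
        self.recent.append((category, instance))

    def topic(self):
        if not self.recent:
            return "the planet as a whole"
        categories = Counter(cat for cat, _ in self.recent)
        category, n = categories.most_common(1)[0]
        if n / len(self.recent) < 0.5:
            return "the planet as a whole"              # just glancing around
        instances = Counter(inst for cat, inst in self.recent if cat == category)
        instance, m = instances.most_common(1)[0]
        if m / n > 0.7:
            return f"a story about the {instance}"      # one particular feature
        return f"a story about {category}s in general"  # a group of similar features

if __name__ == "__main__":
    g = GazeNarrator()
    for inst in ["north staircase", "south staircase", "north staircase", "east staircase"]:
        g.add_fixation("staircase", inst)
    print(g.topic())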

Artificial Realities

Several artificial reality systems were shown at CHI'90, including a video of the NASA headmounted display where the user looked quite groggy when returned to the real world by taking off the helmet. The Canadian Mandala system was shown both in a hands-on exhibit and at the official "Empowered" show where Vincent J. Vincent gave a virtuoso performance on virtual drums: The performer waved his hands in thin air in front of a camera and could see himself superimposed on the computer screen with a complete set of bongos. I tried a game of virtual ice hockey where my body was superimposed on an image of the goal. The computer kept throwing pucks at me and I could try to block them without having to worry about broken teeth. Great fun. Both these applications were non-command-based in that the user did not issue any specific commands to the system (except for a few gestures reserved as commands for quitting the applications or for special-purpose actions): Generating music was done by banging the drums, and playing the game was done by reaching for the pucks as they came flying.

Many of the artificial reality systems used the DataGlove as the primary input device and Nintendo showed a version called the PowerGlove which was cheap enough to be sold with regular video games. I tried a three dimensional version of Breakout which was somewhat like squash: I could "throw" the ball on the screen by making a throwing gesture with my hand and on the rebound I could either catch the ball again by grabbing it with the glove or just swat at it to get it to fly off in a new direction. This was a very limited artificial reality and it was not even very real (for example, I had a hard time throwing the ball in the right direction). But it was a genuine product and was going to be sold on the consumer market.

Agents

Brenda Laurel and Abbe Don presented an experimental system from Apple for navigating a hypermedia space with the help of computerized agents. The underlying hypertext covered the history of the United States from 1800 to 1850 with text, images, sounds, animated maps, and video clips. Interestingly, user testing revealed a bias where users felt that those parts of the system that looked like TV were less believable than those parts that looked like books. This is in contrast to many traditional studies showing that television news has greater credibility than newspapers.

To be guided through the system, users could activate a number of agents in the form of videotaped archetypes of the period (settler, trader, Indian, soldier, etc.). A further agent was dressed in modern clothes and represented meta-information about the system itself. This agent introduced herself by saying "when you need help, click on me."

Each agent had a simple process to determine what would be the most interesting part of the hypermedia space to go to given the current location and the perspective of the person represented by the agent. This decision could conceivably be made by an advanced artificial intelligence model of the stereotype represented by the agent, but was in fact made on the basis of simple information retrieval-type similarity ratings. The top of the screen would show several such agents and the ones who had something special to recommend (i.e., gave a high similarity score to some other hypertext node) would raise their hands or even jump up and down to attract attention.
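
A minimal sketch of this kind of similarity-based recommendation, assuming simple keyword profiles for agents and nodes (the profiles, threshold, and overlap measure are my own choices, not the ones Apple used), could be:

# Sketch of simple information-retrieval style scoring: each agent has a
# keyword profile for its perspective, each hypertext node has descriptor
# keywords, and an agent "raises its hand" when some reachable node scores
# highly from its point of view.

def similarity(keywords_a, keywords_b):
    # Jaccard overlap between two keyword sets: 0.0 (nothing shared) to 1.0.
    a, b = set(keywords_a), set(keywords_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def recommend(agent_profile, candidate_nodes, threshold=0.3):
    # Return the best-scoring node and whether the agent should raise its hand.
    scored = [(similarity(agent_profile, kw), name) for name, kw in candidate_nodes.items()]
    score, name = max(scored)
    return name, score, score >= threshold

if __name__ == "__main__":
    settler = ["land", "homestead", "frontier", "farming"]
    nodes = {
        "Oregon Trail": ["frontier", "wagon", "land", "migration"],
        "Bank Panic of 1837": ["finance", "banks", "speculation"],
    }
    print(recommend(settler, nodes))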

These agents represent a form of non-command-based interaction because they allow users to navigate the information space from different perspectives without explicitly having to state their interests. At any given time, the user can choose to follow the advice of a new agent, and the recommendations for the next location are calculated without any intervention from the user and are only shown when the system judges them to be relevant.

A different sort of computerized agent was part of a show called Empowered where computers would play music to accompany human players. From the audience perspective, most of these performances were somewhat disappointing because it really does not matter to the listener whether a given piece of music is being generated by a large number of humans or by a single human plus a computer substituting for the rest of the orchestra. For the performer, however, the ability to have the computer play along must give a real feeling of empowerment. From the interaction paradigm perspective, these systems allowed the user to control the music output of the computer not by regular commands such as the specification of notes and tempo but simply by playing the trumpet, guitar, or some other instrument. The computer would analyze the user's music and play along.
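
As a rough sketch of the underlying idea, an accompaniment program might estimate the performer's tempo from the timing of the notes actually played and schedule its own notes accordingly; this is purely my own illustration, not the system used in the show:

# Minimal "play along" sketch: follow the performer's tempo rather than
# requiring the tempo to be specified as a command.

def estimate_tempo(onset_times):
    # Average interval between successive note onsets, converted to beats per minute.
    intervals = [b - a for a, b in zip(onset_times, onset_times[1:])]
    if not intervals:
        return 120.0                    # fall back to a default tempo
    return 60.0 / (sum(intervals) / len(intervals))

def accompaniment_schedule(onset_times, pattern):
    # Lay out the accompaniment pattern (given in beats) at the performer's tempo,
    # starting from the last note the performer played.
    bpm = estimate_tempo(onset_times)
    seconds_per_beat = 60.0 / bpm
    start = onset_times[-1] if onset_times else 0.0
    return [(start + beat * seconds_per_beat, note) for beat, note in pattern]

if __name__ == "__main__":
    played = [0.0, 0.52, 1.01, 1.49]                  # performer's note onsets (seconds)
    bass_line = [(1, "C2"), (2, "G2"), (3, "C3"), (4, "G2")]
    for t, note in accompaniment_schedule(played, bass_line):
        print(f"{t:.2f}s  {note}")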

So What Else is New?

Admittedly, the individual techniques and gadgets described above were really nothing new. Eye tracking has been used in user interface research for many years, artificial realities were shown at CHI'86 by NASA (an early headmounted display) and Myron Krueger (the Videoplace system) and were the focus of Michael McGreevy's keynote talk at CHI'89, the DataGlove was presented at CHI+GI'87, agents have been studied for many years at the Vivarium project, and play-along computer accompaniment for a flute player was part of the CHI'86 video show.

The main differences were that these developments had now moved from the fringe to the center of the conference and that they had reached a realistic stage with respect to useful applications. I have always found futuristic user interfaces interesting and I have indeed reported on them in my coverage of earlier CHI conferences. This year, however, I became convinced that non-command-based interfaces were not just smart tricks but that they would form a major part of future interactive systems.

Realistic Usability Engineering

Certainly, CHI'90 had a lot of additional content besides the themes of the user interface profession and non-command-based paradigms. The Director of the MIT Laboratory for Computer Science, Michael Dertouzos, gave the keynote address and echoed last year's conference theme that user interfaces mean business. He stressed the need for us to accept total responsibility for the use of computers and not just to worry about fine-tuning the mouse or such. Dertouzos wanted us to measure the gains in total productivity from our work and to ensure that office worker productivity would grow by at least 3% per year. As a vivid example of current low productivity he reported that he had spent as much as 22 hours using current desktop presentation software to produce the slides for his one-hour talk.

Further examples of pragmatic work aimed at improving current practice were the two methods for simplified user interface evaluation presented by Clayton Lewis from the University of Colorado and myself. Both methods involved evaluating a user interface on the basis of simply looking at it instead of relying on user testing (which is seen as expensive by some practitioners) or formal techniques (which are seen as intimidating by most practitioners). Lewis' technique was a theory-based walkthrough where the evaluator steps through an interaction with the system and fills in a checklist for each dialogue state, and it can therefore be said to be semi-formal. In contrast, the heuristic evaluation technique presented by myself is completely informal and relies on the evaluator's general knowledge of established usability principles for the discovery of usability problems. Both methods seemed to work quite well, and the best results were actually achieved using Lewis' walkthrough method. Unfortunately, the two papers cannot be directly compared as Lewis and his team had used themselves as the test evaluators for their own method, whereas we had used regular software developers and students as our test evaluators. Therefore, the superior performance reported by the Lewis team might only be due to their personal abilities and deeper understanding of usability issues and not to any quality of the method itself.

To round out the session on usability methodology, Marcy Telles from WordStar International reported on the issues involved in updating an older interface to handle modern usability requirements. She mentioned that the word processor companies had been involved in a "features war" until recently, but that they were now conducting an "ease of use war" instead since almost all word processors now include all the features most users could dream of anyway. Therefore, WordStar has been updated from its classic command-key based interface to a modern, menu-based interface. Unfortunately, changing an existing product is much harder than designing a brand new one, as one has to keep the installed user base happy by allowing them to keep using the obscure key combinations that they have grown to love. For example, WordStar released a product called WordStar 2000 with a mnemonic command set which was presumably much better than the old command set, but even so, users did not migrate from the old system to the new one.

Customized Buttons

Allan MacLean and Kathleen Carter from Xerox EuroPARC presented a paper on user-tailorable buttons that could encapsulate some command or other action and have a visible presence on the screen. The first, simple level of customization was to have parameters for some buttons, such as giving a print button a parameter for the number of copies to be printed. In most systems, the next level after parameterization would require users to program their own functions, possibly using a semi-task-oriented macro language such as those found in spreadsheets. The Buttons system does allow users to program any possible Lisp function, but it also includes facilities to enable users to progress along a less steep learning curve towards greater customization.

One such facility is situated creation where the Buttons subsystem captures some properties of the total system into a button. For example, the user can capture some phrase from a text processing application in a button which will then insert the phrase whenever it is pressed. As another example, it is possible to create a button in the window system which will reestablish a window at a later date no matter where it came from.

Users can also copy and email buttons among themselves, thus leading to further customization of the individual user's work environment as that user collects relevant buttons. The buttons provide easy access to many attributes which can be changed without any need to access the underlying code. The code can also be edited to provide new buttons with slightly changed functionality. Sometimes, users who did not know Lisp programming were in fact able to modify buttons copied from other users because they could isolate relevant text strings in the code and change them.
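
The flavor of the system can be suggested with a small sketch (in Python rather than the Lisp of the real system; every name here is my own illustration): a button bundles an action with visible attributes that can be changed, or the whole button copied, without touching the code itself.

# Sketch of the Buttons idea: editable attributes sit outside the code, so a
# copied button can be tailored (e.g. number of copies, a captured phrase)
# without any programming.

class Button:
    def __init__(self, label, action, **attributes):
        self.label = label
        self.action = action              # a function taking the attribute dict
        self.attributes = dict(attributes)

    def press(self):
        return self.action(self.attributes)

    def copy(self, **changed_attributes):
        # "Customization by copying": take someone else's button and change
        # only the exposed attributes.
        new_attrs = {**self.attributes, **changed_attributes}
        return Button(self.label, self.action, **new_attrs)

def print_action(attrs):
    return f"printing {attrs['copies']} copies on {attrs['printer']}"

def insert_phrase_action(attrs):
    # A "situated creation" style button: it simply replays a captured phrase.
    return attrs["phrase"]

if __name__ == "__main__":
    print_button = Button("Print", print_action, copies=1, printer="office-laser")
    my_print_button = print_button.copy(copies=3)   # parameter-level tailoring
    signature = Button("Sig", insert_phrase_action, phrase="Yours sincerely")
    print(my_print_button.press())
    print(signature.press())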

Videos

The video show was mostly somewhat boring, maybe because many of the more advanced systems were shown live this year instead of being in the video show as they usually are. Notable videos included a programming-by-demonstration system called Metamouse by David Maulsby from the University of Calgary and a rule-based system by George Furnas from Bellcore where the production rules were bitmapped graphic illustrations of how elements of the screen looked before and after activation of each rule. Some aspects of user interfaces seem to be easier to describe graphically than by traditional textual rules.

A tape from Xerox filmed in 1982 showed their Star interface and was mostly of historical interest. Some of the operations in the interface seemed slightly awkward in retrospect, such as having to move icons on the desktop by pressing a move-key on the keyboard and then clicking the mouse where the icon is to go. This interaction technique did not use direct manipulation on the syntax level of the dialogue, even though it obviously was direct on the lexical level (specifying the location by a mouse-click, and specifying the command by pressing a dedicated key). To compensate for the lack of smoothness in the interaction technique, it was at least a generic command consistent with the way move actions were accomplished in the rest of the interface. (In other systems, different methods are used for moving files in the operating system and data in the applications; this can lead to trouble for novice users trying to transfer skills from one part of the total computer system to another.) Furthermore, the availability of separate move and copy keys avoided the difficulty of distinguishing between these two operations commonly seen in systems where the complete commands are specified by direct manipulation only.

Finally, Brad Myers from CMU had produced a great video entitled All the Widgets with film segments showing the different ways various companies have implemented common interaction techniques like scroll bars or buttons. Since most of these techniques are not documented elsewhere, Myers has made a major contribution to the field, and the tape is a must-have for any faculty member teaching user interfaces and for any user interface design team working on basic interaction techniques. An entire set of student assignments can be derived with almost no effort from the video: Analyze the examples of such-and-such interaction technique: What are the main differences and how do they relate to established usability principles and empirical data? It would also be interesting to develop a hypermedia system for teaching user interface design using the variety of widgets on Myers' tape, but doing so would be a major project.

