CHI'89 Trip Report

by Jakob Nielsen on June 1, 1989

Austin, TX, 30 April-4 May 1989. The conference proceedings can be bought online.

The excitement is back! Even though plenty of interesting things happened at CHI'88, there is no denying that it had a bit of a dull feel to it, as if the user interface field had slowed down somewhat. Well, CHI'89 certainly had no such problem.

Some of the credit probably goes to the conference chair, Bill Curtis himself, who was good at whipping up a good Texan cheer, even though his plan to give every conference participant a cowboy hat failed because the 2000 hats he had delivered turned out to be such poor plastic imitations that handing them out would have been no fun. Another part of the credit is due to pure random coincidence, but there is still some excitement credit left over. It does seem to me that the user interface field is speeding up again after a short period of me-too interfaces. As an example, two nice videos showed polished ways to use handwriting recognition to achieve smooth interfaces. Sure, we have seen handwriting recognition before, but mostly as an isolated technology for getting text into the computer. The "paper-like interface" by Cathy Wolf et al. from IBM Yorktown Heights showed a very nice integration of the recognition technology with a formula editor where users' performance could probably be improved by an order of magnitude compared with non-gestural ways of entering formulas. Mark Rosenstein from MCC had a way of using hand-drawn sketches of proposed user interface designs together with other tools in the HITS Human Interface Tool Set. He called this type of interactive worksurface "silicon paper," but the really interesting point was again the integration of the gestural interaction with other aspects of the interface.

The Great Copyright Debate

One of the reasons people are now designing new user interfaces is that various copyright suits have scared them away from just copying the Macintosh user interface. It is quite new for most user interface professionals to consider the legal implications of our work but it certainly seems to be necessary to do so. To present these issues, Pamela Samuelson from the University of Pittsburgh School of Law had organized a panel debate on the copyright of user interface "look and feel" with two lawyers to present the opposing views.

There are really two questions in the copyright debate: One is whether user interfaces are copyrightable under the current law. And the second is whether they ought to be copyrightable. The first question can be determined empirically in each legal jurisdiction by looking at the outcomes of various lawsuits. One can also argue the cases using levels of legal sophistication varying from the "pocket lawyers" on the Usenet to the quite complicated theories constructed by real lawyers and law school researchers. In the long term it is the second, more political question which is of interest, however, since laws can be changed by acts of Congress and Parliaments around the world. Interestingly, most of the arguments from the two lawyers in the panel were really of a political nature and looked at what copyright principle would lead to the best user interfaces in the future.

Regarding the legal arguments about the meaning of the current law, it seems to me from reading Pamela Samuelson's excellent review of US court decisions in the May 1989 Communications of the ACM that most of these decisions may have had the right outcome but for the wrong reasons. For example, in one case two programs had the same command names which in both cases had the first two characters capitalized (for use as shortcuts). The names themselves could apparently not be copyrighted but having the second program display these names with the same letters capitalized was ruled to be a copyright infringement because there is a large number of ways to capitalize letters. Since the second program capitalized the same letters as the first, it was judged to have copied its screen design. From a usability perspective, however, it seems obvious that one cannot just capitalize a random selection of letters in a command if one wants users to have a chance to remember the shortcuts. Given the basic principle of using abbreviations of listed command names as shortcuts, the designer actually has very little freedom to select the capitalization method, though of course there is the possibility of using vowel deletion as an alternative abbreviation method to truncation.
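
To see how constrained the design really is, here is a minimal sketch (my own illustration; the command names and both abbreviation schemes are invented for the example, not taken from the actual lawsuit) of truncation versus vowel deletion as shortcut methods, with the shortcut letters capitalized in the listed command name:

    def truncation_shortcut(command, length=2):
        """Abbreviate by keeping the first `length` letters."""
        return command[:length]

    def vowel_deletion_shortcut(command):
        """Abbreviate by deleting vowels after the first letter."""
        first, rest = command[0], command[1:]
        return first + "".join(c for c in rest if c.lower() not in "aeiou")

    def display_name(command, length=2):
        """List the command with its truncation shortcut capitalized, e.g. 'PRint'."""
        return command[:length].upper() + command[length:]

    for cmd in ["print", "insert", "delete"]:
        print(display_name(cmd), truncation_shortcut(cmd), vowel_deletion_shortcut(cmd))

Once the command names and the abbreviation scheme are fixed, the capitalization follows almost automatically, which was exactly the usability point.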

Many user interface specialists would probably have testified as expert witnesses that such examples of user interface design are not just based on stylistic decisions but are founded in real human factors issues with impact on the products' utility, which apparently is not copyrightable. It could be that one would want to protect specific solutions to these usability issues, such as having a certain set of command names or even the idea of using abbreviations as shortcuts. That would be a fair way to reward people who invent good new dialogue techniques.

The meaning of the current law is certainly murky, and it was therefore quite nice for the CHI audience that the lawyers in the panel debate did not spend too much time on it. Jack E. Brown was the lawyer in favor of strong copyright protection, which he felt would foster a creative environment for user interface development since people would want to come up with new ways of doing things. Quoting Bruce Tognazzini, Brown said that fixed standards would seriously impede innovation, and he preferred having incentives for change. When people could not just copy previous work, they would have to come up with something new and thus move the state of the art ahead. He felt that there was so much work involved in creating an entire user interface that others should not be allowed to simply rip it off. Also, one could not limit copyright protection to 100% copies, since that would encourage evasion, but on the other hand it should be no problem to reuse a single icon. In a comment to this, Annette Wegner from Apple said that she certainly did want protection for her icon designs, just as people get copyright when they create logos.

Thomas M.S. Hemnes presented the liberal view and also argued that his approach would lead to the best user interfaces. He felt that strict copyright protection would lead to slower advances in user interfaces, since user interface design is an accumulative technology where it should be possible to build upon previous advances and refine them. (An example of a non-accumulative technology is the pharmaceutical industry, where a drug which works for one thing will generally have nothing to do with a drug designed for another thing.) As an example of another accumulative technology, Hemnes referred to the aircraft industry, where the only way to build a state-of-the-art aircraft in the US during the First World War was to have the government impose required cross-licensing between the various inventors who each held patents on crucial aspects of flight technology. Hemnes also claimed that only 1% of the value was in coming up with the new idea and that 99% was in implementing it. I certainly agree that duplicating modern interaction techniques takes a lot of work, but I do have more respect than that for the hard work of getting the idea in the first place, which often requires extensive amounts of research and hair pulling. Because copyright infringement is proved on the basis of both access to the previous work and a substantial similarity, Hemnes feared that either new user interface designs would have to be made by people without knowledge of the field or that all designs would be made as different as possible, leading to a risk of user errors. Finally, Hemnes was opposed to the use of copyright for user interfaces since it extends over the lifetime of the author plus an additional 50 years, which is really the same as "forever" in the computer business.

In a rebuttal to this last statement, Brown said that CP/M was once thought to be the only good way to use personal computers, but now, just a few years later, it has been passed by innovation. So a user interface is not forever. Nobody will get a lifetime monopoly on making usable computers just because they get the copyright to a certain dialogue technique.

As an example of the negative impact of copyright, Mike Lesk from Bellcore, who was the panel discussant, referred to the copyright on typefaces in Europe. This had made it difficult to produce laser printers, with the result that the laser printer business is now in the United States, where fonts cannot be copyrighted. In an even more general analogy, Lesk said that the publishing business would not be very popular if every company had to print different sizes of books.

The Specter of Standards

At the same time as everybody was discussing whether or not they will be permitted to use the famous trash can icon, a sub-committee under the International Standards Organization (ISO) is happily working on producing a draft international standard for icons in the user interface by November 1989. Nigel Bevan from the National Physical Laboratory in England showed a poster on the ISO activities in user interface standardization, but unfortunately it did not present the actual icons since they are still under deliberation. The poster was fair enough to also mention the potential disadvantages of user interface standards, which mainly are that standards are premature as long as we do not yet know enough about what makes truly usable products and that a standard may inhibit advances in the field. In spite of this, various committees are about to start cranking out draft standards for comment by the active researchers and practitioners in the field.

One slightly unusual standard is the proposal for a standard for the usability process itself and not for the end result. There may be more promise in such a standard, since we do know quite a lot about which methodologies to use to increase the likelihood of getting good products. And in many cases, we need some form of external pressure to get various companies to actually devote sufficient resources to usability. On the other hand, there are also many other cases where non-standard methodologies are more appropriate. There is a big risk that many people who do not have real expertise in our field will come to equate usability with having produced the correct set of voluminous reports, in the same way that certain military documentation standards can result in a lower quality of documentation. We can only hope that the specter of standards will haunt Europe and the rest of the world in a spirit similar to Glasnost, with sufficient openness to enable user interface professionals to fit their solutions to the unique characteristics of each problem.

Demos are not Enough (Larry Tesler)

Often a program will look great when it is shown in a demo by its creator or some other wizard user. But Larry Tesler from Apple warned against trusting such great demos as indicators of the true usability of an interface. We need to know whether it will really be that easy for other people to use the system. As an example of this, Tesler showed some video tapes from user testing of a prerelease version of HyperCard. One of the problems was that newly created buttons had their visibility attribute set to invisible by default. Expert users zipped right through button creation since they already knew what was going on. Anne Nicol tested some novice users, however, and they were very puzzled when their "create button" commands apparently had no result. They had indeed created a button, but it was invisible...

Another usability problem was the concept of the "marked card" which was used as a hidden state for certain hypertext related operations. An expert may have been able to grasp this concept easily but for most novices the only card they could really relate to was the one on the screen. Because of these findings, the released version of HyperCard makes nice visible buttons which look like standard Macintosh interface objects so that users can easily recognize them as being buttons and there is no "marked card."

Since HyperCard is one of the most talked about user interface tools in recent years, it was exciting to see videos of the original design and hear a discussion of some of the user interface issues leading to the redesign for the current version. Tesler dared take the "Nielsen challenge" I put forward in my CHI'88 report, where I asked CHI presenters to show us their user interface designs in both "before" and "after" versions and let us judge whether their usability work had improved the design. Actually, Tesler did not show the "after" version of HyperCard, but it must have been very familiar to most of the audience. In any case, my verdict is that of these two designs, I will pick the released version for my user interface projects.

Tesler is the Vice President of advanced technology at Apple, and it is not often that one hears a corporate VP from a major computer company discuss internal problems and mistakes. However, according to Clayton Lewis, who introduced him as keynote speaker, Tesler was the person responsible for introducing user testing at both Xerox and Apple. And for people who are used to user testing, it is very natural that such a test will reveal some difficulties in an initial design. This will always happen, and that is why you do the test. So an interface snag in an original design is not a mistake as long as it is found and corrected.

Tesler also discussed other aspects of the user interface development process and said that it is not the job of the user interface to access the program but it is the job of the program to implement the user interface.

My CEO Cares More About Interfaces Than Your CEO

The most popular event at this CHI, judging by the size of the audience, was the panel entitled "My user interface is the best because...." I was lucky enough to at least get to sit on the floor, but many people couldn't find room at all. In this panel, people representing the Macintosh, NeXT, Open Look, and Motif interfaces presented the interfaces and boasted of their strengths.

Actually, the panel might almost have been entitled "My CEO cares more than yours" since all the panelists took great care to explain how deeply committed the CEOs of their various companies were to having the best user interface on the market. In this category, I think the prize must go to Bill Parkhurst from NeXT who reported that his CEO (Steve Jobs) had come into his office Saturday night at midnight to discuss user interface design issues.

Parkhurst also explained that the NeXT machine's interface was designed to take advantage of capabilities which had not been present in earlier, smaller personal computers. The NeXT design had to deal with multiple processes and a large disk file system with many files. Therefore they integrated inter-process communication with the basic interface: for example, the NeXT machine has the ability to link hypertextually from the WriteNow word processor to its online Webster's Dictionary to look up the definition of a word. They also had a large screen, so they put their menus in windows instead of at the top of the screen. Because of the fast CPU in the NeXT machine, they could scroll or move entire windows and not just their outlines. They also decided to go with a multi-color interface from the start to avoid what Parkhurst called the "one-bit mindset" in interface design. Actually, they only have two bits per pixel in their current screen, but that is still a lot more than one. They took advantage of the grayscales offered by their screen to provide a richer look and a partly three-dimensional appearance of objects through a consistent lighting model.

I was not really sure what Parkhurst meant by "lighting model," but luckily I had breakfast with Bill Verplank the next morning and got the term explained. The idea is to pretend that the interface actually is three-dimensional and that the various elements in the dialog boxes etc. have real depth. You furthermore pretend that this three-dimensional object is flooded with light from a single source placed in one specific corner of the screen and note how the shadows and highlights would fall. And then the graphical design of the actual (flat) interface objects is colored to reflect these shadows and highlights so that they appear almost three-dimensional to the eye.
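
As a minimal sketch of the technique (my own reconstruction of the general idea, not NeXT's actual code), consider a raised button on a display with two bits per pixel and a light source assumed to sit in the top-left corner of the screen:

    # The four gray levels available with two bits per pixel.
    WHITE, LIGHT_GRAY, DARK_GRAY, BLACK = 3, 2, 1, 0

    def bevel_colors(raised=True):
        """Return (top/left edge, face, bottom/right edge) gray levels
        for an object lit from the top-left corner of the screen."""
        if raised:  # the object appears to stick out toward the user
            return WHITE, LIGHT_GRAY, DARK_GRAY  # lit edges face the light
        else:       # the object appears pressed into the screen
            return DARK_GRAY, LIGHT_GRAY, WHITE  # highlights and shadows swap

    print(bevel_colors(raised=True))   # a button at rest: (3, 2, 1)
    print(bevel_colors(raised=False))  # the same button while pressed: (1, 2, 3)

Because every object on the screen pretends to be lit from the same corner, the shading reads as depth rather than as mere decoration.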

NeXT also produced powerful development tools to help third party developers and to ensure that their products are consistent with the NeXT interface. During a post-conference visit to Texas A&M University, I took the opportunity to try out a NeXT machine under quieter circumstances than a conference demo and immediately found a consistency problem in the user interface: When users quit an application, they are sometimes asked "Are you sure you want to Quit?" and sometimes "Do you really want to Quit?" Of course this inconsistency does not constitute a usability catastrophe, but it does indicate that fancy graphics will not ensure conformance with all the interface guidelines. A worse problem is that these questions do not point out the real issue, which is whether people want to save their files before their work is irrevocably lost upon quitting. And that can lead to a catastrophe for novice users who do not understand that they are editing copies of the permanent disk files.

Tony Hoeber from Sun presented the Open Look design, which he claimed was legally unencumbered. He kept stressing that they built on the foundation designed by Xerox at PARC and that they had licensed the rights from Xerox. Actually, Sun and AT&T were willing to warrant the use of Open Look against any copyright challenges. This may be a significant advantage in the marketplace, but it is certainly language which we have not heard in the user interface field before. If companies would also start giving warranties that users are able to learn their interfaces in a given amount of time, then maybe we would be getting somewhere. A member of the audience said that he was from a company which spent millions of dollars on buying software each year and that he would have liked to see real data proving that a specific interface was good. He especially wanted numeric ratings of effectiveness, somewhat like the MPG ratings for cars.

Anyway, Open Look did want to be close to "accepted standards" (the M-word), so they had used e.g. scroll bars to move in a window, but with changes such as having the direction arrow buttons on the elevator box instead of at the ends of the scroll bar, to reduce the need for mouse movement.

Open Look was designed for environments with quite different I/O equipment, so they had to deal with mice with 1, 2, or 3 buttons as well as with varying displays, etc. Unfortunately, Hoeber did not discuss the design problems raised by these hardware differences in any great depth. He did say that they had produced a 450-page user interface style guide (250 pages of design rationale and 200 pages with examples of good and bad designs). They had a coherent use of color to get foreground/background effects, where the foremost item on the screen is the user's current selection and is given the most highly saturated color. In this way, colors are used to achieve a three-dimensional look to the interface.

Ira Goldstein from Hewlett-Packard described the Open Software Foundation's Motif interface. This design was mostly compatible with the IBM Presentation Manager, and in general Goldstein claimed that none of the currently popular user interfaces differ by more than 20%. In the future, customers will require that products from different suppliers be interoperable. The most obvious example today is the need for networking capabilities, but user interface consistency will also be a demand.

Motif had been developed in just one year, and Goldstein claimed that it was the best interface because it was the only one to satisfy the usability goals both for the development process and for the actual end product.

For this panel, the moderator might have selected only one of the contesting standard Unix interfaces since Open Look and Motif seemed fairly similar to me. Of course there are lots of differences but the format of this panel did not offer much opportunity to analyze specific interaction techniques in detail. From a practical perspective, of course, a lot of the people in the audience were probably more interested in comparing Open Look and Motif than they were in hearing about NeXT and the Macintosh so it might have been the right decision anyway to include both. In any case, I sure would not have wanted the responsibility of picking one of these two contenders for the Unix throne before the market has spoken.

In a rebuttal to the other presentations, Tom Erickson from Apple said that the colored 3-D look may have an artsy effect but that colored text is hard to read. So he preferred the simple black-and-white model for the sake of readability. Erickson also discussed hardware usability which had not been mentioned by the other speakers and said that the Macintosh was getting to be very easy to set up. For example, its cables did not require the use of any special tools. In an "out of the box" study, they had found that some users needed to spend 2-3 hours from getting a Mac in a box until they had a running system but these problems were being addressed.

Erickson also discussed the Macintosh toolbox and other tools for developers, which he said were necessary but not sufficient to guarantee good interfaces. The key problem was that an interface is not produced by a single source but by a collection of many companies, so Apple had chosen to also provide cultural support for their developers by having user interface evangelists and having their CEO, John Sculley, "talk more about user interfaces than four other CEOs combined."

As a result, the Macintosh had a community of more than 2500 developers who were all interested in user interfaces, and Erickson's claim that the Mac not only was the best interface but would also remain the best interface rested on exactly this third party development community and its high degree of interface orientation.

Towards the end of the panel session, one person from the audience dared to attack the trend towards graphical interfaces and defend the traditional command line interface to Unix. Unfortunately this person was booed by the rest of the audience, because there is so much trouble with standard Unix that having to learn it is regarded by many user interface specialists as almost a fate worse than DOS. Of course it is impolite to boo people, and there are two additional reasons why this lone CLIer should have been treated with more respect. One is that the ultimate usability parameter is whether people prefer the interface, and this person obviously did prefer the line oriented interface (as do many other hard-liners who were probably not at this conference at all). The second, more conceptually important reason is that text interfaces can do a lot of things which graphical interfaces currently have difficulties doing, such as various history mechanisms (including multi-level undo-redo, macro editing, etc.) and aliasing.

Interfaces Mean Business

This year's conference theme certainly was that Interfaces Mean Business. IMB is not just a permutation of IBM, even though IBM has recently been trying to move their PS/2 boxes with the argument that the Presentation Manager is more usable than DOS. At this conference, we heard a corporate vice president of a major computer company who was not a stuffed shirt but could actually give a keynote talk addressing real interface issues. We heard representatives from several companies brag about the involvement of their CEOs in usability. And we heard about user interface copyright in commercial software.

The conclusion from all this is that user interface design is rapidly getting to be one of the most important aspects of the computer business. As a matter of fact, it may soon be the single most important aspect, when almost all computers can do almost everything you want them to do anyway. Then the real difference between computers will be their user interfaces. Because of this interest in the user interface field, the conference received coverage in several newspapers, from the local Austin paper, which remarked that the CHI attendees had been too intellectual to stir up the local nightlife very much, to the New York Times, which covered the legal debate and described the Apple Info Kiosk and several of the video tapes.

By the way, it was announced at the SIGCHI business meeting that we are now the fastest growing of all the special interest groups of the ACM both when measured in absolute numbers of new members and when measured in relative growth. This is yet another indication of the growing importance of the user interface field.

User Interfaces as a Profession

Before we start to relax and think that the world has been saved, we need to deal with the large number of people who still do not involve user interface professionals in user interface design. It is very common for people who are not aware of the user interface literature or current events in the field to view themselves as qualified to comment on interface issues. Of course we should be glad that non-professionals are interested in our field but we need to change the attitude that everybody is equally qualified since "we are all users."

As an example of this category of problem, Jonathan Grudin presented the results of a survey of user interface design in large corporations. Over half of the user interface professionals said that their manager's manager did not understand their work, thus pointing out one of the reasons for the lack of recognition many of us face. Grudin asked software engineers to rate their knowledge of various fields and found that 49% rated their knowledge of human factors as good. This should be compared with the much smaller proportion of software engineers who think that they have good knowledge about industrial design (8%) or how to conduct training (18%). The only profession doing "worse" than human factors in the study was technical writing where 64% of software engineers believed that they had good knowledge.

On the other hand, we should note that 93% of software engineers said that they did need outside help with human factors issues but that only 21% said that the experts were available when needed. So it may be that the need is there but is not fulfilled by the current small budgets for user interface professionals in most organizations. And as a result of this, programmers are forced to pick up bits and pieces of user interface design knowledge.

Only 57% of user interface professionals got involved in projects before the start of implementation while 100% of course said that they would like to get involved at such an early stage of the system life cycle. The only group doing worse were the technical writers of which only 28% got involved before the start of implementation.

Guerrilla CHI

My suggested solution to the lack of use of user interface methods is to use a "guerrilla CHI" strategy where we user interface professionals penetrate the various development groups and "live with the peasants." We can get other software professionals to use discount usability methods which are cheap enough that even the smallest company can afford them.

I promoted this philosophy at the closing plenary where I was one of the speakers. Unfortunately my report on events at this session is not nearly as extensive as my report on other events at the conference because I did not concentrate on making notes. One memorable quote which I did write down was Jan Walker saying that the 80s had been the decade of the novices in CHI research but that the 90s would be the decade of the expert user.

At the plenary panel, one person from the audience commented that we had abandoned the majority of the world's computer users in our fascination with advanced graphical interfaces. He had not seen any studies of the text-only terminals which are still used by millions of people. Well, the Canadian "conversational hypertext" discussed below actually was a text-only system using the line-oriented interface which is an even older interaction paradigm than the full-screen systems used on most text terminals. But in general, this voice of the people was right: We do look more at neat systems than at traditional systems.

Jan Walker noted that this was not really a problem since everybody would get modern, graphical computer systems within a very short time. I am not totally convinced of this, as I know several companies which have major trouble with technological inertia because of large investments in obsolete terminals which people are not about to just throw away. So unfortunately, systems will still have to be usable through text-only interfaces for many years to come. We can hope for cross-paradigm interfaces which can be used both on text-only terminals and on graphical workstations. The point I made during the panel was that most of our methods would also apply to the development of textual interfaces, so that it would indeed be possible to make usable interfaces even on old terminals. Of course, there are also many things which cannot be done, and an important point which I did not make then is that we need to take care that the lowest-common-denominator effect does not drag the development of new systems so far into the mud that users with modern equipment must settle for the same interface offered to people with old equipment.

Two Dimensions are Not Enough

A second conference theme in addition to "interfaces mean business" could be that two dimensions are not enough for modern user interfaces. As mentioned under the discussion of the "my interface is best" panel, the new interface designs coming out currently use pseudo-three-dimensional graphical design and lighting models. This is only a small Δ and does not really change the interaction techniques used in the interfaces. But we are also seeing a real trend towards a bigger Δ in the form of actual three-dimensional interfaces. This larger change could herald a new generation of interaction paradigms.

Since the conference was in Texas, it was natural to have one of the invited speakers come from NASA. Actually, Michael W. McGreevy came from the NASA Ames Research Center in California, but he still gave a very expansive speech on the topic of personal simulators and planetary exploration. The basic idea of the personal simulator is to create an artificial reality in which the user can be immersed in the data rather than having the user manipulate the data from the outside. This interaction technique is especially useful to make sense of the huge amounts of data gathered from space probes.

From a user interface perspective, these virtual realities should offer the user a utilitarian realism which McGreevy defined as having the environment seem real with respect to the exploration, manipulation and general interaction which the user wants to perform.

McGreevy gave a short history of artificial realities, from the Chinese emperor's tomb containing an artificial army of terra-cotta soldiers to the flight simulators which trained thousands of pilots during the Second World War. It is interesting to consider that flight simulators were regarded as toys before WWII and were mostly sold to amusement parks, whereas now they are seen as indispensable. Maybe the same change in attitude will take place towards some of the more fanciful current interface ideas, such as the use of iconic sounds or three-dimensional video.

In 1984, McGreevy decided to get into personal simulators through the use of head-mounted displays. The only problem was that available head-mounted displays cost a million dollars so instead NASA built their own prototype system by using LCDs from pocket TVs which were of course much cheaper because they were consumer electronics. These 1984 displays only had a resolution of 100 x 100 pixels for each eye but their current system has 320 x 240 and they aim to double that in the near future. This means that it will soon be possible to have users wear a head-mounted display which gives the same resolution as a workstation but of course does so three-dimensionally because each eye sees its own display. These prototype systems are intended for use on Earth but they are also developing a new design for use on board the space station which will be smaller and transparent to allow the user to see both the physical reality and the artificial reality at the same time.

One of the initial applications of the NASA head-mounted display was a walk through the space of an air traffic control situation. This is an obvious three-dimensional situation. Other applications include telerobotics and planetary exploration such as rover path traversal planning but they also have plans to test a three-dimensional multimedia dataspace like the science fictional cyberspace where the user can be surrounded by pop-out data items.

Another system with some three-dimensional texture is Myron Krueger's Videoplace, which had also been displayed at several earlier CHIs. In Videoplace, the user's body serves as the "input device." Actually, the input is only two-dimensional, as the user is filmed by a single camera, but there is still some three-dimensional feel to being able to walk, dance, and gesture in front of the computer. Videoplace displays a silhouette of the user on the computer screen and lets that silhouette interact with various computer-generated creatures in a playful way.

What was new this year was that a Videoplace system with full-body input had been connected to a Videodesk system at the other end of the hall, where another user could input hand and arm movements to the system. The computer displays at both ends of the hall showed the same image, consisting of the body silhouette from the Videoplace and the hand-arm silhouette from the Videodesk superimposed on each other. Because of the differences in scale, the arm of one user was displayed as much larger than the body of the other user, thus leading to some fun (and potentially dominating) interactions between the two users. Because of this, the Videoplace/Videodesk setup this year could be seen as a computer-supported cooperative work (CSCW) event instead of a user interface event. Or at least as "computer-supported cooperative play," as one participant observed. It is not all that important exactly how we view this because "the sausage tastes the same from both ends," to cite a proverb from the Danish cultural heritage. Proverbs can mean anything you want, and in this case it means that it is rewarding to view Videoplace as an innovation in the user interface field (which is why it has a place at a CHI conference) at the same time as it is rewarding to consider the opportunities for using computers to let people play together. And the "sausage" (=Videoplace) is one of the important systems to taste these years.

Drama and Personality in Interaction Design

Krueger was also on the panel on drama and personality in interaction design where he gave the history behind the development of the Videoplace interface. The panel moderator was Joy Mountford from Apple who said that one place to look for inspiration for interface design was the performing arts and especially the theater. She felt that animated graphics had not yet had the impact on user interfaces that they should.

As an example of the use of drama, the interface consultant Brenda Laurel showed a video of a prototype computer system for the teaching of history. The computer would show the user selected video clips in which Laurel played various roles, such as a pioneer woman from 1849. Laurel also discussed the use of agents in more general types of user interfaces. The user can delegate all or part of a task to a computer agent which will then perform it. Candidates for delegation include tasks which are too tedious or time consuming for users to want to bother with them, tasks where the computer has a unique expertise (e.g. how to route packets through a network), or tasks where the computer is asked for its judgement (such as a travel planner). This does not mean that agents need to be perfect AIs. Laurel actually said that they would have to pass an "anti-Turing test" so that users would understand that the agents were not real people. They should be less complex to understand and control but more interesting than real people.

Laurel argued that agents should be seen as dramatic characters with certain traits which users could use to understand and predict the behavior of the agent. She felt that agents should have cognitive hooks for understanding and emotional hooks for engaging the user. This view might be related to a presentation by Clayton Lewis in another session at the conference (discussed further below). Lewis had built a model of how users explain causality in interaction events on the basis of eight heuristic rules. One example of such a rule is the "no baggage" rule saying that every element of an interaction controls something (i.e. the designer has not introduced the element in the interface for no reason). Laurel wants users to use another kind of heuristics to understand interactions and it seems to be an open question whether users could then switch back and forth between various ways of understanding different aspects of an interface or whether the entire dialogue model would have to be based on the agent principle.

In another presentation at the Drama panel, Laurie Vertelney from Apple discussed the use of videotaped scenarios in preliminary testing of user interface ideas. Vertelney argued that it is best to be able to tell a good story about what your interface will do before you implement it. These "good stories" can be constructed as scenarios of some specific things users might do with the system and how they would do it. The advantage of the scenarios is that they can depict systems which have not yet been built and which are going to use futuristic technology. They can then be used to engage test users in the designs much better than any written specification.

Vertelney used videotaped scenarios at Apple, one of which was called "A Day in the Life" and basically followed a character named Joe from breakfast to bedtime to see how future technology would be embedded in his life. This kind of videotape constitutes a concrete visualization of your design ideas and can be shown to people to get feedback much more cheaply than actually building the interface. The disadvantage is that users do not get to experience the interactive nature of the system from watching a videotape and that the feedback is of a qualitative nature which has to be interpreted.

An additional disadvantage of producing videotapes of guesses about future technology is that the scenarios may be interpreted as being real. Actually, Apple had received an order from a customer for delivery of the Knowledge Navigator (another far-out design which exists only on an Apple video). Furthermore, Business Week had said that one reason for a drop in the value of Apple stock had been that the company had "lost credibility" by showing the Knowledge Navigator video of "unrealistic technology." Personally I would prefer to invest in a company that looks farther ahead than just to the next two quarterly reports, but that may not be the view among certain stock market analysts.

Wang Freestyle Desktop

When I read that Wang was demoing a product called Freestyle, I have to admit that I was somewhat underwhelmed. I mean, who needs to see yet another drawing program? Word of mouth during the conference had it, however, that Freestyle was a hot design worth seeing, so during a boring lecture I decided to give it a try. It turned out that Wang Freestyle did indeed represent interesting interface innovations and that it was not related to the popular graphics program Freehand, with which I had probably confused it subconsciously.

The first remarkable thing about Freestyle is that it uses a stylus on a tablet instead of the ubiquitous mouse. The stylus is of course not so interesting in itself, but it is an indication of the nature of the Freestyle product. I would characterize Freestyle as a desktop metaphor to the third degree. It has a reasonably big monochrome graphics display which shows images of the various objects in the system. A file is simply a full-screen sized drawing area which is shown either full size (taking over the screen) or shrunk to a miniature on the "desktop" screen. These miniatures are somewhat larger than traditional icons and show a reduced image of the drawing in the file. In an unusual deviation from most computer systems, the files are not named, just as you do not give names to the pieces of paper you have on your real desk. To find a file, the user must be able to recognize its miniature and its location on the desktop screen. The one exception to the namelessness of the Freestyle universe is the objects representing recipients of electronic mail. Here named generic icons are used, even though I might have considered representing a person by a scanned photo. The Freestyle demo was given by several knowledgeable Wang people, including Ellen Francik, who had conducted several prerelease usability tests. She explained that one reason for using explicit names for email recipients was that users would worry whether they were mailing to the correct person, since it is sometimes a disaster to send certain letters to the wrong person.

To collect several pages together, you place their miniatures over each other and apply a stapler (which is also a graphical object on the screen). Unfortunately, a stapled set of pages is shown just by the miniature of the one page which happens to be topmost in the pile. Miniatures are much higher in interface richness than normal icons, but to achieve the ultimate fidelity to the reference system of the real desktop, it would have been nice if the miniature of a pile could have indicated the thickness of the pile in some way.

I didn't like having to move pieces of paper on the desktop screen by pressing down on them with the stylus. Essentially the same action feels much more natural using a mouse than using a stylus. This is probably because of the different connotations of the two devices: Moving a stylus is like moving a pen for making marks, while moving the small box called a mouse involves the same muscles as moving physical objects in general: the mouse is higher in the articulatory directness of the mapping between the physical movement and the lexical token specified (move). Exactly the same phenomenon of course works in favor of the stylus when it comes to sketching or even writing in the Freestyle graphics editor. I wrote a letter and signed it with no trouble using the stylus, but try to produce a recognizable signature using a mouse! Another advantage of the stylus is that it can be turned over to have its top used as an eraser. In this way, the user is freed from having special commands to select between drawing mode and erasing mode, as in almost all drawing programs. This method of using physical attributes of input devices to determine their semantic meaning is similar to the physical eraser used in the University of Tokyo Tron project, which I discussed in my trip report from the 1988 Fifth Generation conference.
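
A minimal sketch of this principle (my own illustration, not Wang's code; the event fields are assumptions): the physical end of the stylus touching the tablet selects the semantic mode, so no explicit draw/erase command is needed:

    class Canvas:
        """A toy drawing surface: a set of marked (x, y) positions."""
        def __init__(self):
            self.marks = set()
        def draw(self, x, y):
            self.marks.add((x, y))
        def erase(self, x, y):
            self.marks.discard((x, y))

    def handle_stylus_event(canvas, x, y, eraser_end):
        """eraser_end is True when the stylus has been flipped over so
        that its top (eraser) end is the one touching the tablet."""
        if eraser_end:
            canvas.erase(x, y)  # physical flip = semantic erase
        else:
            canvas.draw(x, y)   # normal tip = draw or write

    canvas = Canvas()
    handle_stylus_event(canvas, 10, 10, eraser_end=False)  # make a mark
    handle_stylus_event(canvas, 10, 10, eraser_end=True)   # rub it out
    print(canvas.marks)  # set()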

A final interesting feature of Freestyle is the ability to integrate graphics, spoken comments, and animated gestures in one email message. It was possible to take, say, a scanned road map, use the stylus to draw in directions to go somewhere, and record a running commentary about which landmarks to watch for. The resulting file could be sent by electronic mail to another Freestyle user, who could play it back and watch the directions being redrawn on the background road map, synchronized with the spoken comments. In the program for SIGGRAPH'89 this kind of interface is called a conversational document, which is not really accurate in my opinion, since the receiving user cannot alter or question the presentation designed by the sending user. For true conversational documents we must await sufficient advances in AI to allow the inclusion of a model of the sending user's thinking about the document's topic which is detailed enough that receiving users can query that agent in the same way they could discuss the issues if the sender had traveled in person.

To Hyper or not to Hyper

Since I am currently doing a lot of research and consulting on various hypertext issues, I dutifully attended the session entitled "Hypermedia." It actually turned out that the papers in this session were not all that much about hypertext, while several hypertext papers were presented in other sessions.

First, Thomas Whalen from the Communications Research Centre in Canada presented a so-called "conversational hypertext" system. The word "conversational" referred to the use of the old question-answer dialogue style instead of the point-and-click style more commonly used in hypertext. The basic principle of the system was to present the user with a given piece of text (corresponding to a node in the hypertext network) and then await some natural language input from the user. The system interprets the user's input according to the set of legal hypertext links out of the current node and chooses the next node as that giving the best match with the user's input.
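
A minimal sketch of how such matching might work (my own guess at the mechanism, not Whalen's actual code; the node names and keyword lists are invented):

    def word_overlap(query, keywords):
        """Score a link by how many of its keywords occur in the query."""
        words = set(query.lower().split())
        return len(words & set(keywords))

    def next_node(current, query, links):
        """links lists only the targets legal from the current node,
        as {target_node: [keywords]}; stay put if nothing matches."""
        scores = {target: word_overlap(query, kw) for target, kw in links.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else current

    links_from_overview = {
        "transmission": ["spread", "transmitted", "catch", "get"],
        "symptoms": ["symptoms", "signs", "feel", "sick"],
    }
    print(next_node("overview", "How do you catch it?", links_from_overview))  # transmission

Because only the handful of links out of the current node compete for a match, even crude keyword overlap can produce a passable natural language dialogue.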

The two good points of this interface design are that it provides a means of getting a fair natural language interface with only a small investment in AI (because of the extreme context-sensitivity of the interpretation) and that it provides a method for accessing a hypertext through a line-oriented interface. Most people of course would not want to access hypertext through a line-oriented text-only interface, but for Whalen's application it did make some sense. They had developed a hypertext information base about AIDS and wanted to make it available for dial-in use by home computer owners having a modem.

The interface was developed by iterative design, which had raised the proportion of user queries successfully answered from 20% to 70%. In the beginning, when the success rate was low, they looked at unsatisfied queries and modified the information base to be able to handle them. According to Whalen, an added advantage of their approach to hypertext and natural language is that it would be very easy to scale up to larger information bases: It would just require getting a bigger computer.

A question from the audience was whether Whalen had considered translating the information base to French or other languages. He answered that they had indeed considered doing this since they worked in Canada, but that it was not just a question of translating the text. The information base itself tended to be culturally specific, and people would ask different questions. For example, a visitor to their lab from Japan had entered questions such as "Where was the origin of AIDS" and "This was a nice demonstration, thank you very much," neither of which their natural language recognition had been able to handle based on its iterative development from questions asked by Canadians.

In a slightly more ambitious project, Bob Glushko from Search Technology had at least used a limited windowing paradigm for a hypertext system developed to run on ordinary IBM PC ATs. The focus of Glushko's talk was not so much on the hypertext aspects of his system as on the general problem of transforming printed text to an online form. He had seen several good examples of hypertext systems but no clear-cut methodology for transforming existing text, so they set out to develop what he called hypertext engineering as they were actually doing the work. The project was converting the Engineering Data Compendium from a four-volume printed text with 3000 pages of text and 2000 illustrations to a CD-ROM version which could be displayed on an IBM PC screen. Glushko stressed that this was big enough not to be a toy "Hypertext for Hobbyists" type project.

One of the main problems was not so much related to hypertext as such but to the limitations of the PC screen. The illustrations could only be displayed in a coarse resolution and it was impossible to keep the page layouts from the original book. The page layout problem was especially bad because the printed book had been carefully designed with the two-page spread as the basic unit to have related text, graphics, and tables visible at the same time. In the electronic version this was not possible, but according to Glushko, the use of alternative hypertext access structures still made it possible for them to do a decent job of meeting the users' needs.

During this project, they had started with a document analysis to understand the logical and physical structure of the existing book. They had studied the existing access structures, such as the index and table of contents, and found that this book already had a rich internal access structure which they could use in their hypertext. Glushko felt that it would be impossible to automatically construct an index of the same high quality as that provided by the original designers of the book, so they based their design on that.

In addition to their own analysis of the printed book, Glushko and his colleagues also wanted to learn about the rationale of the book design from the original designer and editor. In general, Glushko was very big on wanting to understand how users would work with the document since he wanted to be user-driven and document-driven rather than technology-driven as other hypertext projects have been. So he had based the construction of the hypertext links on a task analysis of what users would be likely to want to do with the document. His basic philosophy was that one should only add a hypertext link if one could find a reason from the task analysis to put it in.

Glushko did not want to support blind navigation in hyperspace but only the user's need for information in a specified context. This sounds extremely convincing, especially considering the great importance of task circumstances for users' performance in user interface studies. On the other hand, the Bellcore designers of SuperBook (discussed below) specifically promote its ability to support information needs which had not been taken into account by the author. The key question here must be whether we can actually conduct a sufficiently thorough task analysis to be sure to capture everything that a reasonable user might want to do with the document. But how do we know that "unreasonable" users are not the ones who could really do some great and innovative work if only we had supported them?

The third paper in the hypertext session was The Tourist Artificial Reality, presented by Kim Fairchild and Greg Meredith from MCC. They started their co-presentation with a very nice survey of other artificial realities and in passing mentioned Randy Smith's Artificial Reality Kit as the best current system. They also gave a taxonomy of artificial realities which seemed to be well thought out and useful. I say "seemed" because they showed their analysis on a series of slides packed with text and only showed each slide for a few seconds, so that it was really impossible to gain an understanding of the analysis. Also, this analysis is not in their paper in the proceedings, but rumor has it that it may appear in one of the journals Real Soon Now.

Fairchild and Meredith had implemented a system based on a tourist metaphor and it was probably because of this travel aspect that this paper had been placed in the hypertext session. They showed a video of the system which was unfortunately somewhat overdone to get laughs and did not really convince me of the utility of their approach. But of course, plain fun in computer systems should not be disparaged. In any case, they said that the most fun part of artificial realities came when you had several people interacting with each other in the reality such as in the Videoplace/Videodesk setup discussed above.

Frank Halasz and George Furnas gave a joint presentation of the big picture as discussants for the session. They defined two principal metaphors for information access: One method is making the information space explicit to the user and as visible as possible and then using travel or navigation as the metaphor for moving through the space. The alternative method is the information retrieval metaphor of using searches as the access mechanism. The user is not presented with any explicit conceptual model of the information space or its structure but works in an "ask and you shall receive" mode, relying on an agent in the computer to fish relevant nuggets of information out of a black bag. The tourist artificial reality project tries to push the space metaphor, while the conversational hypertext project tries to split the two metaphors by having users navigate but having them do so through queries, hiding the structure of the information space. Glushko's hypertext engineering, finally, was a mixed approach where the access metaphor was picked according to the task at hand.

Another system which can be classified according to this metaphor dimension is SuperBook from Bellcore, which was presented by Dennis Egan in a session I did not attend. But since I visited the SuperBook group at Bellcore when I arrived in New York before going to Austin, I can comment on it anyway. In SuperBook, the structure of the information space is made explicit to the user through the use of a fisheye view of the book's table of contents. And at the same time, the user can get directly at nodes buried deep within this structure by performing full text searches. The result of such a search is then shown integrated with the overview diagram by annotating it with the number of hits in each node or supernode (holophrasted set of nodes in the fisheye view). Therefore we could say that SuperBook is a way of marrying the two metaphors in a single design. A related approach I am currently trying is to construct hypertext links between nodes automatically on the fly, based on similarity ratings with weights determined by user queries and relevance feedback.
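
A minimal sketch of the hit-annotated overview as I understand it (my own illustration, not Bellcore's code; the table of contents and hit counts are invented): each entry in the fisheye view shows its own search hits plus those rolled up from any collapsed descendants:

    toc = {  # each node's children in the table of contents
        "Book": ["Ch1", "Ch2"],
        "Ch1": ["1.1", "1.2"],
        "Ch2": [], "1.1": [], "1.2": [],
    }
    hits = {"1.1": 3, "1.2": 1, "Ch2": 2}  # hits from a full text search

    def total_hits(node):
        """A node's own hits plus those of all its descendants."""
        return hits.get(node, 0) + sum(total_hits(c) for c in toc[node])

    def show(node, expanded, depth=0):
        """Print the fisheye view; collapsed nodes roll their hits up."""
        print("  " * depth + f"{node} [{total_hits(node)} hits]")
        if node in expanded:
            for child in toc[node]:
                show(child, expanded, depth + 1)

    show("Book", expanded={"Book", "Ch1"})  # Ch2 stays closed, showing 2 hits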

Another good hypertext paper was Gerhard Fischer et al.'s proposal for a design integrating an AI system which advises on design issues with a hypertext system containing the rationale for the advice given by the AI system. The hypertext system uses an alternative implementation of the IBIS Issue-Based Information System method, made famous by the MCC gIBIS system, to structure the arguments for and against the various design options, and the AI system can then dump the user at the location in this hypertext which corresponds to the user's current unresolved design problem. Many people talk about the use of AI in hypertext, but this is one of the few examples I have seen where I could easily see that the AI and hypertext aspects would complement each other, resulting in increased usability. The domain of the system was somewhat peculiar (design of kitchens), but that seems to be true of a lot of AI systems. AI components with such glamorous names as sink-critic and refrigerator-critic would find out if the user proposed placing a kitchen element in an inconvenient location relative to other elements. These rules could give the user explanations such as "sink should be NEXT-TO a dishwasher" in the traditional AI style. But assuming that the user either did not understand that explanation or wanted to take issue with it, the system would transfer the user to the hypertext system for further explanation and perusal of argument structures. Actually it could not really do so at first, because the AI subsystem ran on a Symbolics while the hypertext subsystem originally ran on a Macintosh, but that has been fixed by a new version which moved everything to the Symbolics.
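
A minimal sketch of the critic idea (my own reconstruction, not Fischer et al.'s system; the rules, layout, and hypertext node names are invented): a rule fires when the layout violates an adjacency constraint, and its explanation carries a pointer into the hypertext of design rationale:

    ADJACENCY_RULES = [
        # (element, must be next to, hypertext node holding the rationale)
        ("sink", "dishwasher", "issue: sink-dishwasher adjacency"),
        ("stove", "counter", "issue: stove landing space"),
    ]

    def next_to(layout, a, b):
        """layout maps element -> (column, row); adjacent = one cell apart."""
        (ax, ay), (bx, by) = layout[a], layout[b]
        return abs(ax - bx) + abs(ay - by) == 1

    def critique(layout):
        """Yield one complaint per violated rule, with its hypertext pointer."""
        for elem, neighbor, node in ADJACENCY_RULES:
            if elem in layout and neighbor in layout and not next_to(layout, elem, neighbor):
                yield f"{elem} should be NEXT-TO a {neighbor} (see {node})"

    layout = {"sink": (0, 0), "stove": (1, 0), "counter": (2, 0), "dishwasher": (3, 0)}
    print(list(critique(layout)))  # the sink-critic fires; the stove-critic is satisfied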

Information Kiosks

The Apple Human Interface Group had set up a bunch of Macintoshes at the conference for use as information kiosks. They contained information about the conference program with semi-animated sound-and-graphics presentations from several of the conference speakers as well as a slightly confused hypertext guide to the city of Austin. The most popular part of the info kiosk, however, seemed to be the CHI Yearbook with digitized photos and addresses of many of the conference attendees. Apple had also organized a number of photo booths with digitizing cameras and Macintoshes where people could fill in their name and address as well as a question they would like to ask the other conference attendees. The information gathered from these photo booths was transferred to the Info Kiosk Macs on a regular basis by the primitive but effective means of moving an external hard disk around between the Macs and copying the updated information over.

My question in the Info Kiosk was "Do you own software which you don't use? And why don't you use it?" In my own case, I have observed that I have bought a large number of software packages over the years which simply sit unused on the shelf. For example, I own four or five other word processors but still use Microsoft Word exclusively.

Reading other people's questions was fun and one reason why many attendees spent more time browsing the Info Kiosks this year than had been the case at earlier CHI public computer displays. Another reason was that the digitized photos made it possible to find out what other people looked like, so that it was easier to meet them at the conference. And there was also some utility to the information about which hotel people stayed at. All these advantages followed directly from the dynamic nature of the combined photo booth and Info Kiosk approach, but of course it was also a lot of work to design such an extensive system and to keep it continuously updated.

In spite of the nice work Apple had done on the Information Kiosks, the part of the system with information about the conference program itself lost out to the printed participants' program. Claudia Raun had designed the best conference program ever. It was physically easy to use because of its size and spiral binding, which made it possible to keep it open at the listing of the current session. All papers, panels, demos, and special interest group meetings taking place at any given time were listed on the same page of the program, thus making spur-of-the-moment decisions easy for conference participants. It seems that somebody actually studied the users and their task before doing this design. It is currently impossible for any hypertext system to compete with such a well-designed printed information package, which can be accessed in a crowded lecture room or during a coffee break while one is in line to get the last strawberry.

Artifacts as Theories

Very detailed interaction theories do not seem to have much impact on real user interface design. Jack Carroll from IBM Yorktown Heights gave a talk on the theory problem in HCI and noted that some people respond to this problem by claiming that no such theory is possible. It could also be, however, that our problem is that we have been locked into positivism: a search for a normative philosophy of science to tell us how science should be done and how designs should look, and thereby to enable us to deduce designs from a scientifically tight theory. But experience has shown that that kind of theory only leads us to toy-scale designs.

An alternative would be a descriptive philosophy of science which would study user tasks and design artifacts. We know that the artifact changes the task and vice versa, so that we have a design cycle. This approach leads to a design-based theory, such as the concept of direct manipulation, which of course was embodied in real designs long before the emergence of theories trying to explain it.

If we take this alternative approach, we don't need to conclude that there is no theory in HCI. Instead we can take artifacts seriously, since they embody testable "claims" about usability. Carroll used the peculiar term theory-nexus to refine his view of the usability claims embodied in an interface, because one should not just view an interface as a list of separate usability principles: in a complex artifact, all the usability claims depend on each other. We can view artifacts as the media for demonstrating our understanding of HCI. Therefore artifacts are the appropriate focus for theoretical work in HCI, and since they are how the practice of HCI works anyway, focusing on them could lead to a tighter integration between theory and practice.

Another interesting theoretical development is the steady progress of Clayton Lewis from the University of Colorado on explaining how users make inferences and predictions about interactive systems (alluded to above under the discussion of interface Agents). This progress is somewhat slower than I would like, since I see Lewis' inference principles as a possible practical design aid. But at least the progress is steady. This year, Lewis had done some empirical testing and asked users to choose between various interpretations of a set of dialogues. The result was that people confirmed that several interpretations were possible in theory, but that they preferred the interpretation predicted by Lewis' inference heuristics. So it does seem reasonable to try to design systems which follow the principles: we now have a formalization of the principle of least astonishment.

One concrete result from Lewis' study was that aliasing (having several names for the same thing) makes it harder for users to generalize, since aliases violate one of his heuristics. Therefore the mere presence of an aliasing possibility in an interface would have a larger impact on the usability of the entire interface than would be indicated by a rule-oriented analysis, where the multiple names would only impact the rules for those parts of the interface where they were actually used.

Things I Missed

You always miss some things at conferences, but CHI conferences are worse than most in this regard. And CHI'89 was so overloaded with interesting events that choosing what to do at any given time was as frustrating as selecting the main course at a restaurant with three stars in the Michelin Guide: there is so much you have to miss. I worked every day during the conference, from various committee and journal editorial board breakfasts at 7:30 in the morning to in-house videos after midnight, but still missed a lot. The conference video tapes were shown 24 hours a day on the hotel TV system, but I never got to see them all anyway. I had hoped to catch up on the videos after the closing session of the conference, but unfortunately the video system was shut down by mistake as soon as the conference was officially closed. The original plan was to keep playing the videos on the hotel system for about one more day, since many people do not leave the moment the conference closes. Actually, future conferences should consider also starting the in-house video a few days before the conference, to the benefit of the many people who arrive early for tutorials, workshops, or other events. I would love to have something intellectual to watch on TV when I wake up at 4 AM the first few days because of the time difference when coming from Europe.

Wendy Mackay, Thomas Malone, and several other authors had a paper with some empirical evidence on how people use the knowledge-based rules in the Information Lens. I have waited anxiously for such a paper since I first heard Malone's presentation of the Information Lens at CHI'85, because I have always been sceptical about the ability of ordinary users to write the expert system-like rules used by the Information Lens to sort incoming electronic mail. Well, the advantage of papers is that it is possible to read them later even if you miss hearing the presentation, so I read this one when I returned home.

I have always liked the Information Lens and wanted to have one for myself (if it had been available for the strange combination of a Macintosh and an IBM VM mainframe I use for email), but I have been less optimistic about its use by people with less programming ability. It turns out from Mackay et al.'s study that people are indeed able to write rules themselves even if they are not programmers. The only qualifying caution is that the study was conducted at an unnamed research laboratory sounding suspiciously like Xerox PARC, which is a collection of people with above-average abilities and interest in new technology, even outside the computer science department.

These users created rules of at least medium complexity, such as "if this message is from person NN and has foobar as its subject, then delete it." Such rules with more than one condition were more common for deletion actions than for classification actions, possibly because it is more dangerous to delete an incoming email message. Sometimes the rules can be pretty strange. One user had a rule which gave higher priority to messages with the string "BITNET" in the from: field because they were normally from overseas colleagues. Since I send my email from the Bitnet, I can only applaud such a rule, but it does give rise to some potentially touchy social issues where the name of the network or domain you are on determines whether you are heard.
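For readers who have not seen the Information Lens, the flavor of such rules is easy to convey in code. The sketch below is a loose Python approximation, assuming a simple dictionary representation of messages; the rule syntax, field names, and actions are invented for illustration and do not reproduce the Lens's actual rule language.

```python
def matches(message, conditions):
    """A rule fires when every (field, substring) condition holds."""
    return all(wanted.lower() in message.get(field, "").lower()
               for field, wanted in conditions)

RULES = [
    # The two-condition deletion rule quoted above ("NN" stands for a name).
    ([("from", "NN"), ("subject", "foobar")], ("delete", None)),
    # The user-written rule giving higher priority to mail arriving via BITNET.
    ([("from", "BITNET")], ("set-priority", "high")),
]

def sort_message(message):
    """Apply the first matching rule; otherwise leave the message in the inbox."""
    for conditions, action in RULES:
        if matches(message, conditions):
            return action
    return ("file", "inbox")

print(sort_message({"from": "colleague@SOMEHOST.BITNET", "subject": "greetings"}))
# -> ('set-priority', 'high')
```

Even this toy version shows why multi-condition rules feel safer for deletion than for classification: each extra condition narrows what an irreversible action can touch.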

One good aspect of the Mackay et al. paper is that it documents the real-life use of a system whose design and underlying concepts have been highly visible in the research literature over the last five years. Another positive aspect is that the study was conducted longitudinally over a period of 18 months. For systems such as advanced electronic mail, it is much more important how people use them over an extended period of time than how they use them initially. It would also be interesting to see how the use changed over time, but that is unfortunately not discussed in the paper.

Conference Ergonomics

I have already mentioned that Claudia Raun's conference program was extremely user friendly. The other traditional gripe is the visual aids used by the speakers, but they were mostly fine. One person did use overheads with what seemed like a 12-point font, and a joint presentation by two speakers zipped through a large pile of slides much too quickly for anybody to read them.

For my own presentation I had prepared an overhead with a 36-point font because of the huge audience in the plenary session. During the conference I wanted to produce an additional foil, but the speakers' prep room did not have a computer system. Luckily a student volunteer helped me check the various administrative offices, and we finally found a system with a laser printer. Now my only problem was that the word processor on that computer would not produce text larger than 24 points. But the hard disk also contained Microsoft Excel, which was able to print large fonts. So my final overhead was produced in a spreadsheet....

The name badges were fine and readable. The lights in the lecture rooms almost always stayed up enough to allow the audience to stay awake and take notes. So it almost seems like the conference ran so smoothly that there is no fun in being a critic. The only bad event was the industrial tour to various presumably interesting companies in Austin. Some companies had set up nice tours of their facilities, but many just kept us in a single lecture room for a standard lecture or video tape about their company. A minimum requirement for an industrial tour must be that the participants are shown some actual computer equipment and laboratory facilities, since there is otherwise no reason to ride a bus for hours. The prize for best tour goes to Ozz Research, a small company working in interactive video, CD-ROMs, and hypertext.

