CSCW'86 Trip Report

by Jakob Nielsen on December 31, 1986

Austin, TX, 3-5 December 1986
 

At CHI'86 there was a general impression that the area of several people working together using computers was starting to become the next hot research topic. At the same time, the European Community is considering changing the focus of the human factors part of the next ESPRIT research program from the so-called human factors-1 (interaction between a single human and a single computer) to human factors-2 (interaction between several humans and several computers).

On the basis of this atmosphere of mounting excitement, I participated in the conference on Computer-Supported Cooperative Work (abbreviated CSCW in the rest of this report) in Austin, Texas, 3-5 December 1986. The conference was sponsored by MCC, the Microelectronics and Computer Technology Corporation, which is a commercial research center established as a collaborative effort (!) by several large US companies to avoid the fate of being overtaken by the Japanese Fifth Generation project.

Unfortunately MCC had not organized an "open house" event as ICOT did at the Fifth Generation conference in Tokyo in 1984. But anyway I got the impression that MCC by now has achieved the status of a dynamic research environment producing novel results. Frankly, I had not been that impressed with MCC's results before attending this conference, but I guess that it simply takes some time for a new research center to get up to speed (Rome was not built in a day...).

The 293 conference participants came mostly from the United States, even though there was also significant participation from other parts of the world (especially from Europe). The European participants were mainly from Scandinavia, the United Kingdom, and Italy, while the Asian participants were mainly from Japan. The distribution of participants' affiliations showed a very large number of participants from MCC (the company organizing the conference) and a rather large number from Xerox, while IBM had only a few participants at this conference even though it is normally strongly represented at human factors conferences.

My general impression of the conference was that there was a pervasive "pioneering spirit": Everybody was very excited about participating in this new field. However, it must also be admitted that the conference showed that CSCW is not a very "cultivated" field at the moment. Most results presented at the conference were preliminary in one sense or another. Also, in some sense the field is not that new: As the program chair, Irene Greif, noted in her opening remarks, Doug Engelbart's augmentation project 20 years ago had included a lot of early work on cooperative work. Some of his inventions, such as windowing systems and the mouse, had "made it" out to the rest of the world, while other ideas, such as the computerized conference room, had not.

In many cases, people had built neat systems and showed very impressive videotapes, but they did not know whether their systems in fact supported (i.e. helped/facilitated) cooperative work, as the systems had not been used for such work (or at best had been used by the systems' authors themselves in their own project meetings). In other cases, people did present empirical studies of real subjects working together, but it was often difficult to relate those results to how potential new computer systems could improve the situation.

I do not intend to sound negative: There was a lot of interesting and good work presented; it was just not conclusive, which is a very natural situation at the first big conference in a new field.

The next CSCW conference is planned for September 1988 with Irene Greif from MIT as conference chair and Lucy Suchman from Xerox as program chair. One experienced conference organizer told me that she feared that the second conference would be at the opposite extreme from this conference: A lot of (too) detailed empirical evidence, but no excitement about improving the world. And then one might hope for a synthesis at the third conference.

Definition of CSCW

In his opening remarks, the conference chair, Herb Krasner from MCC, gave a brief overview of the spectrum of different types of CSCW:

  • Different kinds of work being performed by the group:
    • authoring
    • research
    • design
    • office work
  • Different modes of cooperation:
    • size of group
    • organization of group
    • level of synergy (are they just sharing interests or working on the same project?)
    • communication patterns
    • time (real-time or time-delayed communication)
    • space constraints on interaction between group members (there is e.g. a great difference between supporting people who are physically present in the same room and supporting people in different locations).
  • Types of computer support:
    • passive/active
    • intelligence
    • level of aid offered

Many different types of computer systems were talked about at the conference:

  • Simple electronic mail, perhaps augmented by some kind of structure (e.g. The Coordinator) or filter to avoid information overload (e.g. Malone's Information Lens).
  • Computer conferencing systems.
  • System prototyping tools to help designers collaborate (e.g. Trillium from Xerox).
  • Electronic meeting rooms to support face-to-face meetings of different kinds, e.g. brainstorming sessions (Colab from Xerox), project review meetings (NICK from MCC), or group decision making (Plexsys from Harvard).
  • Construction/authoring of a coherent set of overheads for a talk (Cognoter from Xerox).
  • Information sharing among research collaborators (NoteCards from Xerox).

When Is CSCW No Good?

We should of course realize that computer support is not necessarily always good for collaboration. Ted Nelson claimed that a good feature of non-bureaucratic organizations is deniability for everyone: You are not controlled too much; i.e., the computer does not record everything. He asked whether we risked creating just the kind of non-deniable organizations in which we as scientists would not want to work.

Julian Orr from Xerox PARC told of his studies of "narratives at work": It turns out that Xerox repair technicians tell each other a lot of "war stories" in the form of anecdotes of difficult repair situations. This has been discouraged as a waste of time by some managers, but these anecdotes are actually useful for the other techs who take these experiences and try to apply them to solving their present problem. The point is that having a casual attitude is important to telling war stories, so that it is doubtful whether one could introduce a computerized system to support the distribution and use of these anecdotes.

Robert Kraut from Bellcore had studied how scientists collaborate. His conclusion was that the really hard problems are the interpersonal relationships, and it is not clear that computers can help here. Scientists working together really have two major goals: 1) Accomplishing what they set out to do. 2) Keeping a good relationship. In some cases these collaborations take the form of partnerships lasting over 30 years. The central problem (and the major source of complaints in 50 interviews with scientists) is equitable division of labor.

The Computerized Meeting

Many different systems for computer-filled meeting rooms were presented at the conference. In most cases there was a workstation for each meeting participant, and in some cases there was also a wall-sized "electronic blackboard" projection screen. The problem with these electronic meeting rooms was that they ended up looking like old-fashioned university terminal rooms instead of meeting rooms. For instance, it would often be the case that the physical size of a Lisp machine would make it impossible or inconvenient to look other meeting participants in the eye.

Actually one of the conference participants came from one of the major U.S. manufacturers of office furniture. I talked with him during one of the breaks and learned that they are trying to design furniture and computers to fit each other so that the end result would be ergonomic environments (and not just ergonomic individual components). Maybe they will succeed but at the moment you have to be a computer freak to enjoy taking part in a meeting in the electronic meeting rooms.

Project NICK from MCC was presented by Clarence ("Skip") Ellis. They used both personal computers for meeting participants and an electronic blackboard controlled by a special person: the facilitator, who is not part of the group having the meeting but who tries to keep the meeting on the right track. The blackboard can show a (dynamically changing) agenda and several special meters: Yes/No meters of participant agreement with proposals and "mood" meters showing whether the audience thinks that the speaker is boring or presenting great stuff, etc. Some of these ideas sound interesting, but I am not sure how I would like to be the speaker greeted with a mood meter reading 100% "boring."

There was no single paper explicitly describing the Xerox PARC Colab, but several papers referred to this exciting environment. It is an experimental meeting room designed for small working groups of two to six people using Xerox workstations connected over a network. Colab is intended to bring the (hoped-for) benefits of office automation to people when they are in meetings away from their offices (indeed, office workers may spend 30-70% of their time in meetings).

Gregg Foster talked about the Cognoter system, which is one of the tools available at the Colab. It is a meeting tool for structuring and preparing presentations (i.e. talks to other people than the meeting participants) using a 3-phase brainstorm model:

  1. Uncritical generation of ideas for things to include in the presentation
  2. Organization of the ideas into a linear order
  3. Pruning the ideas and transferring them to text

This kind of tool may of course also be used as an idea processor by a single person, and it had actually been used more as such than as a meeting tool.

Foster related some experience from the use of Cognoter as a collaboration tool: There seemed to be an ebb and flow of parallelism (5-10 minute cycles) where people sometimes worked together on the same information and sometimes worked individually on refining different pieces of the total picture. They had the problem of cluttered screens when several people contended for the same "real estate," and they also had to deal with the traditional concurrency problem of multiple accesses to the same data: They simply locked objects with a busy signal. This was not a problem, as the grain size of the objects of information was small and each interaction tended to be short.
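As a rough illustration of this locking scheme (a minimal sketch of my own, not the actual Cognoter implementation; all names are invented), a per-object lock that answers with a busy signal instead of blocking might look like this:

```
import threading

class SharedItem:
    """One small piece of meeting information (e.g. one idea in a brainstorm)."""

    def __init__(self, text):
        self.text = text
        self._lock = threading.Lock()
        self.locked_by = None

    def try_edit(self, user, new_text):
        # Non-blocking acquire: if someone else holds the item, the caller
        # immediately gets a "busy" signal instead of waiting.
        if not self._lock.acquire(blocking=False):
            return f"busy (locked by {self.locked_by})"
        try:
            self.locked_by = user
            self.text = new_text      # the edit itself is short...
            return "ok"
        finally:
            self.locked_by = None
            self._lock.release()      # ...so the lock is released almost at once

item = SharedItem("idea: cheap standard hardware")
print(item.try_edit("gregg", "idea: cheap, standard hardware"))   # prints "ok"
```

Because each locked object is small and each interaction brief, a contending user rarely sees the busy signal for more than a moment.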

They had performed an experiment comparing generation of presentations on paper and using Cognoter. The results were judged to be about equal in quality but the subjects using Cognoter had had to spend most of their time just learning the system and there is thus hope that Cognoter could result in actual improvements for more experienced users.

An interesting side effect of the Colab was that it has a video switch allowing any display to be slaved to another so that it will show exactly the same image. This facility was put in to enable demonstrations, but it turned out that it was actually used by the users of Cognoter to look over each other's shoulders.

The situation of users seeing exactly the same as other users in Colab was discussed in more detail by Mark Stefik in a paper on the so-called WYSIWIS principle (= What You See Is What I See; though a member of the audience commented: Nobody ever sees exactly what I see. It is not just a question of the pixels displayed but also of how you look at them). They started out having this feature but gradually found that it was not so desirable. Stefik defined strict WYSIWIS as having all views identical, including having all displays show all cursors. But it turned out to be too distracting with all the cursors zipping around, so they went to tele-pointing instead: A user could request a special large tele-cursor which would then (and only then) be displayed on the other screens.

And again they found the real estate problem when a display of the same size had to be shared between several users at the same time. So instead they had tended to move to systems where they relaxed WYSIWIS to apply only to individual windows (and not to entire display screens). The new principle would be that if two people were seeing the same window (defined in some way), then they would also be seeing the same window contents. In addition to these "public" windows, the system also included "private" windows, like notepads etc., which were not shared among users.
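A minimal sketch of what window-level (rather than screen-level) WYSIWIS might look like, assuming a simple model in which each public window broadcasts its contents to whoever has it open (class and method names are my own, not Colab's):

```
class Display:
    """One participant's screen; private windows (notepads etc.) live only here."""
    def __init__(self, owner):
        self.owner = owner
        self.private_notes = []        # never broadcast to anyone else

    def render(self, window_name, contents):
        print(f"[{self.owner}] {window_name}: {contents}")

class SharedWindow:
    """A 'public' window under relaxed WYSIWIS: everyone who has it open
    sees the same contents, regardless of where it sits on their screen."""
    def __init__(self, name):
        self.name = name
        self.contents = []
        self.viewers = set()           # displays currently showing this window

    def open_on(self, display):
        self.viewers.add(display)
        display.render(self.name, self.contents)

    def update(self, new_line):
        self.contents.append(new_line)
        for display in self.viewers:   # only viewers of this window are refreshed
            display.render(self.name, self.contents)

agenda = SharedWindow("agenda")
for d in (Display("stefik"), Display("foster")):
    agenda.open_on(d)
agenda.update("1. demo of tele-pointers")   # both viewers now see the same contents
```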

Two ways of solving the screen real estate problem with the new relaxed WYSIWIS are:

Stampsheets
 
Some windows are shrunk to icons ("stamps"). Only the windows in which the individual user has an interest at any given point in time are expanded to full size. Subgroups of users will then be implicit in the set of windows open at any one time. People who have approximately the same windows open will belong to the same subgroup since they will tend to be working together on the same problem.
Rooms
 
A room is a kind of "super-window" that takes up the entire display and contains a collection of ordinary windows. The idea is that you organize different rooms for different purposes (just like the rooms in a building) and then have "doors" (icons) in each room leading to other rooms. Subgroups are formed by having one room for each subgroup (since a user can only have one room displayed at any one time, the room displayed defines the subgroup to which this user belongs and the room approach tends to foster firmer subgroups than the stampsheet approach).

Long-Distance Coffee Breaks

The systems for computerized meetings described above might be helpful in the situation where people are able to get together physically in the same location. But another approach is needed in the situation where the group is distributed across several geographical locations.

Xerox has established a second site (PARC Northwest) of the PARC laboratory in Portland, Oregon, which is more than 1,000 kilometers from the main PARC site in Palo Alto, California. George Goodman explained how they are trying to make sure that the NW people remain in tight collaboration with the PA people. First of all, they have established powerful electronic links between the two sites: audio, video, and an Ethernet extension, all running 24 hours/day. Video cameras and monitors are present in all offices (of course there has to be some convention of "knocking on the door" before establishing a video link to another office!). They have also tried to make the two sites look the same in the use of furniture etc.

The most interesting aspect of this video link is perhaps that they keep a constant video link open between a "common area" at each site. This means that people who just drop by to get a cup of coffee will see those of their colleagues who happen to be doing the same thing at the opposite site. And this will again encourage informal conversations which would not occur if the same people had had to establish an explicit video link between their respective offices. Ben Shneiderman commented that he was looking forward to systems offering other kinds of cooperative experiences besides just work, such as cooperative entertainment and play. Some such enjoyable opportunities have been available for a long time on e.g. the PLATO system, where people in different locations can participate in the same game in real time. Shneiderman said that about 10% of the time on PLATO had been spent playing the cooperative game Empire.

The use of the video link has mostly been for brief conversations: 70% of all interactions lasted less than 5 minutes and 20% lasted between 5 and 30 minutes. The users do get a feeling of physical presence ("as if people were down the hall") because of the system but they still think that the overhead of starting a communication across sites is too high.

Even though there is a long distance between Portland and Palo Alto, it is a North-South one, so the two cities are still in the same time zone. Now that Xerox is starting a EuroPARC laboratory in Cambridge, England under the leadership of Tom Moran, they will face much greater difficulties in establishing the same kind of collaboration because of the 8-hour time difference between California and Britain.

(Note added 1988: In fact, PARC Northwest was closed shortly before the CSCW'88 conference.)

The Low Road

Ben Shneiderman characterized most of the systems presented at the conference as belonging to a "high road" approach requiring specialized, expensive equipment (the Xerox PARC system described above for video links between Palo Alto and Portland had a set-up cost of $60,000 per site for video compression and an additional $1,000 per office; after that, renting the communication line costs $3,000 each month). But some of the systems belonged to the "low road" approach, running on cheap standard hardware and software. Many electronic mail systems belonged to this class.

One really low road approach was presented by Tony Fanning and Bert Raphael from Hewlett-Packard. They had 2,000 people using a computer conferencing system via text-only terminals and phone-based X.25 access. One reason for not going to a fancier system is that the pre-existing environment is hard to move at HP: They have 80,000 people at more than 100 organizational units. So instead of trying to move the mountain, they had chosen the simple goal of just getting people together to talk. One problem they hoped to alleviate was the fairly standard situation of a "council" (a working group in charge of some problem): Normally it will meet once a month, with several people flying in from far away for a few hours of meetings, and nothing much getting done at the individual meetings.

So they tried to introduce an established conferencing system, CONFER, developed at the University of Michigan. CONFER had been used successfully in many situations:

  • Task force efforts to achieve a specific goal - discussed on the system.
  • Establishing contact between "lonely" people in common disciplines at scattered sites (e.g. 15 computer center managers at HP sites in East Asia had a conference). This gave rise to ongoing conferences rather than the single-goal conferences of the task forces.
  • Broad discussions of e.g. tips for using PCs. The problem here was to entice people to log on every day (often people logged on for the "entertainment value" of seeing what X had replied to Y's comment the day before). It was also a question of reaching a critical mass of conference participants to keep the discussion moving; 20 active participants seemed to be the minimum.

The primary problem was that of access: Modems etc. were simply too difficult (and if this is the case in a technologically advanced company like HP, imagine what the situation must be in other places). Conferences were also not very successful if the participants had been selected by someone else instead of volunteering.

Bonnie Johnson from Aetna Insurance told of yet another case of actual field use in a commercial company. In this case it was The Coordinator, which is a commercially available product. The basic idea was that communication constitutes collaboration, and that the way to support collaboration is therefore to support communication. The Coordinator is really an electronic mail system with some added features. The sender of a message can e.g. set a date for the expected reply to the message or for completion of the action described in the message, and the system will then automatically keep track of these dates.

It seemed that The Coordinator was not really helpful in the idea generation phase ("it is a pain to type in all your ideas"). It helped somewhat in the evaluation phase for those people who had also used the system to generate the ideas being evaluated, but it did not help those people who had talked face-to-face when generating the ideas and only afterwards typed in a summary. And the system was very helpful in the action phase (where the ideas are carried out), probably because project management is built into the messages.
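As a tiny sketch of the commitment-tracking idea described above (not The Coordinator's actual design; the field names and dates here are invented), one could imagine messages that carry an explicit respond-by date which the system, rather than the users, keeps track of:

```
from dataclasses import dataclass
from datetime import date

@dataclass
class Request:
    """A message that carries an explicit respond-by date (names invented)."""
    sender: str
    recipient: str
    text: str
    respond_by: date
    completed: bool = False

def overdue(messages, today):
    # The system, not the users, keeps track of the promised dates.
    return [m for m in messages if not m.completed and m.respond_by < today]

inbox = [
    Request("johnson", "smith", "Review the claims workflow", date(1986, 12, 1)),
    Request("johnson", "lee", "Draft the project summary", date(1987, 1, 15)),
]
for m in overdue(inbox, today=date(1986, 12, 5)):
    print(f"Overdue: {m.text!r} (promised by {m.respond_by}, requested by {m.sender})")
```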

Electronic mail was also the subject of a study by Martha Feldman from the University of Michigan. People have a lot of weak ties to other people with whom they do not work really closely but with whom they could still communicate (e.g., ties connecting groups which are themselves strongly tied internally). She studied a company which had had electronic mail for 11 years (and thus presumably had grown used to it). In a set of 1,249 messages, 574 were between people who did not know each other (and they estimated that 82% of these messages would not have been sent without electronic mail). 155 of the messages were between people who communicated only electronically.

At Rand, their system had by accident kept on collecting the headers of all messages sent in a log file for 1.5 years. So they now had the following data for 69,000 messages: To, From, Copies, and Date/time, but not the subject. For privacy reasons they had recoded all the userids to random IDs (but they had kept information about organizational status, department, and geographical office location, thus probably compromising privacy anyway for a determined analyst). J.D. Eveland discussed a number of results on the basis of this large amount of fairly shallow data. It turned out that 20% of the senders generated 80% of the traffic. Most messages were sent between people who were geographically close to each other, even though you might have thought that electronic mail was especially useful for sending messages across long distances. It also turned out that most messages were sent within project groups and not across project groups. In summary, mail usage tended to follow strong ties and not weak ties.
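A header-only log like this lends itself to very simple analyses. The following is a minimal sketch (with made-up toy data, not the Rand log) of how one could compute the kind of sender concentration reported above:

```
from collections import Counter

def traffic_share_of_top_senders(senders, fraction=0.20):
    """Share of all logged messages generated by the top `fraction` of senders."""
    counts = Counter(senders)                       # messages per sender
    ranked = sorted(counts.values(), reverse=True)  # heaviest senders first
    top_n = max(1, int(len(ranked) * fraction))
    return sum(ranked[:top_n]) / sum(ranked)

# Toy log of sender IDs (already recoded to anonymous numbers, as at Rand).
log = [1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 5, 6, 7, 8, 9, 10]
print(f"Top 20% of senders sent {traffic_share_of_top_senders(log):.0%} of the messages")
```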

Eveland also presented "sociograms" of the closeness of different departments within Rand on the basis of their cross-department mail usage. The specific sociograms are probably mostly of interest to Rand insiders, but the principle of generating them on the basis of an extensive message log is probably of general validity.

Hypertext

Hypertext is the generalized footnote. The reader may have observed that I really like this non-sequential style of writing, and I have actually for a long time been interested in the hypertext concept. A really fine overview of current hypertext thinking and systems was distributed at the conference in the form of a report by Jeff Conklin (it was not presented as a talk and it is not part of the proceedings).

Mayer Schwartz from Tektronix defined hypertext as follows: A document consists of a set of nodes and directed links between these nodes. Each node can be text, graphics, animations, etc. Hypertext is meant to be viewed and accessed interactively on a computer (i.e. a printout is not hypertext), and it should be possible for multiple people to access the document concurrently. He also wanted some kind of version control in the system so that it would be possible to update some nodes and not others and have the system keep track of the status of the overall document (including perhaps keeping some of the old versions of the nodes). Version control would be especially important for software engineering uses of hypertext, as in their Neptune system, which among other things is used to provide source code management. It is also used for a network news browser allowing annotations.
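To make the definition concrete, here is a minimal sketch of that node-and-link model with a crude per-node version history (my own illustration, not Neptune's data model):

```
from dataclasses import dataclass, field

@dataclass
class Node:
    """One hypertext node (text, graphics, ...) with a crude version history."""
    node_id: str
    content: str
    versions: list = field(default_factory=list)    # older contents kept around

    def update(self, new_content):
        self.versions.append(self.content)           # this node changes...
        self.content = new_content                   # ...without touching the others

@dataclass
class Link:
    """A directed link between two nodes."""
    source: str
    target: str
    label: str = ""

class HypertextDocument:
    """A document as a set of nodes plus directed links between them."""
    def __init__(self):
        self.nodes = {}
        self.links = []

    def add_node(self, node):
        self.nodes[node.node_id] = node

    def add_link(self, source, target, label=""):
        self.links.append(Link(source, target, label))

    def neighbors(self, node_id):
        return [l.target for l in self.links if l.source == node_id]

doc = HypertextDocument()
doc.add_node(Node("intro", "Hypertext is the generalized footnote."))
doc.add_node(Node("survey", "Pointer to Conklin's overview report."))
doc.add_link("intro", "survey", label="see also")
doc.nodes["intro"].update("Hypertext is non-sequential writing.")   # old text stays in versions
print(doc.neighbors("intro"))    # ['survey']
```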

NoteCards is a system from Xerox PARC which has mostly been used for individual work. It is a hypertext system in that each "card" may contain pointers to other cards. It is not a browsing system like many other hypertext systems, where the user basically reads existing information. Instead it is an idea structuring system where the user is supposed to generate new cards and link them into an evolving structure. Randy Trigg discussed the (limited) experience with using NoteCards for collaborative work, where two users were generating and editing cards in the same document database. They introduced special history cards which recorded the changes made during a session with the system so that the other user could later see what had been done. They also adopted a convention where each of the two users used a different typeface when entering text into the cards. In other cases, they decided not to have any convention, e.g. for reshaping windows. Other conventions are rather primitive, such as the "message box," which is just a special card they use to post messages to each other.

These kinds of conventions result from procedural activities, i.e. discussions about how to proceed with the work. The two other kinds of activities considered by Trigg were substantive (those that constitute the work itself) and annotative (those about the work, including comments, critiques, and questions). Substantive activities are supported by many systems (even to some degree by a traditional text processor supplemented by printouts or electronic mail). Annotative activities lack support in traditional systems, but more and more systems include some possibility for having annotations kept separate from the text itself (and of course annotations are a basic feature of hypertext systems).

One problem with using NoteCards for cooperation is that it requires commitments too early about how to make up cards, which titles to use for things, etc.

Many of the talks described systems for advanced non-linear structuring of ideas but were presented by a human speaker who of necessity had to linearize them. Norman Meyrowitz from Brown University noted this problem and said that he had actually designed his talk in his hypertext system, Intermedia, and was unsure whether he himself was a good linearizer of it. Intermedia is intended for use in education and research. Typical applications are in English literature, where the system will contain a number of literary works, and in biology, where the data are digitized microscope slides. In addition to the basic data, which is shared among several students and/or researchers, it is possible to impose a set of links called a web on the documents. Several webs may exist for the same underlying data, e.g. one for each student who builds his/her own understanding of the relationships among the data. It should also be possible to start with a web supplied by the professor and then have each student work on refining it.

A problem arises when the underlying data changes (perhaps because the professor inserts new material). What will happen to the links constructed by the students? In some cases they can easily be updated because they link two unchanged parts of the document (there might be some work to do for the computer, but it is at least conceptually easy to determine what must be done). But if the change is made to a so-called anchor, it is more difficult to determine what to do, and Meyrowitz discussed different strategies.
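As a rough sketch of the problem (my own illustration; Intermedia's actual representation of webs and anchors is certainly more sophisticated), one can think of a web as a set of links between anchors, where each anchor is a span of characters in a shared document, and ask which links are endangered by an edit:

```
from dataclasses import dataclass

@dataclass
class Anchor:
    """A span of characters in a shared document that a link end attaches to."""
    doc_id: str
    start: int
    end: int

@dataclass
class Link:
    source: Anchor
    target: Anchor

class Web:
    """One reader's set of links imposed on the shared documents; several webs
    may coexist over the same underlying data (e.g. one per student)."""
    def __init__(self, owner):
        self.owner = owner
        self.links = []

    def add(self, link):
        self.links.append(link)

    def endangered_by_edit(self, doc_id, edit_start, edit_end):
        # Links whose anchors overlap the edited span are in trouble; links
        # between unchanged parts survive (their offsets may still need shifting).
        def hit(a):
            return a.doc_id == doc_id and not (a.end <= edit_start or a.start >= edit_end)
        return [l for l in self.links if hit(l.source) or hit(l.target)]

web = Web(owner="student-1")
web.add(Link(Anchor("hamlet", 120, 160), Anchor("my-notes", 0, 40)))
print(web.endangered_by_edit("hamlet", edit_start=130, edit_end=200))   # source anchor is hit
```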

Shifts in Research Perspective (John Seely Brown)

John Seely Brown who is vice president of advanced research at Xerox PARC gave the invited speech ending the conference. He warned against the trap of building tools that support exactly the collaborative style that you yourself (or your group) like. This would be like the problems with some early AI work trying to automate the researcher's own cognitive style.

Seely Brown stressed the occurrence of watershed events in research (shifts of paradigm/perspective). One such shift was implied at the conference: The shift of focus from personal computing to interpersonal cooperation and computing.

He listed four important shifts in research perspective which had caused us to be aware of totally new points which would have been missed by researchers working from the old perspectives:

  1. User Interfaces: Traditionally one had designed for the absence of trouble by users (sometimes known as "idiot-proof systems"), but now we should design for the management of trouble (i.e. systems where the users can easily correct errors on the fly), because trouble will occur: novice users will always make some mistakes even in the best system.
  2. Office procedures: Originally, procedures were seen as plans (or even recipes) for office work. But now they are seen as descriptions of goals.
  3. Language: First it was seen as description; now it is seen as action, a process.
  4. Narratives: At first they were viewed as anecdotes or "war stories" which people were telling for their own pleasure (and which were a waste of time from a management perspective). But as shown by Julian Orr, they can now be viewed as information carriers having powerful potential for transmitting knowledge in a community.

Seely Brown emphasized that it is difficult for researchers to observe the evidence for a new perspective before they have identified that perspective. For example, researchers who do not think of narratives as information carriers may study how much time repair people spend talking and how it affects their informal relationships. But they will not find out how the stories later help the technicians when they are in the field making repairs.

Other Papers

A large number of additional papers were presented at the conference and I cannot cover them all. Just a few comments on some additional papers:

Fred Lakin from Stanford had measured his graphic agility in his vmacs (visual emacs) system, which is used to generate visual presentations: The slowdown factor from drawing on paper to drawing with vmacs was a factor of six for purely graphic images and a factor of three for images with a high content of text (which is fast to type). The advantage of the computer system is of course that once the images are generated, they are easier to manipulate. He was also testing a gesture recognition system to buy him agility at construction time: Sketch first and then let the system clean up the image later.

Tom Malone from MIT presented the Information Lens, intended to help users of electronic mail cope with information overload. The general principle was to get messages into a semi-structured form (based on templates for different purposes, e.g. for announcing meetings) and then have the system prioritize messages on the basis of a knowledge representation of user preferences. In this system it was possible to address a message "to: anyone" and then have an intelligent (AI) postmaster compare the message to the interest profiles and forward it to the individual users. The problem with this approach is that user groups will have to be able to define their own new templates based on their specific needs and (perhaps more problematically) that individual users will have to be able to write down the production rules that enable the system to prioritize their messages. There are facilities to help users in this process (the user can e.g. see which rules fired on a given message), but I am still not sure that the average business professional would like to perform what really amounts to AI programming and debugging.
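A minimal sketch of the flavor of this approach (the template fields and rules below are invented by me for illustration; they are not the actual Information Lens representation):

```
# A semi-structured message built from a template, plus two user-written rules
# that score it. All field names and rules are invented for illustration.
meeting_template = {"type": "meeting announcement", "topic": "", "date": "", "sender": ""}

def rule_from_my_manager(msg):
    return 10 if msg["sender"] == "my-manager" else 0     # hypothetical preference

def rule_about_lisp(msg):
    return 5 if "lisp" in msg["topic"].lower() else 0

RULES = [rule_from_my_manager, rule_about_lisp]

def priority(msg):
    # Each rule inspects the structured fields and contributes to a score;
    # the user can also be shown which rules fired on a given message.
    fired = [(rule.__name__, rule(msg)) for rule in RULES if rule(msg) > 0]
    return sum(score for _, score in fired), fired

msg = dict(meeting_template, topic="Lisp machine demo", date="1986-12-10", sender="my-manager")
print(priority(msg))   # (15, [('rule_from_my_manager', 10), ('rule_about_lisp', 5)])
```

Writing and debugging such rules is, of course, exactly the small-scale AI programming task that I doubt the average business professional would enjoy.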

One of the last papers presented at the conference was a survey of group decision support systems and is as such recommended for additional overview information and its 119 literature references. It was presented by John King from the University of California, Irvine. The conclusion was that there is a lot more talk than action in this field: There has not been a surge in the number of sites really active in research, and there are essentially no commercially viable systems. But on the other hand, people are still working, and there are "good vibes in the field."

Other Comments

In New York I saw an exhibition at the International Center for Photography of photographs by David Seymour. I did not know this, but it turns out that he was Ben Shneiderman's uncle and a famous photographer in the period around the Second World War. So now we know how Ben got the impetus to become the semi-official photographer of the CHI community.

Anyway, the museum had an installation of Shneiderman's TIES hypertext system explaining the history of Seymour and his photographs. I had seen TIES during my visit to the University of Maryland a few months earlier, and I of course took this unexpected opportunity to observe field use of the system. It seemed that museum-goers in general were able to use the system without too much trouble, but there was one interesting glitch. Instead of operating the computer through a regular keyboard, users faced a special layout with only a very few buttons: the four arrow keys used to select pointers on the display, and an enter key used to perform the selection. The problem was that the legend for the enter key had been worn down by the many users and was illegible. Since the (very minimalist) instructions for the system referred to this enter key by name, it led to some trouble that it was not clearly marked. This fairly trivial practical problem of the durability of the inscription probably meant that the learning time for this otherwise well-crafted system was almost doubled. Once again: Laboratory usability and field usability are not identical.
