Hypertext'87 Trip Report

by Jakob Nielsen on April 1, 1988

Note added 1995: This document was written in 1987 and reflects the state of the art and my thinking at that time. Definitely a pre-Web report! I am making it available again to allow you to reflect on the extent to which the issues facing the WWW are similar to or different from the issues discussed in 1987.


The report was originally published in the ACM SIGCHI Bulletin 19, 4 (April 1988), pp. 27-35.

For more thorough coverage of hypertext, including the work at Hypertext'87 as well as both earlier and later research, please see my book Multimedia and Hypertext: The Internet and Beyond.

Hypertext'87 was the first large-scale meeting devoted to the hypertext concept. Before the workshop, hypertext had been considered a somewhat esoteric concept of interest to a few fanatics only. So just in case some readers don't know what hypertext is, let me start by defining it:

Definition

"Hypertext" is non-sequentially linked pieces of text or other information. If the focus of such a system or document is on non-textual types(1) of information, the term hypermedia is often used instead. In traditional printed documents, practically the only such link supported is the footnote, so hypertext is often referred to as "the generalized footnote."

The things which we can link to or from are called nodes, and the whole system will form a network of nodes interconnected with links. Links may be typed and/or have attributes, and they may be uni- or bi-directional. The user accesses the information in the nodes by navigating the links. Frank Halasz would add to the definition that this navigation should be aided by a structural overview (which, say, HyperCard does not have!).
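
To make the definition concrete for readers who like code, here is a minimal sketch of the node-and-link model (written in Python); the names Node, Link, and Hypertext are invented for this illustration and do not describe the data model of any particular system discussed at the workshop.

    # A minimal sketch of the node-and-link model defined above. The names
    # (Node, Link, Hypertext) are invented for this illustration and are not
    # the data model of any system discussed at the workshop.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        content: str                  # text, or a reference to other media

    @dataclass
    class Link:
        source: str                   # name of the source node
        target: str                   # name of the target node
        link_type: str = "reference"  # links may be typed...
        attributes: dict = field(default_factory=dict)  # ...with attributes
        bidirectional: bool = False   # one- or bi-directional

    class Hypertext:
        """A network of nodes interconnected by links."""
        def __init__(self):
            self.nodes = {}           # node name -> Node
            self.links = []

        def add_node(self, node):
            self.nodes[node.name] = node

        def add_link(self, link):
            self.links.append(link)

        def links_from(self, name):
            """The links a reader may follow from the named node."""
            out = [l for l in self.links if l.source == name]
            out += [l for l in self.links
                    if l.bidirectional and l.target == name]
            return out

    # Navigation: the reader moves through the network by following links.
    ht = Hypertext()
    ht.add_node(Node("body", "...so hypertext is the generalized footnote."))
    ht.add_node(Node("note 1", "For example graphics, sound, or video."))
    ht.add_link(Link("body", "note 1", link_type="footnote", bidirectional=True))
    for link in ht.links_from("body"):
        print(link.source, "--", link.link_type, "-->", link.target)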

Hypertext has seen considerable interest recently because of some new commercial hypertext products such as Guide from OWL and HyperCard from Apple, running on popular personal computers such as the Macintosh and the IBM PC. When the workshop was originally planned in the fall of 1986, the planners were not sure whether they would be able to get a large enough set of papers and participants. But in the end, 500 people competed for the 200 seats at the workshop. At the end of the workshop, Ted Nelson compared it to the first SIGGRAPH conference on interactive computer graphics: that conference had also started with a small and cosy feel but has now grown into a gigantic event with 30,000 people. Nelson felt that hypertext would also grow to be an industry.

Classifying Hypertext Systems (Frank Halasz)

Frank Halasz from MCC gave the last talk at the workshop. He and the organizing committee should be criticized for not splitting it into both the first talk and the last talk: Part of the talk was a very good survey of what hypertext really is and a classification of current systems. This material could easily have filled a whole talk but was presented with such speed that it left the audience breathless. It would also have made a good platform for the discussions during the conference if it had been presented at the beginning instead of at the end.

Hypertext systems can be divided into, on the one hand, the "original" generation of Memex [Vannevar Bush], NLS/Augment [Engelbart], Xanadu [Ted Nelson], etc., and, on the other hand, the "current" generation consisting of e.g.

  • Research systems: Intermedia [Brown University], NoteCards [Xerox]
  • PC Product systems: Guide [OWL], HyperCard [Apple]
  • Workstation products: Document Examiner [Symbolics].

Further dimensions for classifying hypertext systems are:

  • Scope of the user target
    • Single user: Guide, HyperCard, the original NoteCards
    • Work group/CSCW: Intermedia, NoteCards recent version
    • Corporate division: Augment, ZOG [Carnegie Mellon]
    • Whole World: Xanadu
  • Browsing vs. authoring
    • Focus on information presentation: Document Examiner, Hyperties [Shneiderman]
    • Focus on network creation and manipulation: Augment, Neptune [Tektronix], NoteCards
  • Task specificity
    • General: Guide, HyperCard, Neptune, ZOG
    • Some task inclination: Augment, NoteCards, Intermedia
    • Task specific: Document Examiner

The second part of Halasz' talk was also filled with enough material to be worth a full talk. It discussed the unresolved issues in hypertext research. This was a good way to end the workshop with a look towards the future. Indeed, several people had commented that we would soon stop having special hypertext conferences as it would cease to be an interesting problem in itself. This is the traditional argument against computer science: "But we don't have a typewriter science or international conferences on pencil design, do we?" Halasz showed that there are enough unresolved problems to keep research going for some time yet.

Halasz' seven issues were:

  1. How to do search and query. Basically, hypertext is navigation, and the current methods for navigating the information space work fine for small spaces of up to about 2000 nodes. But navigation breaks down as a search method in large or unfamiliar networks. It is not good enough if you have to click on 500 links to get at what you want, and the risk of going down the garden path grows too large. The solution should be some kind of query-based retrieval of nodes to complement navigation. We would like to have a content search where each node or link acts as an entity. A simple database model would just do queries on the basis of the content of the entities without taking advantage of the network structure of the hypertext, while a more advanced structure search would do so (see the sketch after this list).
  2. Composition of the basic nodes and links into higher structures. Halasz viewed this as the missing concept in current hypertext systems. We have no mechanisms for treating structures of nodes and links (subnets) as structures (2) in their own right. Many systems do add ad hoc mechanisms, such as the file boxes in NoteCards which may be used for hierarchically nesting nodes, but the real solution would be to add composition to the basic hypertext model on a par with nodes and links. An issue in doing so would be whether a node could be included in several composites and, if so, whether links to that node refer to the node per se or to the node as located in a specific composite.
  3. Virtual structures are a concept intended to deal with the fact that the users' conceptual structures tend to change faster than their hypertext representations. Current experience with NoteCards suggests the presence of the premature organization problem, where users organize their knowledge in hypertext structures on the basis of an early understanding of the problem. When their view of the domain changes, it involves a lot of work to reorganize the hypertext structures. Instead, virtual structures would allow nodes, links, and composites to be specified by descriptions called by name (instead of being hardwired). It would then be possible to change the definitions of those descriptions and have the structure of the hypertext change accordingly. Some support for this concept already exists, e.g. in commands such as "return to previous node," which are virtual in that their actual meaning changes according to the user's current position in the information space.
  4. Using computation to change hypertext networks from being passive to being active. Current systems do not actively process the information included in them to guide the user's navigation. Examples which do exist include the use of NoteCards together with a computational engine to drive a computer-aided-instruction system and the HyperTalk language in HyperCard.
  5. Versioning. Some hypertext systems such as Neptune and Intermedia do include some support for different versions of their information space. This is of course especially important in the use of hypertext to support software development but it could also see more use in traditional hypertext applications.
  6. Support of collaborative work (CSCW), which is the topic of a separate series of Computer-Supported Cooperative Work conferences. (3) Most current hypertext systems are inherently single-user applications. In the case of concurrent multi-user access to the same hypertext network, we would face the traditional database problems of locking and notification as well as more human factors issues, such as maintaining mutual intelligibility of a network being changed by several users.
  7. Extensibility and tailorability. Current hypertext systems tend to be suitable for a given range of tasks and style of use, and to require additional programming and considerable expertise for any other use. Solutions could include being able to create new behaviors by example, but in general it is not clear how we can provide users with the option to change user interfaces to their taste and needs.
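
As promised under issue 1, here is a small sketch of the difference between a content search and a structure search, reusing the Node/Link/Hypertext classes (and the small network ht) from the earlier sketch; the function names and the particular query forms are my own invention for illustration, not anything Halasz described.

    # A sketch of the difference between content search and structure search,
    # reusing the Node/Link/Hypertext classes and the network 'ht' built in
    # the earlier sketch. The function names and query forms are invented for
    # illustration; they are not from any system Halasz described.

    def content_search(ht, predicate):
        """Database-style query: test each node's content in isolation,
        ignoring the network structure of the hypertext."""
        return [name for name, node in ht.nodes.items() if predicate(node)]

    def structure_search(ht, predicate, link_type):
        """Structure search: the query also constrains the network, e.g.
        'matching nodes that are reachable over a link of the given type'."""
        return [link.target for link in ht.links
                if link.link_type == link_type
                and predicate(ht.nodes[link.target])]

    mentions_graphics = lambda node: "graphics" in node.content
    print(content_search(ht, mentions_graphics))                # content only
    print(structure_search(ht, mentions_graphics, "footnote"))  # content + links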

In summary, Halasz felt that items 1-3 on his list were the most important. In the long term he would assume that hypertext would become a standard inside computer systems and that all information in computers would become network/link based.

 

Is Text Data? (Andy van Dam)

Andy van Dam (Brown University) gave the opening speech, presenting his perspective on hypertext from a historical point of view. He and his colleagues had invented the first hypertext system at Brown in 1967 -- i.e. 20 years(4) ago. The University of North Carolina had been the first place outside Brown itself to use the system, so it was fitting to have this workshop at UNC. First, van Dam wanted to acknowledge two trailblazers in the hypertext field, Doug Engelbart and Ted Nelson (see the next sections). Engelbart invented outlining/idea processing and did office automation before the word even existed, while Nelson coined the word "hypertext" itself and tried to put some early discipline into the links and associations in hypertext with his concept of gradually expanding stretchtext.

The original hypertext editing system ran in 128 K on a machine timeshared with other users -- a computer which was slower than a Macintosh. The original sponsor was IBM, but van Dam and his colleagues could not tell IBM at the time that they were using the expensive graphics terminals just for displaying words. Later IBM did accept the idea of displaying words interactively on a graphics terminal and sold the system to NASA, which used it for writing documentation for Apollo. (5) In the late 1960s van Dam et al. also made the rounds to magazines such as Time to try to get them interested in hypertext systems, but it was beyond their imagination.

A later project at Brown was FRESS, which was intended to be device independent with respect to I/O media and not to have size limitations on anything.(6) Additionally, the size of something should not impact its performance. The system included bidirectional links: Not just goto, but also come from. To the best of van Dam's knowledge, FRESS was also the first system to have undo. van Dam really wanted unlimited undo, since one level is not enough (but it is a lot better than zero). For some time, they worked on text evolution: Recording all the changes to all parts of the text so that the complete editing history would be documented. But the complexity of doing so made them drop it.

During this period, van Dam had some serious arguments with the vice president of the university about whether their programs should be allowed on the computer. If they were there, people would use them, which according to the VP would subvert the real purpose of computing: to produce numbers, not words. One more example of the limited foresight of people in charge. You can always get research grants to work on yesterday's problems and sometimes even today's problems. But visionaries have a hard time,(7) as also pointed out by Engelbart in his talk.

As a comment to this, Jef Raskin, who used to be at Apple, said that he had wanted to include lower case letters in the design of the Apple II but had been told that "we only need upper case on personal computers as they will only be used for playing games and writing BASIC programs."

In spite of his problems, van Dam continued his research on online text systems and at some time had his system used for a poetry(8) course. The students would sign up for an hour on the one and only graphics workstation to read poems and write their own interpretations, comments, and annotations. Afterwards the students would use the common database of comments to compare notes and end up with a richer structure than their individual work.

On the basis of his extensive work on online text systems and hypertext over 20 years, van Dam presented a list of 8 key areas to look more closely at:

  1. The glue (infrastructure) in hypertext may be good, but what should be at the nodes? One could have not just text but e.g. three-dimensional animation under user control. This is wide open.
  2. The "docuverse" [a Ted Nelson-word] is the most interesting, but we are building "docu-islands" in the form of isolated, not cross-linked information structures. Systems are closed, incompatible, and without the possibility for data transfer. Instead linking information between systems should be part of an open system conforming to a standard.(9)
  3. Since only toy systems have been built, we do not know if the hypertext principles currently in use will scale up to handle real world documentation systems. As an example, the number of pages needed to document a fighter aircraft has grown practically exponentially from one war to the next as shown in the following graph:

     

     [Graph: number of pages of documentation per generation of fighter aircraft]

  4. The navigation problem of getting lost(10) in hyperspace. Hypertext gives us a goto link which we know from software engineering gives "spaghetti." van Dam noted that it could be that we have also discovered the equivalent of if-then-else in the form of hierarchies, but we also need new forms of flow of control in structures that users recognize.
  5. We also need a notion of semantics of nodes instead of just pure syntax, so the system needs to know what is in it (AI needed).
  6. We would want wall-sized displays as well as portable displays.
  7. We need to handle the socio-economical problems such as intellectual property rights. At the moment traditional copyright laws(11) don't know how to handle electronic documents which may partly include or reference other documents, and the politicians certainly don't know either, so there is no hope of an early solution. Another problem, mentioned later by Ted Nelson, is that the freedom to publish hypertext may mean that we lose current distinctions between more and less important published papers unless we define certain hypertext structures as e.g. an "official" version of some prestigious journal.

The Father of HyperText (Ted Nelson)

The coiner of the very word "hypertext," Ted Nelson from Xanadu gave a somewhat rambling talk on some of his hypertext ideas. It was illustrated with a colorful collection of slides from previous talks -- some going back to 1970. He wanted to see computers used for the generalized handling of ideas and said that the way personal computing is currently happening is plain wrong. People are drowning in small files with incomprehensible names and incompatible software.

When writing with current computers, we need to do "double maintenance" to keep the printouts and file consistent when making changes to one or the other. The Macintosh is just what Nelson calls a paper simulator and not an online solution. Instead we need to build a world where we can share paper-less information in the same way we can share paper: Books are always compatible!

Nelson related how he became interested in hypertext: As a student he became a serious note-taker and took notes all the time. He wanted to be able to see the original context of the note as well as its current use side by side, so he started hypertext as a student project and has been working on it ever since. In 1967 he gave his system the name Xanadu and by now it is actually working (we saw a demo at the conference).

By now, the real difference between Nelson and most other hypertext proponents is that he still argues for the universal hypertext which is to contain all the literature in the world with interlinked references. To do this, he has invented an addressing scheme called tumblers which has the potential to give a unique address to every byte of every document in the world. Of course, such an open, universal hypertext system should expect to accumulate 100 Mbytes of info every hour, and this may seem unrealistic at the present moment. But Nelson reminded us that it had also seemed unrealistic to have several hundred million telephones all over the world, all able to call each other.
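
To give a flavor of how every byte of every document could get its own address, here is a toy sketch of hierarchical addressing in the spirit of tumblers; the dotted server.user.document.byte format below is invented for this illustration and is not Xanadu's actual tumbler notation.

    # A toy sketch of hierarchical byte addressing in the spirit of tumblers:
    # every byte of every document gets its own address, and a quotation can
    # be made by reference to a span of addresses rather than by copying.
    # The dotted server.user.document.byte format is invented for this
    # illustration and is not Xanadu's actual tumbler notation.
    from dataclasses import dataclass

    @dataclass(frozen=True, order=True)
    class Address:
        server: int
        user: int
        document: int
        byte: int

        def __str__(self):
            return f"{self.server}.{self.user}.{self.document}.{self.byte}"

    @dataclass(frozen=True)
    class Span:
        """A quotation by reference: the bytes from start up to end."""
        start: Address
        end: Address

    quote = Span(Address(1, 42, 7, 100), Address(1, 42, 7, 220))
    print(f"quotation = bytes {quote.start} .. {quote.end}")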

Pioneers have Arrows in their Backs (Doug Engelbart)

Doug Engelbart, who is now with the McDonnell Douglas Corp., did a lot of pioneering work back in the 1960s: He invented the mouse and developed a lot of word processing and hypertext concepts in his Augment system. The name Augment comes from the goal of the project, which was to augment human capabilities. Many computer systems are fairly unsophisticated in augmenting human capabilities (Engelbart mentioned Apple as being in that category) but they have large markets. On the other hand, Engelbart wanted to develop a system which was very sophisticated and which would significantly augment the capabilities of its users.

This augmentation process takes place in two systems, according to Engelbart: The human system of culture, organizations, work procedures, skills, knowledge, and training on the one hand, and the tool system of media, views, manipulation, retrieval, computation, and communication of data on the other. Most designers aim at improving the tool system to automate an unchanged human system. But to truly augment human capabilities, Engelbart felt that we also needed to change the human system to reflect the new tools.

The Augment system [originally called NLS(12)] is still being used and now has about 100,000 articles stored online after 17 years of use by a total of about 2000 people. It runs on a group of DEC-20's but they have not gotten the funding to move it to more modern hardware. Engelbart gave a demo of Augment using a PC linked to the system via a modem and ran a small conference with Jim Norton who was sitting in his office in Montreal. This was a fairly nice demo, but the really striking aspect of the demo was that it was very similar to the breakthrough demo given by Engelbart at the Fall Joint Computer Conference in 1968. Engelbart was one of the most brilliant pioneers the computer field has seen, but his fate apparently has been that of getting arrows in his back in the form of reduced funding just as he had shown the feasibility of a system which was truly many years ahead of its time.

Practical Experience with Hypertext

Not much practical experience exists at the moment. Most results presented under this heading really came from studies of research systems and not from actual everyday field use outside of research labs and university settings. But one of my mottos is that "data is data" (knowing something is better than knowing nothing). Even so, there is a real need for more empirical studies of how hypertext is really used and what the usability issues are.

Randy Trigg and Peggy Irish from Xerox PARC discussed the experiences of writers using NoteCards. They videotaped 20 people writing in NoteCards in a naturalistic setting. One thing they noted was that many writers felt it necessary to move out of NoteCards at some point; e.g. a Master's student had to get his thesis out as a plain text file to submit to the university. But of course, most of the time they observed people actually using the system in different ways. In many cases, writers had fairly unstructured notes, sometimes in the form of "whiteboard" cards holding notes they otherwise did not know where to put. When organizing the material, writers sometimes used multiple overlapping organizations of the same notes; e.g. one organized according to the subjects covered and one organized according to the structure of the final paper being written.

References and bibliographies (one of the prime targets for hypertexting) were done in several different ways. Some authors just had all their references lumped together, and others constructed elaborate structures. One reason for this is the different number of references(13) used by different authors. Finally, it was noted that the writers needed a way to modify the contents of some final document and then automatically have the source cards updated (a reverse hot link).

Davida Charney from Pennsylvania State University was planning a study of the reading strategies used by hypertext readers. These readers face the problem of the loss of discourse cues. Traditional text contains many such cues, ranging from genres (e.g. research paper vs. science fiction novel) over text-level schemas (e.g. the division of a research report into introduction, methods, results, conclusion, and references) to sequencing ("there are three reasons for..., 1..., 2..., 3..."), paragraphing, and cohesive ties ("on the contrary..." etc.) showing how the previous text relates to the next.

These cues are lost(14) when moving to a hypertext system which drops the reader in the middle of a new node in the same way no matter which node was the previous one. Also, in hypertext the burden of deciding when to read what has been moved from the writer to the reader even though structuring the material is one of the most important functions of an author.

On the basis of knowledge of how people read traditional texts, Charney conjectured that domain knowledge would have an important impact on reading performance in hypertext. Novice readers can be misled by superficial relationships and may stop reading too soon (before they have found all the necessary information).

Charney suggested the following ways in which hypertext designers may help readers: Design reading strategies which 1) depend on or reveal the underlying structures of the information space, 2) depend on precedence, and 3) have repetition or consistent patterns.

Janet Walker from Symbolics presented the Document Examiner which is one of the few hypertext systems to see real commercial use. It is used for the online manuals for the Symbolics workstation and had two goals: To be a better tool for the technical writers and to allow easier access for customers. They set out to solve those two practical problems and not specifically to build a hypertext system as such even though that is what they ended up having.

Also, readers should not have to work as hard as writers, so they designed two separate user interfaces to the system. The one for the writers is called Concordia, while the Document Examiner is the readers' interface. Currently the system has 10,000 nodes and 23,000 explicit links between nodes as well as 100,000 implicit links. Each node has an average size of 1 K, so the total system has 10 M of info. (15) The information was modularized by analyzing the information needs of users: If users might need or want to look up some specific issue, then a module was made for that information.

The goals in the design of the user interface were as follows: 1) Keep the user's model of what is in the system as simple as possible, so don't use a network based navigation model. 2) Learn from how people use paper-based documentation and provide the same capabilities. 3) In addition, exploit what is unique about the computer.

Good things about paper-based documentation are:

  • One can turn directly to a known location.
  • One can look up things in the index (but there is a naming problem when many things have the same name -- e.g., "introduction" is no good if there are 1000 of them).
  • Bookmarks may be put in.
  • Annotations(16) are possible and present at the same location as the original material, yet are noticeably different.
  • One can find material by position.

On the other hand, computers can do the following things which paper cannot:

  • Full indexing of everything.
  • Dynamic cross-referencing.
  • Different views of the same info.
  • One can quickly find out that something is not there, by asking for it and getting a nil answer.

The Document Examiner project started in 1982 and the system was first shipped to users in April 1985. The local engineering staff at Symbolics by now prefer the system over the paper version: About half of their engineers had not even taken the new manuals out of the shrink-wrap. I asked whether they had considered getting rid of the printed manuals entirely and was told that they probably would do so, but that it was still some years away. Also, of the 24 engineers surveyed, 2 said that they did prefer the printed version.

They have monitored the use of the system for a year and collected 68,000 data points, 43,000 of which are from "real users." It turns out that searching and context commands account for 40% of the use.

Tradition and Creative Writing: Hypertext in the Humanities

A true multi-media version of this trip report would have started this section with a digitized clip of Zero Mostel singing Tradition! from Fiddler on the Roof. Certainly, Gregory Crane from the Classics Department at Harvard University stressed this concept enough to warrant the song. Like many others at this conference, he had the problem that his department did not view working with computers and text as really relevant research. But where Andy van Dam had that problem 20 years ago, Crane still has it. The problem is that hypertext does not give much short-term benefit compared with traditional ways of doing things.

Otherwise, the classics are an obvious field for use of hypertext, since they basically comment and annotate the same small basic set of primary literature over and over again. Scholarly texts make heavy use of references to these source documents. For example, a single page of Walter Burkert's Greek Religion contains pointers to more than two dozen source texts which the interested reader would ideally need to have available to get full benefit from studying Burkert's interpretations. In the Perseus Project, Crane and others are entering approximately 100 Mbytes of text about the classical Greek world into a hypertext structure which will be distributed on a CD-ROM or similar optical media. A big problem here is the automatic generation of links: Nobody is going to manually produce links between every word and the corresponding Greek-English dictionary. Also, it should be possible to view the hypertext structure as a generalized, abstract document which can be moved automatically to the next generation of hypertext system.

Another talk on the use of hypertext in the humanities was by Jay Bolter from the University of North Carolina on the Storyspace system. It is implemented on the Macintosh and is intended as a vehicle for the creative writing of interactive fiction. Interactive fiction has existed for some time in the form of adventure games, even the simplest of which can be viewed as a hypertext structure, as the computer presents a different text as the result of reader/player actions. Other movements have also tried to break down the traditional structure of text, e.g. the Dadaists.

The reader's experience of interactive fiction depends on how it is accessed. For example, the length of the individual episodes presented by the computer determines the rhythm of the story: for how long is the reader a conventional reader of sequential text before reaching a branching point? Storyspace has two modes: A structure editor showing the links between episodes, and a reader mode with a more limited view of just the text without the structure.

Bolter gave references to several works by the contemporary Argentinian writer Jorge Luis Borges which may inspire hypertext workers: "Ficciones," "An Examination of the Work of Herbert Quain," and "April March." Borges' "The Garden of Forking Paths" has been converted into a Storyspace structure of 100 episodes and 300 links. When reading this kind of interactive hypertext fiction, the synthesis of many readings is what will give the readers the full appreciation of the work. It may not be the case that all the branches should be plot-related, another possibility could be multiple narrators describing the same events [as in the classic film Rashomon by Kurosawa].

An interesting question from the audience was what would happen to the "authority" of the author if readers can decide how to move through fiction. In answering this question, Bolter noted that our notion of "author" has been colored by 500 years of experience with the printing press defining the role of the author compared with earlier traditions of e.g. telling stories around the camp fire.

George Landow of the English Department at Brown University discussed what he called the rhetoric of hypertext, or in other words the need for certain conventions for the links between nodes. Hypertext preconditions users to expect significant relations between files, and a user becomes very disappointed(17) if a link is not rewarding; this disappointment may even lead to hostility towards the system.

Landow suggested links with labels, or links whose spatial proximity indicates their probable destination. Also, we could have menus of links related to some starting place. Another of Landow's suggestions for a style guide for hypertext structures is that most text nodes should be at most one screenful (bigger texts should be broken up). A comment from the audience was that HyperCard has a set of icons which seem to have relatively well-defined meanings.

HyperCard

HyperCard from Apple was one of the most discussed pieces of software at the workshop. The general consensus seemed to be that HC was not really hypertext because of its limited possibilities for associating links with words.(18) As noted above, Frank Halasz criticized HyperCard for not having a structural overview or a browser to help users navigate stacks.

Andy van Dam in his opening speech called HC "beautifully engineered in spite of its many flaws" and suggested that it would "enculture" the computer community. It is simple enough to be widely used and is already emerging as somewhat of a cult(19) phenomenon. On the other hand, Jef Raskin said that HyperCard is only cheap and popular if one considers the software itself. To run it, you need an expensive computer in the form of a Macintosh, so in reality it is "yuppie-text."

HyperCard was presented reasonably impressively in a demo by Mike Liebhold from Apple, and it was also used for many of the other information structures shown in the lobby outside the lecture halls. One of the most interesting of those was the LaserCards system from Optical Data Corp. It controls a videodisk player from HyperCard, showing e.g. satellite photos of weather formations and areas of geological interest. The by now almost "traditional" feature of getting to a photo of an area by clicking on its location on a map was of course supported. Other nice features were the "tours" of customized lessons through the information space -- almost the "trail blazing" of Vannevar Bush come to life.

One disappointment in relation to HyperCard was that its primary author and designer, Bill Atkinson, did not attend the workshop. Instead the demo was given by Liebhold, who is working on transferring the Whole Earth Catalog into a 17 Mbyte HyperCard stack.

Hypertext = Hype? (Jef Raskin)

Jef Raskin from the company Information Appliance, which designed the Canon Cat computer, felt that hypertext was "one part inspiration and nine parts hyperbole," so he had taken on the role of Devil's advocate at the workshop. He raised several problems with hypertext: If you trust your hypertext system too much, you will be faced with the "missing link" problem when it actually does not include some essential reference. Alternatively, links could conceivably be added as jokes or by vandals, so that the structure ended up looking like New York City subway cars. Prepackaged (non-universal) hypertext would avoid those problems but would then, just like books, have the problem that only some of them were any good.

Even if links are not added by vandals, they could still end up overwhelming the reader in an open system which has been accumulating links added by many people over the years. If readers are presented with hundreds of possible links(20) from each node, the system might as well not have had any links as it is very unlikely that any given reader will follow the specific link needed by that reader.

According to Raskin, too many people working on hypertext have concentrated on mechanisms instead of on the user interface. It is necessary to look at the entire spectrum of interaction and to do continuous user testing. He was surprised to see articles on hypertext that start out assuming the use of a mouse, since his tests during the design of the Canon Cat indicate that keyboard-based interaction is faster. This computer was demoed at the workshop: It is a small system mainly intended to perform one task (word processing): It is an "Information Appliance" [the name of Raskin's company].

Conference Ergonomics

This was more of a conference than a workshop. Most of the time was allocated to traditional paper presentations and only very little to discussion groups. In some sense this was OK, as the paper presentations turned out to be more successful than the discussion groups. The fairly small number of participants still gave the event a workshop feel, as did the large number of Mac II's and other computers set up in the lobby.

A fairly nice feature was the small number of parallel sessions: You did not feel that you were missing out on most of the papers, and the probability was substantially increased that the people you were talking with had attended roughly the same presentations as yourself. Even so, it was of course not possible to attend everything. One talk I missed was Ben Shneiderman's presentation of his Hyperties system. This was actually on purpose: Not because I expected Shneiderman to give a bad talk (quite the contrary) but for two other reasons. I had heard him give a talk on the same system just a few months before at the HCI Intl.'87 conference in Honolulu, and I had tried the system myself(21) in New York during a field installation at the International Center of Photography. I simply went to the museum to see the photography exhibits and discovered that Hyperties was part of one of them. This is the way most people will discover hypertext: Not as something interesting in itself but embedded in a larger context which is what really interests them.

 


See Also: Report from the following conference, Hypertext'89


Footnotes

(Due to the lack of pop-up notes in most WWW browsers, these footnotes have been placed at the bottom of this document even though they were designed to be read on the same page as the body text they refer to.)

Footnote 1. For example graphics, sound, moving images from videodisks, executable programs.

Footnote 2. Cf. the discussion by van Dam of GOTO vs. other control structures.

Footnote 3. See my trip report from CSCW'86, ACM SIGCHI Bulletin 19, 1 (July 1987), pp. 54-61.

Footnote 4. This 20 year time lag between laboratory invention and commercial realization seems to be quite typical of many breakthroughs in computer science, which is not moving as quickly as some may think. There was also about 20 years between Engelbart's invention of the mouse in 1963 and its use on a popular PC (the Lisa) in 1983. Cf. also the comment by Tom Landauer that a rule of thumb at Bell Labs was that it would take 15 years from Lab concept to Real World application (cited in my CHI'86 trip report, IFIP INTERACT Newsletter No. 17).

Footnote 5. The space program -- not the computer.

Footnote 6. Good idea. Consider, say, the file names on the Macintosh which may be 31 characters. At first this seemed a big improvement over the 8 or so characters in many other operating systems. But after some time users often hit that limit when they take advantage of the fact that they only have to type a name once (afterwards they can click on the name instead of typing it).

Footnote 7. Cf. the comment by Jens Rasmussen at the INTERACT'87 conference that cognitive engineering research fell between the borders drawn up by traditional research councils so that the best research support came from sources such as NATO, see my trip report in ACM SIGCHI Bulletin 19, 4 (April 1988), pp. 36-42.

Footnote 8. See the description of this course from the English Department's point of view: James V. Catano: "Poetry and computers: Experimenting with the communal text", Computers and the Humanities 13 (1979), pp. 269-275.

Footnote 9. Towards the end of the workshop, Norman Meyrowitz (Brown University) reported from the working group on standards for hypertext. They had felt that a full standard would be premature but that we should get a standard for data and structure interchange soon.

Footnote 10. Getting lost seems to be a worse user interface problem than navigation itself. In a small field study I did of a Guide document, users found the reference button (which is the one that really moves you to a different place in the structure) significantly less easy to understand and less of a usability/readability improvement than the note button and the replacement button [about a difference of one whole point for each question on a 1-5 Likert scale].

Footnote 11. As another example I can mention the problems I have had with an earlier report which I published only as a hypertext document. According to the law in Denmark, all published material is to be deposited in the National Library so that every citizen can borrow it for free through the public library system. I could of course have deposited the diskette, but this would have been against at least the spirit of the law, as most libraries or citizens would not have had a compatible computer system. So I ended up printing out as good a version of the hypertext network as I could and depositing that together with the diskette.

Footnote 12. NLS = oN Line System.

Footnote 13. Use of between 10 and 150 bibliography entries was observed.

Footnote 14. A comment from the audience was, however, that hypertext is actually more structured than some of the information people face -- e.g. all the articles in all the journals in the library. So one should not just compare hypertext with a highly organized and structured textbook.

Footnote 15. The printed version has 8,000 pages.

Footnote 16. Annotations are one of the features not yet included in the Document Examiner. One reason for this is the difficulty with helping users maintain their notes across different releases of the documentation.

Footnote 17. This is in correspondence with my own results from a field study of hypertext: Readers commented that they were frustrated by clicking on things they wanted to know more about and then not always actually getting information. This really ties in with the comment by Gregory Crane about the need to automatically generate a lot of links: Hypertext structures need to be very rich to live up to reader expectations. In my case, the problem was that my hypertext report did not have that many links because I had written it myself. In Landow's case, there may also be a problem if there is a link but to the "wrong" info, so we also need types or other prospective views on the links.

Footnote 18. HyperCard will only link whole cards -- i.e. big chunks of text.

Footnote 19. About half a year after the introduction of HyperCard, the Berkeley Macintosh User Group already had about 225 HC stacks [i.e. hypermedia structures] in its disk library.

Footnote 20. I took part in the working group on the disorientation problem, which concluded that it would be important to have filters for links to avoid presenting them all to users. Also, following a link is not necessarily either/or: Perhaps one can follow a link partly, using some principle of progressive disclosure of what will happen if one goes further down the link. This is e.g. done in Hyperties, where each link first shows a single-line definition of the node reachable further down the link.

Footnote 21. See the discussion in my article in the ACM SIGCHI Bulletin 19, 1 (July 1987), pp. 54-61.

