Fifth Generation 1988 trip report

by Jakob Nielsen on December 31, 1988

Tokyo, Japan, 28 November - 2 December 1988  

The Fifth Generation project is a large Japanese research project aiming to produce a new kind of computer by 1991. It was originally started after much debate about the need for significantly more usable computers, which should proliferate "like air" throughout society and, among other things, help take care of the ageing population and support life-long learning. The MITI people who sponsored the project must have had a good marketing consultant pick the name, for the name alone has inspired a lot of interest around the world.

This conference was held to report on the results from the intermediate phase of the project, just before the final phase of integrating all the innovations into a complete computer system. Unfortunately, the results were somewhat disappointing from the perspective of whether something new could be done with the fifth generation computer. Most of the applications presented at the conference were interesting because they were "X done in logic programming," not because they were "X done better than before." The hope, of course, is that the final computer will be fast enough to run programs which are infeasible on conventional computers. We will have to wait and see.

The project director, Kazuhiro Fuchi, gave the keynote speech and compared the three project stages to "hop, step, and jump," saying that they had now taken the step and were getting ready to jump in the final part of the project, when they will produce a massively parallel machine. Fuchi was also very enthusiastic about natural language processing, which he said would be the link between human and machine in their project.

Many of the Japanese scientists and engineers I talked to from outside the Fifth Generation project were actually quite skeptical about the direction of the project and not very hopeful about spectacular results. Maybe some of the project leaders themselves have had this feeling, since they started a new laboratory for applications-oriented research within the project. The purpose of this laboratory is to verify whether the parallel computers and systems software built by the other laboratories can be used for the next generation of expert systems.

The Sputnik Effect

Actually, to some extent, the biggest result of the Fifth Generation project came about before they even started on their own research, since the very fact that the Japanese were doing a big computer project scared a lot of European and American decision makers half to death. Fortunately, they were not scared completely to death but instead decided to "counter-attack" by funding a lot of new research in various computer fields. Some of the projects started as a result are the European Esprit, the British Alvey, and the American MCC.

Representatives from these three initiatives were invited to the conference to report on their own progress, which has been fairly substantial. Timothy Walker from the U.K. Information Engineering Directorate told of the various British information technology projects, which had succeeded in getting a number of key scientists to return to the U.K. from overseas. Several changes had been made in the projects over the years, including some name changes, such as changing "AI" to "knowledge based systems" because of lowered expectations. On the other hand, the area of human-computer interaction was receiving more emphasis now: At the start of the Alvey project 5 years ago, HCI might have been seen as important, but not much was done about it, whereas now they realized that they had to make a serious effort to ensure usability. Walker said that HCI could either be done as an independent field of study or integrated with other topics, and that they had based their projects mostly on the latter view.

Science as the Driving Force in Society

Jörg Siekmann from the University of Kaiserslautern in West Germany discussed the potential future impact of AI and computer technology. His main thesis was that we are moving towards a society where research becomes a productive force in itself, or a Wissenschaftsgesellschaft ("science society") as he called it (there is something impressive about these long German words). This is analogous to the industrial revolution, when agriculture could suddenly be handled by a small part of the population and industrial production became the major force. Now industrial production can also be handled by a small number of people, while the number of scientists, on the other hand, is growing rapidly. Siekmann reported that the number of scientific journals doubles every 15 years, the number of books in university libraries doubles every 10 years, and the number of scientific publications doubles every 5 years. The knowledge explosion has already reached a level where, in chemistry, it is often cheaper to rerun a possibly duplicated experiment than to search the literature to find out whether the result of a previous experiment can be used.
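To see how steep these curves are, consider a quick back-of-the-envelope calculation; the doubling periods are from Siekmann's talk, but the 30-year horizon and the code are my own illustration.

    # Growth implied by Siekmann's doubling rates over a 30-year horizon
    # (the rates are from the talk; the horizon and code are my own).
    doubling_period_years = {
        "scientific journals": 15,
        "university library books": 10,
        "scientific publications": 5,
    }

    horizon = 30  # years
    for what, period in doubling_period_years.items():
        growth = 2 ** (horizon / period)
        print(f"{what}: grows {growth:.0f}-fold in {horizon} years")

The striking figure is the last one: at a five-year doubling period, the flood of publications multiplies 64-fold within a single working lifetime, which is exactly why Siekmann's chemists find rerunning experiments cheaper than searching the literature.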

New methods would be needed to handle all this information, and those who can do so will be able to use science as the force which provides substantially improved new products, or even products which we cannot conceive of today. As an example, traditionally you could think of a bank as a building; now it is a computer network, and the nature of its world-wide services can be changed by a handful of programmers at the head office. This view is to a large extent true when it comes to user interface research. Ideas such as hypertext represent fundamentally computer-based technologies which have at least the potential to give knowledge workers significantly improved productivity, creativity, and/or scope of analysis.

Prospects for Cognitive Science (Herb Simon)

One of the invited lectures was given by Herb Simon from Carnegie Mellon University, who discussed the history and future of cognitive science. Perhaps with the Fifth Generation project's heavy focus on hardware development in mind, Simon asked whether hardware speeds had really been the bottleneck to progress in the cognitive science field. In some areas like chess playing they clearly had, but in general Simon felt that hardware was not a real issue when it came to the ideas pushing cognitive science forward, since it is not normally necessary to have a program perform in real time. Actually, even for chess playing, Simon stressed that the best human players are quite slow and only look at maybe 100 states before making a move. A computer may have to look at 3 million positions to make the same move, so the right way of looking at the problem would probably remove the need for fast hardware.
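The gap between those two numbers is a branching-factor effect, and a toy calculation makes Simon's point vivid. The 100-versus-3-million figures are his; the branching factors and depths below are my own illustrative guesses.

    # Toy calculation of why representation beats raw speed in chess search:
    # positions examined grow as branching_factor ** depth.
    # (The 100 vs. 3 million figures are Simon's; these parameters are
    # my own illustrative guesses.)
    def positions_examined(branching_factor, depth):
        return branching_factor ** depth

    # Brute force: consider all ~35 legal moves, 4 plies deep.
    print(positions_examined(35, 4))  # 1500625, on the order of millions

    # A master's selective search: ~2 plausible moves, 7 plies deep.
    print(positions_examined(2, 7))   # 128, on the order of Simon's 100 states

The two searches reach comparable depth, but the selective one examines four orders of magnitude fewer positions: a better representation of the problem, not faster hardware, closes the gap.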

In many specific domains, programs already exist which surpass human intelligence. In hindsight, Simon felt that we should have been able to predict that expert systems would be easier to build than common sense or a sensory apparatus, since the sensory/motor system is evolutionarily much older than human expertise. To get knowledge into a system, there are basically two methods: learning (which is used to get knowledge into humans) and programming (which is used to get knowledge into computers). Currently we cannot use programming on humans, but maybe it would solve many of the problems in AI if we could shift to using learning for computers.

Simon discussed several other research frontiers within cognitive science, including robotics and natural language, and one of the more interesting issues he raised was the need for methods for non-verbal representation of knowledge, such as diagrams and pictures. Currently we use a few such methods, such as showing which files belong in the same folder by showing their icons within the same window. But most of these visual languages are quite primitive in their expressive power compared to verbal languages, in spite of the prevalence of picture-like representations in human reasoning.

Computer-Supported Jazz

One of the most interesting presentations at the conference from the user interface perspective was given by Keiji Hirata from NTT, who both talked about and played computer-supported jazz. The goal of the work was to produce a workstation for musicians called ICOTone and to have it generate jazz music. Hirata had encoded the music theory for tension and other concepts in bebop-style jazz on the PSI (Personal Sequential Inference) machine, thereby allowing users to construct a jazz performance by interactively specifying different aspects of the piece. Users could specify different amounts of jazz information depending on their level of skill, such that expert users could get a high degree of control over the result while novices could still get the computer to play jazz. Hirata finished his presentation by playing a tape of a performance by his system, which was fairly good and did sound jazzy. I am not going to throw away my Dizzy Gillespie CDs just yet to replace them with this system, though, since the jazz workstation is more in the nature of an interesting piece of research showing hope for the future.
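The adjustable level of control is the interesting interface idea here. As a rough sketch of the principle (my own toy illustration, not Hirata's actual PSI code), one can think of the system as a set of musical parameters where everything the user leaves unspecified falls back to defaults derived from the encoded bebop theory:

    # Toy illustration of skill-dependent specification (not Hirata's code):
    # whatever the user does not specify falls back to theory-derived defaults.
    BEBOP_DEFAULTS = {
        "tempo": 180,
        "chord_progression": "ii-V-I",
        "tension_curve": "rise-fall",
        "swing_ratio": 0.67,
    }

    def make_performance_spec(**user_choices):
        # Expert users override many parameters; novices override none.
        return {**BEBOP_DEFAULTS, **user_choices}

    print(make_performance_spec())              # novice: pure defaults
    print(make_performance_spec(tempo=220,      # expert: fine control
                                tension_curve="plateau"))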

Company Visits

Besides going to the conference, I also lectured at

  • Japan Institute of Systems Science, Osaka
  • Mitsubishi Electric Central Research Lab
  • Toshiba R&D Center
  • University of Tokyo

and had meetings with people from Dynaware Corp., Kyoto Institute of Technology, Nippon Telegraph and Telephone (NTT), and Tokyo Denki University.

Dynaware was introduced to me as a small start-up software company. I visited the company fairly late in the evening after having been out for dinner and to the home of my host for wonderful green tea, but even at 10 PM the Dynaware offices were crowded. They showed me several integrated text/graphics systems and an advanced graphics editor which all had very nicely polished user interfaces.

Unfortunately, with the exception of the Dynaware designs, most of the user interfaces I saw in Japan were not very polished or visually attractive. This is especially strange considering that the country is famous for its high sense of aesthetics, even in small things. And the bland-looking designs cannot be blamed on poor graphics capabilities either, since everybody seemed to have big, high-resolution displays. My guess is that the emphasis in Japanese user interface design has been on accommodating their special character sets and language (including heavy emphasis on natural language and knowledge based systems). If a system is good at dealing with Japanese and will do something useful, it will sell no matter how it looks.

So probably the most important aspect of Japanese user interfaces is one which I am not qualified to judge. I was shown lots of menus and other interface elements in Kanji but can only report on one small point: In one system where the other commands were listed in Kanji, the undo command was still listed in English. I asked about this and was told that they had not been able to come up with a good translation of the concept, so they were perfectly happy with having that one word in European characters.

My own experience confirms that it can be difficult to translate "undo." The original 1984 translation of undo in the Danish Macintosh system was the completely miserable "glem" (=forget) which many novice users were scared to use during studies in my laboratory. In the current release, Apple Denmark has changed this to the much better "fortryd" (=regret).

The unimpressive graphic appearance aside, several aspects of the Japanese work in user interfaces impressed me. Advanced hardware is one area where the big Japanese electronics companies are doing extremely well: I was shown notable advances in optical media and also saw a nice hypermedia system using videodisks and a knowledge based interface.

Another interesting system was a multi-media translation system which would scan in a page from an English magazine and, after an OCR phase, translate the text to Japanese while keeping the same page layout for the text in relation to the illustrations.

One of the bigger user interface projects in Japan had been the design, by Hiroshi Tamura and colleagues, of a hypertext system for a Japanese translation of Smith and Mosier's fat book of user interface guidelines. The translation in itself had been a major project (one person-year) but had been parceled out to user interface professionals at several companies. The hypertext presentation system had been built somewhat faster, using dBase III as an engine. Considering that at least two other hypertext systems have been built over the same information base, it would be interesting to perform a comparative experiment to look at the effectiveness of the different hypertext approaches. If such an experiment is to include the Japanese system, it must either be done entirely in Japan or as an intercontinental collaboration, which would probably be a first in our field.

Electronic Stationery Goods

One of the more spectacular user interface ideas I was shown in Japan was the "electronic stationery goods" in the Tron project at the University of Tokyo, managed by Ken Sakamura. The idea here was to externalize the desktop metaphor from the computer screen to the I/O units, using, for example, a pen instead of a mouse. This has of course been done in many other systems, but the Tron people went several steps further and also included other kinds of desktop items as computer peripherals. I was especially impressed by their electronic eraser, which looked like an ordinary eraser (used to erase pencil marks) but which could be used on the computer tablet to delete marks on the computer screen. It was an interesting experience to use a paint program by physically changing between a pen and an eraser instead of using a mouse to select a soft mode from a list of icons in a screen panel. Of course, the weakness of this new approach may be that users would be confused by having the wide variety of tools used in some of the more modern paint programs as physical objects: Their office would end up looking like the studio of a graphic artist, with lots of pens, knives, etc. all over the place.
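The essence of the idea is that the physical object in the user's hand replaces the software's mode variable. Here is a minimal sketch of that principle (my own illustration; the actual Tron implementation is surely different):

    # Minimal sketch of "the tool is the mode" (my illustration, not Tron code).
    # The tablet reports which physical object touches it; the software
    # keeps no mode variable at all.
    TOOL_ACTIONS = {
        "pen": "draw",
        "eraser": "erase",
    }

    def handle_tablet_event(tool_id, x, y):
        action = TOOL_ACTIONS.get(tool_id, "ignore")
        print(f"{action} at ({x}, {y})")

    handle_tablet_event("pen", 10, 20)     # draw at (10, 20)
    handle_tablet_event("eraser", 10, 20)  # erase at (10, 20)

Switching tools then happens entirely in the physical world, which is precisely what made the eraser demo feel so different from picking a soft mode with the mouse.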

Singapore

On my way to Japan I stopped over in Singapore where I visited the Institute of Systems Science (ISS) at the National University of Singapore (NUS).

Probably because of sponsorship from IBM, they had a lot of IBM RT workstations; not exactly computers you see in many other laboratories. But the machine itself is good enough, with a nice display, and they had designed some good user interfaces on it, using scanned color photographs as well as all the standard trappings of pop-ups, icons, etc. Of course there were a few things I would have done differently, and I also saw one case of mixed navigational metaphors, but then I find that problem in almost half of the laboratories I visit: for example, many designs used left-right in icons but up-down in visual effects. In general, the user interfaces I saw in Singapore looked good and also demonstrated interesting theoretical concepts such as generation of hypertext structures from a frame-based knowledge representation, as sketched below.
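To give a flavor of what such generation could look like, here is a minimal sketch under my own assumptions (the ISS system itself was certainly more sophisticated): every frame becomes a hypertext node, and every slot whose value names another frame becomes a typed link.

    # Minimal sketch of generating hypertext from frames (my assumptions,
    # not ISS's actual system). Frames become nodes; slots whose values
    # name other frames become typed links.
    frames = {
        "menu": {"is-a": "widget", "selected-by": "mouse"},
        "widget": {"part-of": "user interface"},
        "mouse": {"is-a": "input device"},
    }

    def frames_to_hypertext(frames):
        nodes = set(frames)
        links = []
        for frame, slots in frames.items():
            for slot, value in slots.items():
                if value in frames:  # slot points at another frame: a link
                    links.append((frame, slot, value))
        return nodes, links

    nodes, links = frames_to_hypertext(frames)
    print(links)  # [('menu', 'is-a', 'widget'), ('menu', 'selected-by', 'mouse')]

The point of the sketch is only that the link structure falls out of the knowledge representation for free; no separate authoring of the hypertext is needed.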

The ISS people graciously invited me to their annual institute party, which was a fun event with multilingual jokes and puns in English, Arabic, Tamil, and Mandarin as well as other Chinese dialects.

There is no doubt that Singapore is one of the countries of the future and that the ISS can lead the way locally as well as contribute to the international user interface community.

First Exposure to the NeXT User Interface

Wherever I went in Asia, everybody asked me whether I had seen the NeXT machine. Since I had been embarrassed to have to say "no," I was glad to finally get a chance to see the machine, after several years of rumors on the nets, when I stopped over in the US on my way back to Europe. I stopped in New Mexico to participate in the ACM Conference on Document Processing Systems, where the demonstration session included a NeXT computer at the Adobe exhibit on Display PostScript.

On a purely emotional level, seeing the NeXT cube struck me somewhat like seeing the Lisa back in 1983 (in fact, I later noticed that the BYTE editorial on NeXT [December 1988] had the heading "Lisa Lives"): the feeling that this computer looked different from the ones I had been used to. It is amazing how much just the (careful) use of four levels of grayscale can do for the visual appearance of computer screens. Dialog boxes have an almost 3-dimensional effect where input fields seem to be chiseled out.

Things are still thrown away by dragging them to the bottom right corner of the screen, where instead of a trash can there is a strange icon which I could not understand. Upon asking, I was told that it was a black hole. Cute, cute indeed, and it does avoid lawsuits. But the black hole is not the kind of user interface metaphor which your average neighborhood grocer is likely to relate to. Of course you might say that the grocer is not one of the intended customers for a $6,500 computer, but how about five years from now?

The poor guy from Adobe who was going to use the NeXT for his demo of Display PostScript was swamped with requests for demonstrations of the NeXT interface itself rather than his rendering software. After all, for most people, what is written on the computer screen is much more interesting than how the pixels get turned black.

