Microsoft's most innovative product of the 1990s was Interactive Barney: a plush toy containing a computer that let it interact with kids. When you squeeze Barney's toe, for example, he sings a song; when you cover his eyes, he plays peek-a-boo.
Soon, many more physical objects may become interactive, and they're likely to feature much richer and subtler user interfaces than the primitive toe-squeezing that Interactive Barney pioneered.
NanoMuscle is a company that makes very small motors that are an order of magnitude stronger and smaller than traditional electrical motors, yet they use a fraction of the electrical power and they're much cheaper. One possible early application of such motors is toy characters with realistic movements and facial expressions. Think, for example, of Brian Aldiss' Teddy supertoy as visualized by Steven Spielberg in the film A.I.
Over a recent dinner, NanoMuscle's CEO Rod MacGregor mentioned other possible future applications, such as mobile devices that would morph into different shapes. Why should a physical product be restricted to a single shape when it has different uses? If you're making a phone call, for example, an elongated form factor is desirable, but when you're using the same device to view data, a square shape might work better.
I'm particularly excited about the possibility of using tiny motors to add force feedback -- which responds to the intensity of the user's touch -- to input devices such as joysticks and mice, as well as to newer, smaller controls that reach beyond the computer. These advances would let you feel the objects you are clicking and any borders you drag them across. Of course, dragging should feel very different when snap-to-grid is on.
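As a rough illustration of that last point, a haptic pointing device might render snap-to-grid as a spring-like pull toward the nearest grid line, and render nothing at all when the setting is off. This is a minimal sketch; all names and constants are invented:

```python
# Hypothetical sketch: how a haptic device might make dragging feel
# "notched" when snap-to-grid is on. Names and constants are invented.

GRID_SPACING = 50      # pixels between grid lines
SNAP_STRENGTH = 0.2    # force units per pixel of offset

def snap_force(x, snap_to_grid=True):
    """Return a 1-D force pulling the pointer toward the nearest grid line.

    With snap-to-grid off, no grid force is rendered at all, so dragging
    the same object feels smooth rather than notched.
    """
    if not snap_to_grid:
        return 0.0
    nearest = round(x / GRID_SPACING) * GRID_SPACING
    return SNAP_STRENGTH * (nearest - x)   # spring-like pull
```

A pointer at x = 60 would feel a pull back toward the grid line at 50, while the identical drag with snap-to-grid off would feel completely smooth.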
Liberating Interface Design
For almost thirty years, user interface design has been defined as the design of graphical user interfaces, with an emphasis on the visual appearance of the user's choices. All users could do with these choices was bang on them with the mouse button. Gestural interfaces have largely vanished, except for obscure virtual reality research and a sprinkling of gestures in long-gone pen-based systems like the Apple Newton and the Go tablet.
If physical objects begin to understand a wider range of gestures, like movement and squeezing, along with the force and speed with which users apply these gestures, we could then free user interfaces from the screen. Furthermore, of course, the computer could also express its side of the dialogue physically. Presenting facial expressions on a moving doll is a much more promising user interface component than simply pasting the expressions onto a GUI, as in projects like Boo and Ananova.
Gestural interfaces also offer new possibilities for video conferencing, which we used to think would succeed only once it became possible to build high-resolution, people-sized computer screens, so that remote participants could appear as big and clear as people in physically proximate reality (PPR). Appear on a small screen, and you play a small role in the meeting. For the remote participant, current video conferencing is like puppet theater, with you as the puppet.
With gestural interfaces, we could give remote participants a true presence in PPR -- not necessarily as talking teddy bears, but rather as animatronic players. People like myself who often lecture overseas would likely invest a good deal in a full-sized avatar that could serve as their physical presence while they delivered talks remotely.
Although that's all still science fiction for now, a renaissance for gestural interfaces could well happen soon as part of the trend toward liberating interaction design from the screen.
New Usability Challenges
Physical interactions will require good usability just as much as visual interactions; we're simply replacing one form of syntax with another that has more degrees of freedom. And more choice for interaction design means more ways to make things difficult for users.
Let's consider Microsoft's Barney. Squeezing his toe is definitely not a great design: the gesture has no relation to singing songs. Covering his eyes to play peek-a-boo is far more compelling, and it's the easiest of all Barney's features to learn and remember. Why? Probably because it's an example of a non-command user interface: you control the computer by doing what you want to do (play peek-a-boo) instead of asking the computer to do it. While an imperative interaction style works well for many traditional tasks, non-command designs often feel better, if they're done right.
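The contrast between the two Barney features can be boiled down to a few lines. This is a hypothetical sketch; the sensor names and responses are invented for illustration:

```python
# Hypothetical sketch of the difference between a command mapping and a
# non-command mapping in Barney's design. Sensor names are invented.

def react(sensors):
    """Map raw sensor state to the toy's response."""
    if sensors.get("eyes_covered"):
        # Non-command: covering the eyes IS the opening move of peek-a-boo,
        # so the toy simply joins an activity the child has already started.
        return "peek-a-boo"
    if sensors.get("toe_squeezed"):
        # Command: an arbitrary trigger with no relation to the result,
        # which is why it's harder to learn and remember.
        return "sing song"
    return "idle"
```

The code for both mappings is equally trivial; the usability difference lies entirely in whether the triggering gesture is meaningful to the user.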
"Done right" is the key phrase, of course, and getting non-command user interfaces right requires a deeper understanding of users' tasks and behaviors than we see in current designs, which often fight against users' needs.
Physical interfaces require even more simplicity and usability than graphical user interfaces. A problematic GUI is unpleasant, to be sure, but a smart environment that is nasty and hard to control could be disastrous.
See More Examples
Saul Greenberg at the University of Calgary has a collection of physical user interfaces designed by his students. Most come with video demos.
It's interesting to explore the richness of possible interaction designs. The collection also highlights the need for usability in any actual products (as opposed to student projects). An email notifier that shoots Nerf disks at you when new email arrives? Not in my office.
Prof. Greenberg's research agenda aims at making physical devices extremely easy to program, so people can concentrate on design rather than the technical intricacies. His project is called Phidgets (physical widgets). Time will tell whether Phidgets can become the HyperCard of physical interfaces.
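To make the goal concrete: this is a hypothetical sketch (not the actual Phidgets API) of what programming a physical widget might feel like once the hardware plumbing is hidden behind a simple event-driven interface:

```python
# Hypothetical sketch (NOT the actual Phidgets API) of event-driven
# programming for a physical widget. All names are invented.

class SqueezeSensor:
    """Stand-in for a physical pressure sensor that fires callbacks on change."""
    def __init__(self):
        self._handlers = []

    def on_change(self, handler):
        """Register a callback to run whenever the reading changes."""
        self._handlers.append(handler)

    def simulate_reading(self, value):
        # Real hardware would invoke this from a device driver.
        for handler in self._handlers:
            handler(value)

events = []
sensor = SqueezeSensor()
sensor.on_change(lambda v: events.append("hard squeeze" if v > 0.8 else "gentle touch"))
sensor.simulate_reading(0.9)
sensor.simulate_reading(0.1)
```

With the device reduced to a callback registration, a designer can concentrate on what the squeeze should mean rather than on polling hardware, which is the spirit of the Phidgets agenda.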