DAILY MAIL & GUARDIAN 15-December-1999

New World

Computer 2001


JAMAIS CASCIO advises suits and other Hollywood weasels on what could happen over the next hundred years. He worries about being called a futurist.



It's all very well and good to talk about what computers may look like in the distant future of ten years from now. Ubiquitous computing, smart materials, nanochips -- these are the terms to toss around at parties if you want to sound a bit too smart for your own good. But in the near future -- from the next few months to the next few years -- computing will likely have a much more traditional look.

But that can still be interesting.

So, what will the computer of the next few years look like? More importantly, how will the interface evolve? This is not an inconsequential question -- as more "non-technical" people get online (over 100 million people around the world as of this month, by the way), computers will increasingly have to learn to work with people, instead of the other way around.

First of all, solemn pronouncements to the contrary, the PC -- as a stand-alone machine that sits on your desk -- is not dead. Even if the concept of "convergence" (the idea that media machines such as televisions and information machines such as PCs are becoming one and the same) comes true, it will take some time for the cheap, basic PC to fall by the wayside. There remain many tasks that are simply inappropriate for a tarted-up TV.

One of the big mistakes that many people make when talking about user interfaces is assuming that a single design will have to work for both beginners and veterans of the digital world. This confusion is why the latest versions of popular computer interfaces -- Windows 98 and MacOS 9 -- feel in many respects more awkward to use than their predecessors. Wizards and Managers designed to "make things easier" end up getting in the way once a user knows her way around a computer, yet users are expected from the outset to understand a variety of terms and behaviors that are neither intuitive nor obvious. It's a mess.

Add to that the fact that the physical media of interaction are not always human-friendly. Whenever I see kids playing on computers, I cringe. Not because of what they may find on the Internet, but because of what they're doing to their bodies. I guarantee you that we will see an epidemic of repetitive stress injuries among teenagers in the coming years. There are also drawbacks to using standard monitors; LCD-based screens are far easier on the eyes -- yet they remain far more expensive.

Furthermore, increasing numbers of people with disabilities are getting online. I don't just mean people born with physical problems. As the populations of many Western countries age, an increasing proportion of users will be people with the various handicaps associated with aging -- failing eyesight, deafness, and motor control problems. The disabled will demand the ability to use information systems just as easily as anyone else.

In the famous revolutionary phrase, what is to be done?

The pieces are slowly coming together for what I like to call the "Adaptive Human-Computer Interface". This will be a system that learns as it goes, watches your behavior, and is able to change itself on the fly, in real time, to adapt to changes in how you use your system. Such a system could present a simplified display interface to beginners, but a more complex arrangement for people with greater computer experience. This system should be as easy to use with verbal commands as it is with mouse and keyboard. It must be easily adapted to voice output -- in any language -- and not absolutely require a visual display.
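To make that concrete, here's a toy sketch in Python. Everything in it -- the UserProfile class, the signals it tracks, the thresholds -- is invented for illustration; a real adaptive interface would learn these patterns rather than hard-code them.

```python
# A minimal sketch of the adaptive idea: the interface watches simple
# usage signals and promotes the user from a "beginner" layout to a
# "veteran" one. All names here are hypothetical, not any real
# toolkit's API.

class UserProfile:
    def __init__(self):
        self.sessions = 0
        self.shortcut_uses = 0   # keyboard shortcuts suggest experience
        self.help_requests = 0   # frequent help lookups suggest a beginner

    def record_session(self, shortcut_uses, help_requests):
        self.sessions += 1
        self.shortcut_uses += shortcut_uses
        self.help_requests += help_requests

    def skill_level(self):
        # Crude heuristic: veterans lean on shortcuts, beginners on help.
        if self.sessions < 5 or self.help_requests > self.shortcut_uses:
            return "beginner"
        return "veteran"

def choose_layout(profile):
    if profile.skill_level() == "beginner":
        return {"menus": "simplified", "wizards": True, "voice_prompts": True}
    return {"menus": "full", "wizards": False, "voice_prompts": False}

profile = UserProfile()
profile.record_session(shortcut_uses=0, help_requests=7)
print(choose_layout(profile))   # -> the beginner layout
```

The point is not the particular heuristic, which is deliberately crude, but that the decision is made continuously, from observed behavior, rather than fixed at installation time.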

Ideally, such a system will have a camera built in. This would facilitate two important advances: user recognition and gesture recognition. User recognition means that the system will see that it's you sitting there, and display the appropriate sort of interface, beginner or veteran; if a stranger sits at the machine, a limited system (or complete lock-out) would be the result. Gesture recognition means that the computer could learn that particular motions with your hands (or, if your hands are unavailable, with your head or even eyes) have particular meaning.
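Again purely as an illustration -- the recognizers themselves are stubbed out, and every name and gesture is made up -- here is how recognition results might drive the interface:

```python
# Sketch of recognition results driving the interface. The face and
# gesture recognizers would be separate, processor-hungry components;
# here their outputs are assumed as plain strings.

KNOWN_USERS = {"jamais": "veteran", "guest_child": "beginner"}

GESTURE_COMMANDS = {
    "point_at_mailbox": "open_new_mail",
    "swipe_left": "previous_message",
    "nod": "confirm",
}

def start_session(recognized_name):
    level = KNOWN_USERS.get(recognized_name)
    if level is None:
        # A stranger at the machine gets locked out entirely.
        return {"access": "locked", "interface": None}
    return {"access": "full", "interface": level}

def handle_gesture(gesture):
    # Unrecognized motions are simply ignored.
    return GESTURE_COMMANDS.get(gesture, "ignore")

print(start_session("jamais"))          # veteran interface
print(start_session("somebody_else"))   # locked out
print(handle_gesture("point_at_mailbox"))
```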

In some cases, you'll need to physically upgrade pieces of a system in order to accommodate or create new capabilities. For example, Apple was dead-on correct to include a single-button mouse for people who are new to computers. I've worked with computing beginners, and very often they don't even know how to hold a mouse, let alone which button to push. A single-button mouse makes it easier. But a single-button mouse is also extremely limiting for people who have a moderate amount of computer experience. It inflicts a beginner interface on experienced users. In my case, I added a Microsoft IntelliMouse Pro to my Mac. It gives me five buttons plus a scrolling wheel to work with. It's terrific for me, but I know it would completely confuse anyone just starting out.
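The adaptive answer is to let the same hardware wear two faces. A hypothetical sketch, with made-up button numbers and action names:

```python
# One pointing device, two mappings, chosen by experience level.
# A beginner's extra buttons all behave like a plain click; a veteran
# gets the full set. The mappings themselves are invented examples.

BEGINNER_MAP = {1: "click", 2: "click", 3: "click",
                4: "click", 5: "click"}
VETERAN_MAP = {1: "click", 2: "context_menu", 3: "back",
               4: "forward", 5: "double_click"}

def button_action(level, button):
    mapping = BEGINNER_MAP if level == "beginner" else VETERAN_MAP
    return mapping.get(button, "ignore")

print(button_action("beginner", 2))   # click
print(button_action("veteran", 2))    # context_menu
```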

So, let's put this all together.

The computer of the next few years will look very much like today's machine: a display (hopefully a flat screen), a keyboard, and a pointing device (probably a mouse of some sort). The differences will be subtle, but important.

Voice input and control have gotten good enough that they're now generally usable. My guess is that we'll see 98% accurate voice input, already available now as a separate application, built into the operating system within three years. While this will make some offices noisier -- and spawn a rash of new sound viruses -- it will make the computer easier for beginners to use, and a far more flexible tool.

Voice output is less likely in the very near future, but I suspect it is inevitable. In the US, the Americans with Disabilities Act mandates the use of accommodations for the disabled. As the US population ages, there will be more pressure to make computers easy for the blind to use. Once voice-output tech becomes useful and readily accessible, non-disabled users will discover that it is very handy for them, too.

Visual input is a wildcard. Cameras have gotten fairly inexpensive (one national Internet Service Provider in the US is giving away a free digital camera to anyone who signs up), but there is no standard for their use. Furthermore, the sorts of face and gesture recognition I talk about above are fairly processor-intensive. You won't be able to run them on your old 486.

But you can begin to imagine how the different pieces could work together. If I'm sitting at my system and new mail arrives, I can gesture at the mailbox icon, and the system will show me the new message. If I get up from my chair, the camera notes that I'm not there, and a voice alert tells me that another message has arrived. I can then (from the other room) say "read it to me"; if I need to, I can say "start again, louder". The system listens, watches, and adapts.
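None of that routing logic is exotic. Here's a toy Python sketch of it -- the camera check and the speech call are stand-ins for real drivers, and every name is invented:

```python
# Sketch of the closing scenario: the same "new mail" event is shown
# on screen when the camera sees the user, and spoken aloud when it
# does not. speak() and show_on_screen() stand in for real output.

user_present = True   # would come from the camera's presence detector
volume = 1.0

def speak(text, vol):
    print(f"[voice, volume {vol:.0%}] {text}")

def show_on_screen(text):
    print(f"[screen] {text}")

def notify(message):
    if user_present:
        show_on_screen(message)   # user at the desk: visual alert
    else:
        speak(message, volume)    # user away: voice alert

notify("New mail from the editor.")
user_present = False              # user walks out of camera view
notify("Another message has arrived.")
speak("Reading it to you now...", volume)
volume = 1.5                      # "start again, louder"
speak("Reading it again, louder.", volume)
```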

This is not science fiction. This is possible today. Someone just needs to write the code. Anyone out there need a thesis project?

© Daily Mail & Guardian - 15-December-1999


* Jamais Cascio is a consultant and writer specializing in scenarios of how we may live over the next century. His clients have included mainstream corporations as well as film and television producers. He has written for many publications, including Wired and TIME, and is currently working on a screenplay. He is an active member of the oldest and most influential online community, The Well, and believes that new technologies are pushing people into new social, economic and political realms.


