A couple days ago, I published A Brief Rant on the Future of Interaction Design. Here are a few common responses I've received, and some thoughts on them.
Now, I will never speak of any of this again, and deny it ever happened.
Yes, that's why I called it a rant, not an essay — it describes a problem, not an idea. (And FWIW, that's not the sort of thing I typically publish, or want to.)
The solution isn't known, and I didn't think making stuff up would help.
The point of the rant was that the solution will be discovered through long research, and that research won't happen without an awareness of the problem. My intent was to coax a few potential researchers and funders toward tangible interfaces, dynamic materials, haptics, and related fields that haven't been named yet. If that happens, then mission accomplished.
No! iPad good! For now!
In 1900, Eastman's new Brownie camera was good! The film was black-and-white, but everyone loved it anyway. It was genuinely revolutionary! But it was also obvious that something was missing, and research continued. Practical color film arrived thirty-five years later with Kodachrome, and gradually took over.
On the other hand, despite the invention of holography 60 years ago, it has apparently been much less obvious that two-dimensional, single-perspective photographs are missing something important. Maybe nobody ranted enough. I would guess that 3D photography hasn't gotten much research love, given that even in 2011, we think that just re-focusing a photo is a pretty neat idea.
Today, iPad good. It's flat and glassy and everyone loves it anyway. It's genuinely revolutionary. But if all we have in twenty years is an iPad with a few haptic gimmicks, that'll be bad.
I asked us to aim for a "dynamic medium that we can see, feel, and manipulate". Physical knobs, sliders, and buttons aren't dynamic, so no.
A computer screen is a dynamic visual medium — it can visually represent almost anything. A dynamic tactile medium should be able to tangibly represent almost anything. That might be through a descendant of deformable materials (like these), a descendant of haptic holography (like this), crazy nanobot assemblies, or probably some technology not yet conceived.
I don't really care which. Research whichever one catches your fancy. (Personally, I'm only interested in designing interfaces and applications on top of the technology, not the implementation of the technology itself. I don't care at all how the iPad's LCD panel works, but I'm sure glad someone else did.)
Sure. Let's use voice for the things people use voice for — asking questions and issuing commands. But I'm personally interested in tools for creating and understanding.
Creating: I have a hard time imagining Monet saying to his canvas, "Give me some water lilies. Make 'em impressionistic." Or designing a building by telling all the walls where to go. Most artistic and engineering projects (at least, non-language-based ones) can't just be described. They exist in space, and we manipulate space with our hands.
Understanding: If you simply want information — "What's the price of AAPL over the last three years?" — then an "oracle" like Wolfram Alpha is fine. But I believe that deep understanding requires active exploration, and I'm much more interested in explorable environments. Look at the interactive graphics in the Ladder of Abstraction essay, especially the later ones. You come to understand the system by pointing to things, adjusting things, moving yourself around the space of possibilities. I don't know how to point at something with my voice. I don't know how to skim across multiple dimensions with my voice.
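To make "adjusting things" concrete, here's a minimal sketch of that loop in Python with matplotlib. It isn't from the essay (those graphics are bespoke JavaScript); the damped-oscillator model and the "damping" parameter are stand-ins I chose just to give the handle something to drive. Grab the slider, move it, and the whole picture answers.

```python
# A toy "explorable": one parameter you adjust by hand, with the
# system's response redrawn immediately. The model (a damped
# oscillator) is an arbitrary stand-in.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

t = np.linspace(0, 10, 500)

def response(damping):
    # System behavior as a function of the explorable parameter.
    return np.exp(-damping * t) * np.cos(2 * np.pi * t)

fig, ax = plt.subplots()
fig.subplots_adjust(bottom=0.25)  # leave room for the slider
line, = ax.plot(t, response(0.5))
ax.set_xlabel("time")
ax.set_ylabel("response")

# The slider is the thing you point at and drag: moving it moves
# you through the space of possible systems.
slider_ax = fig.add_axes([0.15, 0.1, 0.7, 0.03])
damping = Slider(slider_ax, "damping", 0.0, 2.0, valinit=0.5)

def update(val):
    line.set_ydata(response(val))
    fig.canvas.draw_idle()

damping.on_changed(update)
plt.show()
```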
(But hey, if voice is what gets you excited, then go ahead and contribute to that field! That's great! My impression is that natural language processing already gets way more love than haptics, but my impression might be skewed from living with a bunch of Stanford NLP researchers over the last few years.)
My impression is that gestural interfaces fall into three categories.
A large vocabulary of discrete abstract gestures is essentially a sign language. So I would think that everything that applies to voice also applies in this case.
A small set of spatially-directed gestures at a distance (point to objects on a screen and move them around — Oblong / Minority Report) is indirect manipulation, not direct manipulation. It's kind of just a fancy mouse. I personally think this is a step backwards, and my (brief) experience has not been positive.
Directly manipulating a virtual 3D representation (Iron Man, if I remember right?) would have the problem of not being able to feel what you're manipulating, which I suspect would really throw off your proprioceptive senses. My (brief) experiences trying to manipulate objects by Waving My Hands In The Air have not been positive, because my hands were rarely where I thought they were. (But maybe throw some haptics in there, either via gloves or remotely-induced, and you might have something.)
The conventional means of interfacing the brain to the world (i.e., the body) was developed over millions of years, and works splendidly. Today's computer interfaces were hacked together over a few decades. If there's a mismatch between our bodies and our computers, don't you suspect that the fault might lie on the computer's side? Perhaps our computers should be adapted to fit our bodies, instead of blithely bypassing our bodies entirely.
We've almost given up on the body already. We sit at a desk while working, and sit on a couch while playing, and even sit while transporting ourselves between the two. We've had to invent this peculiar concept of artificial "exercise" to keep our bodies from atrophying altogether.
It won't be long before almost every one of our daily activities is mediated by a "computer" of some sort. If these computers are not part of the physical environment, if they bypass the body, then we've just created a future where people can and will spend their lives completely immobile.
Why do you want this future? Why would this be a good thing?
You know, I kind of wish Jean Piaget were still around to watch kids using touchscreens and figure out what's really going on.
Instead, here's a quote from neuroscientist Matti Bergström:
The density of nerve endings in our fingertips is enormous. Their discrimination is almost as good as that of our eyes. If we don't use our fingers, if in childhood and youth we become "finger-blind", this rich network of nerves is impoverished — which represents a huge loss to the brain and thwarts the individual's all-around development. Such damage may be likened to blindness itself. Perhaps worse, while a blind person may simply not be able to find this or that object, the finger-blind cannot understand its inner meaning and value.
A child can't understand Hamlet, but can understand The Cat in the Hat. Yet it's Shakespeare, not Dr. Seuss, who is the centerpiece of our culture's literature.
As we grow up, our minds develop and our bodies develop. A tool for adults should take full advantage of the adult capabilities of both mind and body. Tools that are dumbed down for children's minds or children's bodies are called "toys".
Channeling all interaction through a single finger is like restricting all literature to Dr. Seuss's vocabulary. Yes, it's much more accessible, both to children and to a small set of disabled adults. But a fully-functioning adult human being deserves so much more.
Perhaps you'd be interested in Magic Ink: Information Software and the Graphical Interface. It's super-brief!