How will we interact with computers a decade or two from now? Could we manipulate digital objects with more than just our fingertips? Will screens become obsolete? Chris Harrison spends his days trying to answer these very questions. An assistant professor at Carnegie Mellon University, in Pittsburgh, he directs the Future Interfaces Group, an engineering playground where he and his students conduct what he calls “time-machine research.” By hacking or cobbling together existing technologies, they are exploring new, more expressive ways of communicating with machines.

Among the group’s myriad inventions are a smart watch that wearers can manipulate mechanically, a light projector that turns any surface into a touch screen, and a tablet application that allows users to summon different digital tools, such as a pen or a magnifying glass, simply by changing how they touch the screen. Future computers could easily have such capabilities, Harrison says, although they almost certainly won’t look like anything we can imagine today. Decades before humans invented the airplane, he points out, people drew pictures of flying sailboats and bird-drawn carriages. “They got the concept right but the implementation wrong,” he says. And consumer electronics are no different. “When we envision possible future interfaces, we assemble them out of the things we know.”

Who knows, then, what must-have gadgets will exist in 2064? Whatever form they take, Harrison is certain that our interactions with them will be more natural and versatile—that is, more like our interactions in the real world.

For more on the Harrison lab and other wearables research, see “Wearable Computers Will Transform Language.”

Transcript:

Chris Harrison: We probably have more processing power in this room, on this floor of this building, than the entire United States government did 20 years ago. Pretty obscene.

We are here in a new lab space, called the Future Interfaces Group, where we make new interfaces and sensors to make the interactions between humans and computers better, more fluid, more natural, and more powerful. One of the styles of research that we do here in the lab is what you might call “time-machine research,” which is the notion that you cobble together things you can build today to take a peek at what technologies might be like in 10 years’ time.

What that does is let us ask the interesting questions about what’s going to be useful. If we glue a prototype together and build the experience, it may be a very expensive experience, but we can say, “Is this interesting? Is this useful?” And if it is interesting and useful, then that makes the case that we should actually build these devices and make them better. But until you actually build it, it’s often hard to know.

So if you wanted to simulate what a smartphone will be like 5 or 10 years from now, you’d put 20 smartphones or 20 computers in a closet and run a little cable out to a small screen to simulate that processing power, because we don’t have any smartphones that are that powerful today. And so by cobbling together what we can barely do today, we can get a much better understanding of what’s going to be possible and commercially feasible tomorrow. What might cost [US] $1,000 today will cost $100 in a couple of years’ time.
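To make the closet-of-computers idea concrete, here is a minimal, hypothetical Python sketch of the thin-client pattern Harrison describes: a lightweight device forwards a heavy job to a rack of machines and gets the result back, simulating tomorrow’s onboard processing power. The server address, endpoint, and task name are invented for illustration; this is not actual Harrison-lab code.

# Hypothetical "time-machine" prototyping: a lightweight client (standing
# in for a future smartphone) offloads heavy computation to a powerful
# remote rack, so the experience of tomorrow's hardware can be tested today.
import json
import urllib.request

OFFLOAD_SERVER = "http://closet-rack.local:8080/compute"  # assumed rack in the closet

def run_on_future_hardware(task: str, payload: dict) -> dict:
    """Send a compute-heavy task to the remote rack and return its result.

    On a real future device this would run locally; here, the network
    round trip is the price of peeking ahead.
    """
    request = urllib.request.Request(
        OFFLOAD_SERVER,
        data=json.dumps({"task": task, "payload": payload}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=5.0) as response:
        return json.loads(response.read().decode("utf-8"))

# Example: ask the "future phone" for real-time scene segmentation,
# something far beyond what a 2014 phone SoC could do on its own.
result = run_on_future_hardware("segment_scene", {"frame_id": 42})
print(result)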

This notion of having interactivity everywhere is a wonderful concept. It’s why we put computers in our pockets: we want to have computational capability, so information and information retrieval, communication, and so on, with us all the time.

You can have a personal display, like a heads-up display that covers your eyes and augments your vision, or you can directly project onto the environment with projectors. What I really like about the second approach is that by having it embedded in the environment for everyone to see, not just yourself, you can have a shared experience that everyone can participate in. So it’s not that I see something and you don’t, or that I get an augmented experience that’s different from yours; we can actually have something that we can all walk up to together. I can see that you’re using an augmented surface as opposed to just a blank wall, and I can understand that you’re doing that. I can also tell whether you’re interruptible, and I can come over, see what you’re doing, and say, “Hey, what are you working on?”

Humans like collaborating. We like walking up to things, grabbing whiteboard markers, and working together, and I think there’s a danger of losing that if we go totally virtual. So rather than having it be a private augmented reality, I like this notion of augmenting the real, physical world around us and making that powerful.

We also asked ourselves this question of, “Well, what does it mean to have tools in a digital medium?” You hold a hammer in a very particular way, or a camera, or scissors, and how you hold it gives you an affordance for using that tool. So a lot of the research we’re doing looks at how you give touch screens more modality by capturing more interesting and powerful dimensions of touch.
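To illustrate what extra “dimensions of touch” can buy, here is a minimal, hypothetical Python sketch in which a touch’s contact size and angle, not just its x/y position, determine which digital tool is summoned, much as grip selects the tool in the physical world. The TouchEvent fields and thresholds are invented for illustration and are not from any real touch-screen API or from the lab’s software.

# Hypothetical sketch: use extra dimensions of a touch (contact size and
# finger pitch, beyond x/y position) to decide which tool the user is
# summoning. Field names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class TouchEvent:
    x: float
    y: float
    contact_area_mm2: float  # size of the finger's contact patch
    pitch_deg: float         # finger angle relative to the screen plane

def select_tool(touch: TouchEvent) -> str:
    """Map how the screen is touched to which digital tool appears."""
    if touch.contact_area_mm2 > 80.0:
        return "eraser"      # broad press with the flat of the finger
    if touch.pitch_deg > 60.0:
        return "pen"         # steep, fingertip-like contact
    return "magnifier"       # shallow contact with the finger pad

# A steep fingertip touch summons the pen.
print(select_tool(TouchEvent(x=120, y=340, contact_area_mm2=25.0, pitch_deg=75.0)))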

It isn’t just a matter of pure computer science. There’s a design component. There’s a social-science component. There’s a cognitive-science component. And really, only if you understand all those things are you going to make something that’s truly awesome.

NOTE: Transcripts are created for the convenience of our readers and listeners and may not perfectly match their associated interviews and narratives. The authoritative record of IEEE Spectrum’s video programming is the video.

Source: http://spectrum.ieee.org/video/consumer-electronics/portable-devices/chris-harrisons-time-machine
