By Jennifer Bails

It’s one of the top meetings worldwide for researchers who spend their days (and nights) searching for a “Steve Jobs” type of breakthrough that will impact how the rest of us interact with our computers. Scientists from Disney, Google, IBM, Intel, Microsoft, Samsung, and many other major technology companies make sure to show up for it every fall so they can learn about the latest innovations in human-computer interfaces coming out of universities and research labs.

As a measure of the significance of this annual meeting—the Association for Computing Machinery’s Symposium on User Interface Software and Technology (UIST)—a paper from the conference is downloaded, on average, every six minutes, year-round. That’s 87,000 copies annually. It’s just a matter of time before the ideas presented in these papers end up in the latest devices that all of us must have.

For example, every smartphone display now automatically turns from portrait to landscape when you rotate the device. That technology was first described in a paper presented at UIST in 2000 by Microsoft researcher Ken Hinckley, a former student of the late Carnegie Mellon professor Randy Pausch. The wall-size touchscreen you will be glued to on CNN during November’s presidential election came from the conference, as did so many of the cartoon-style animation tools on our computers.

When Chris Harrison packed his bags last fall for the 24th UIST symposium in Santa Barbara, Calif., he suspected he had a game-changing idea. He believed it could transform the way we work, the way we play, the way we communicate. It was an ambitious hunch for the fifth-year Carnegie Mellon PhD student, who was still a relative newbie in the computing world. But Harrison had a track record of making big things happen.

For as long as the 27-year-old can remember, he has had a creative tinkering bent—whether building an adapter to run an old laptop from his car battery or whiling away childhood hours developing a computer game where angry gorillas hurled bananas at each other. Harrison even spent one teenage summer designing and building a siege engine called a trebuchet, used in medieval times to pulverize castle walls. He received local media coverage—and drew a crowd of onlookers in one of his hometown’s parks—when he tested his trebuchet replica and successfully launched massive rocks at imaginary fortresses hundreds of yards away.

It’s this kind of imaginative impulse that led Harrison to the field of computer science after he graduated from high school in 2002. That year, sales of a new mp3 player called the iPod were off the charts, and camera phones and wireless headsets were making headlines as they hit the marketplace. “It was really becoming glamorous to be a computer scientist,” Harrison reflects.

He studied computer science as an undergraduate and master’s student at New York University. There, he developed an interest in the space where computer science and behavioral sciences intersect with design, known as human-computer interaction. The field aims to study and improve the relationship between people and computers. It’s why we hate our alarm clocks, but love using our iPhones. “They made it into something you want to use,” Harrison says.

At Carnegie Mellon, research in human-computer interaction can be traced to the start of the School of Computer Science in 1965. Faculty founders Allen Newell, Herbert A. Simon, and Alan J. Perlis—all recipients of the Turing Award, considered the Nobel Prize of computing—believed that computer science should include the study of social phenomena surrounding computers, not just the theory and design of computation devices themselves. Today, more than 60 faculty and staff work within Carnegie Mellon’s Human-Computer Interaction Institute, where Harrison began his PhD studies in 2007.

Harrison modestly describes himself as a “tinkerer,” and a quick glance at the funky décor inside his lab in Newell-Simon Hall reveals the exceptional range of his imagination and talent. A skilled craftsman, he knows how to work with metal, glass, plastics, ceramics, and wood. For instance, he fabricated a table for the lab by welding together old computer motherboards. He reclaimed rusty exhaust manifolds from a metal salvage yard, which he repurposed into a modern lighting fixture.

Nearly two dozen such “fun projects,” including the trebuchet, are listed on his Web site, together with about 40 PhD and MS research projects. Also catalogued online are his stunning visualizations of huge datasets—such as all the books on Amazon and all the Wikipedia topics—which have been displayed in museums and galleries internationally. In addition, he has mapped his travels to 50 countries, including Azerbaijan, Jordan, and Uganda, which he says has provided him with creative fodder for his research: “In developing countries, you can’t just go to the auto mechanic to have new parts from the factory installed. I’ve seen people repair cars with duct tape. That kind of engineering creativity has always been very inspiring to me.”

He knew it would take more than duct tape, though, to tackle the paradox of mobile devices—a challenge that had long interested him and aggravated the rest of us. Simply put, computers have dramatically increased in capability while decreasing in size; smartphones today are more powerful than desktop computers were a decade or two ago. That’s all good. However, engineers haven’t figured out how to miniaturize these powerful devices without shrinking their interactive surface area. That leaves us with smaller and smaller touchscreens, cramped buttons, and teeny jog wheels. As much as you might love your iPhone, you wouldn’t want to peck out your dissertation or a novel on its little screen, let alone read anything of substantial length on it. Entire Web sites are devoted to the funny messaging gaffes people make with the slip of a finger, thanks to the autocorrect feature.

It’s a classic Catch-22. Engineers can’t enlarge the device because who wants to schlep around an oversized smartphone? But they can’t scale it down any further because of the limits of our vision and the dexterity of our fingers. So we grit our teeth and text on, one tiny keystroke at a time.

In 2008, Harrison and his PhD advisor, Scott Hudson, started exploring a potential solution to this quandary through a project called Scratch Input. They used a bioacoustic sensor to “listen” to the high-frequency sound produced when a fingernail is dragged over a textured material like wood or paint. Coupling this sensor with a mobile device could allow any floor, piece of furniture, or article of clothing in contact with the device to serve as an input surface. For instance, if you had a smartphone in your pocket and wanted to silence an incoming call, you could just drag your fingernail on your jeans.
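The sensing idea can be illustrated with a toy sketch. This is not code from the Scratch Input project; the frequency band, threshold, and function names below are illustrative assumptions, and a real system would classify much richer acoustic features. The sketch simply flags a signal as a “scratch” when most of its spectral energy sits in a high-frequency band, which is what distinguishes a dragged fingernail from ordinary handling noise:

```python
import numpy as np

def scratch_energy(signal, sample_rate, band=(6000, 12000)):
    """Fraction of spectral energy in a high-frequency band.

    Fingernail scratches concentrate energy well above ordinary
    handling noise, so a band-energy ratio can flag them.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / spectrum.sum()

def is_scratch(signal, sample_rate, threshold=0.5):
    return scratch_energy(signal, sample_rate) > threshold

# Synthetic demo: a 100 Hz "handling" hum vs. an 8 kHz "scratch" burst.
rate = 44100
t = np.arange(rate // 10) / rate          # 100 ms analysis window
hum = np.sin(2 * np.pi * 100 * t)
scratch = np.sin(2 * np.pi * 8000 * t)

print(is_scratch(hum, rate))      # False
print(is_scratch(scratch, rate))  # True
```

The low-frequency hum is rejected while the high-frequency burst is flagged; a deployed sensor would add gesture classification on top of this kind of detector.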

Scratch Input later evolved into Skinput, which took acoustic sensing and actually put it on the human body. The system used an armband with an array of sensors to analyze the mechanical vibrations that propagate through your skin and bones when your fingers tap your body. Combining the armband with a tiny projector, which beams an image of a touchscreen, could turn the palm of your hand, for example, into an interactive monitor.

You might wonder why anyone would want to perform computing on their own bodies. The idea sounds almost creepy, and indeed, Popular Mechanics magazine two years ago called Skinput one of the top “Weird Science Stories” of 2010. “Sure, it looks a little crazy in pictures,” Harrison admits. “But when it’s actually on the skin, you find it’s incredibly intuitive because we are so familiar with ourselves.”

Harrison worked on Skinput as an intern at Microsoft Research in Redmond, Wash. He and his colleagues there presented their work at the 2010 Conference on Human Factors in Computing Systems (CHI) and won a Best Paper award. Soon afterward, a YouTube video of Skinput in action—featuring a jogger controlling his mp3 player by tapping on his palm and a gamer playing Tetris on his forearm—generated nearly 700,000 views, and the project was named one of the top 10 biggest technology stories of the year by New Scientist magazine.

Still, Harrison realized Skinput wasn’t a perfect solution. The clunky armband had some accuracy problems, and its use was limited to the human body—the approach wasn’t transferable to surfaces in the general environment. So he returned to Microsoft for another internship last year with the intention of taking the idea to the next level by replacing the acoustic sensors with a depth-sensing camera that could “see” a room in 3D.

He mounted the camera (akin to the one used in the Microsoft Kinect for the Xbox 360) on a user’s shoulder, where it tracked finger movements by looking for anything within arm’s length that was roughly sausage-shaped. It then used the depth information it collected to work out when and where the fingers touched the image of the screen generated by a tiny “pico” projector. The system—called OmniTouch—also automatically adjusted that image to account for the shape and orientation of the surface.
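The finger-and-touch pipeline can be sketched in miniature. Nothing here is taken from the OmniTouch paper itself—the pixel widths, depth tolerances, and function names are assumptions for illustration—but the sketch shows the two core tests: is a run of depth pixels finger-width and in front of the surface, and is a fingertip close enough to the surface to count as a click?

```python
import numpy as np

# Illustrative constants; a real system derives these empirically.
FINGER_WIDTH_PX = (5, 25)   # a "sausage" must span this many pixels
TOUCH_GAP_MM = 10           # fingertip this close to the surface is a click

def finger_spans(depth_row_mm, surface_depth_mm, min_gap_mm=3):
    """Find finger-width runs of pixels sitting in front of the surface
    in one row of a depth image."""
    near = list(depth_row_mm < surface_depth_mm - min_gap_mm) + [False]
    spans, start = [], None
    for i, in_front in enumerate(near):   # sentinel False closes the last run
        if in_front and start is None:
            start = i
        elif not in_front and start is not None:
            if FINGER_WIDTH_PX[0] <= i - start <= FINGER_WIDTH_PX[1]:
                spans.append((start, i))
            start = None
    return spans

def is_touching(finger_depth_mm, surface_depth_mm):
    """A click fires when the fingertip nearly reaches the projection surface."""
    return abs(surface_depth_mm - finger_depth_mm) <= TOUCH_GAP_MM

# Synthetic demo: a wall 700 mm away, with a 10-pixel-wide finger
# hovering 5 mm in front of it.
row = np.full(100, 700.0)
row[40:50] = 695.0

print(finger_spans(row, surface_depth_mm=700.0))   # [(40, 50)]
print(is_touching(695.0, 700.0))                   # True
```

The real system runs this kind of analysis over whole depth frames in 3D, tracks fingertips over time, and then warps the projected image onto whatever surface the touch lands on.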

Sounds complex, but the results are stunningly simple. “I don’t want to call it magic, but it feels like magic—it can turn bare walls and your own arms into interactive surfaces,” Harrison says.

With OmniTouch, by wearing the shoulder-mounted rig, your palm could become a tablet for jotting down notes or painting a digital picture. Maps projected on a wall could be panned and zoomed by swiping and pinching. You could plop down in front of your TV and just open your hands, which would serve as a remote, to change the channel. Or you could read this article on the counter at Starbucks and then order another latte with the simple tap of your finger. How about writing your novel using the booth countertop as a keyboard?!

“All of these scenarios that used to be thought of as science fiction are becoming reality,” says Microsoft researcher Hrvoje Benko, who worked on the project. “That’s exactly what OmniTouch illustrates—it’s basically an exploration of what it would mean if you could turn any available object into an interactive surface.”

After several months of work, Harrison and his Microsoft colleagues went public with OmniTouch at the highly anticipated 2011 UIST symposium. There, in front of the leading researchers in their field and industry professionals from around the world, they demonstrated how users could click away on a wall or their own hands instead of a standard touchscreen. “People totally got it,” Harrison says. “Typically, when you give someone a crazy new system, you have to let them play with it for about 10 minutes before you do any testing, or your data get messed up. But almost right away, our users knew what to do—and they wanted to know if they could use our demo system to dial their friends!”

OmniTouch has continued to generate buzz in the press and the computing world since that conference. “This is really essential work in our field as we are moving toward ultra-mobile devices, and Chris is totally a rock star,” exclaims Patrick Baudisch, chair of human-computer interaction at the Hasso-Plattner Institute in Germany.

Maximum PC magazine has called OmniTouch a display technology that will “change the way you see the world.” Wired UK dubbed it one of the “25 big ideas for 2012.” And Forbes magazine recently named Harrison to its “30 under 30” list of “today’s disruptors and tomorrow’s brightest stars” in science.

Harrison estimates it will be a few years before OmniTouch or something like it is available for consumers. The shoulder-mounted rig needs to shrink. “People don’t want to go to class or on a date with a huge thing on their shoulder—well, maybe at Carnegie Mellon,” he quips.

The ultimate goal is to make the technology completely unobtrusive—a consumer version could be smaller than a matchbox, worn as a watch or necklace. Within the next decade, experts agree, we could be reminiscing about how, before OmniTouch, we were all shackled to keyboards or squinting at phone screens.

Jennifer Bails is an award-winning freelance writer. She is a regular contributor to this magazine.

Related Links:

Forbes Names Harrison to "30 Under 30" List