Air guitar: Software interprets signals sent from electromyography sensors attached to a forearm, enabling the user to control computer games such as Guitar Hero and Rock Band.
Credit: Microsoft
It's a good time to be communicating with computers. No longer are we constrained by the mouse and keyboard--touch screens and gesture-based controllers are becoming increasingly common. A startup called Emotiv Systems even sells a cap that reads brain activity, allowing the wearer to control a computer game with her thoughts.

Now, researchers at Microsoft, the University of Washington in Seattle, and the University of Toronto in Canada have come up with another way to interact with computers: a muscle-controlled interface that allows for hands-free, gestural interaction.

A band of electrodes attaches to a person's forearm and reads electrical activity from different arm muscles. These signals are then correlated with specific hand gestures, such as touching a finger and thumb together or gripping an object tighter than normal. The researchers envision using the technology to change songs on an MP3 player while running, or to play a game like Guitar Hero without the usual plastic controller.

Muscle-based computer interaction isn't new. In fact, the muscles near an amputated or missing limb are sometimes used to control mechanical prosthetics. But while researchers have explored muscle-computer interaction for nondisabled users before, the approach has had limited practicality. Inferring gestures reliably from muscle signals is difficult, so such interfaces have often been restricted to sensing a limited range of gestures or movements.

The new muscle-sensing project is "going after healthy consumers who want richer input modalities," says Desney Tan, a researcher at Microsoft. As a result, he and his colleagues had to come up with a system that was inexpensive and unobtrusive and that reliably sensed a range of gestures.

The group's most recent interface, presented at the User Interface Software and Technology conference earlier this month in Victoria, British Columbia, uses six electromyography (EMG) sensors and two ground electrodes arranged in a ring around a person's upper right forearm to sense finger movement, and two sensors on the upper left forearm to recognize hand squeezes. While these sensors are wired and individually placed, their orientation isn't exact--that is, specific muscles aren't targeted. This means the results should be similar for a thin EMG armband that an untrained person could slip on without assistance, Tan says. The research builds on previous work that used a more expensive EMG system to sense finger gestures when a hand is laid on a flat surface.

The sensors cannot accurately interpret muscle activity straight away. Software must first be trained to associate the electrical signals with different gestures. The researchers used standard machine-learning algorithms, which improve their accuracy over time (the approach is similar to the one Tan uses for his brain-computer interfaces).
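
As a rough sketch of that training step, the example below fits a classifier to labeled feature vectors extracted from calibration windows. The synthetic data and the choice of a support-vector machine are assumptions made for illustration; the article says only that the team used standard machine-learning algorithms.

```python
# Minimal sketch of training a gesture classifier on labeled EMG windows.
# The feature vectors, labels, and the support-vector machine are
# illustrative assumptions, not the researchers' published pipeline.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 200 calibration windows, 24 features each,
# labeled with one of four hypothetical gestures.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 24))    # one feature vector per EMG window
y = rng.integers(0, 4, size=200)  # gesture label recorded during calibration

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = SVC(kernel="rbf").fit(X_train, y_train)  # learn the signal-to-gesture mapping
print("held-out accuracy:", clf.score(X_test, y_test))
```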
 

"We spent a lot of time trying to figure out how to get the user to calibrate the device in an appropriate way," says Tan. The software learns to recognize EMG signals produced as the user performs gestures in a specific, controlled way.

The algorithms focus on three features of the EMG data: the magnitude of muscle activity, the rate of muscle activity, and the wave-like patterns of activity that occur across several sensors at once. These three features, says Tan, provide a fairly accurate way to identify certain types of gestures. After training, the software could correctly identify many of the participants' gestures more than 85 percent of the time, and some more than 90 percent of the time.
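
To make those three feature types concrete, the sketch below computes one plausible version of each from a short window of multi-channel EMG samples: RMS amplitude for magnitude, the mean absolute sample-to-sample change for rate, and cross-channel correlations for coordinated activity across sensors. These formulations are assumptions for illustration, not the paper's published feature definitions.

```python
# Plausible per-window EMG features: overall magnitude, rate of change,
# and coordinated activity across sensors. The specific formulas are
# illustrative assumptions, not the published feature set.
import numpy as np

def emg_features(window: np.ndarray) -> np.ndarray:
    """window: samples x channels array covering one short time slice."""
    magnitude = np.sqrt(np.mean(window ** 2, axis=0))        # RMS amplitude per channel
    rate = np.mean(np.abs(np.diff(window, axis=0)), axis=0)  # how quickly activity changes
    corr = np.corrcoef(window, rowvar=False)                 # channel-to-channel coordination
    cross = corr[np.triu_indices_from(corr, k=1)]            # keep each channel pair once
    return np.concatenate([magnitude, rate, cross])

# Example: a 128-sample window from 8 sensors yields one feature vector.
window = np.random.default_rng(1).normal(size=(128, 8))
print(emg_features(window).shape)  # (8 + 8 + 28,) = (44,)
```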

Especially in the early stages of training, a participant's gestures need to be carefully guided to ensure that the machine-learning algorithms are trained correctly. But Tan says that even with a small amount of feedback, test subjects "would fairly naturally adapt and change postures and gestures to get drastically improved performance." Having users learn to trigger the appropriate response from the system, he says, became an important part of the training process.

"Most of today's computer interfaces require the user's complete attention," says Pattie Maes, professor of media arts and sciences at MIT. "We desperately need novel interfaces such as the one developed by the Microsoft team to enable a more seamless integration of digital information and applications into our busy daily lives."

Tan and colleagues are now working on a prototype that uses a wireless band that can easily be slipped onto a person's arm, as well as a "very quick training system." The researchers are also testing how well the system works when people walk and run while wearing it.

Ultimately, says Tan, full-body control will lead to fundamentally new ways of using computers. "We know it has something to do with gestures being mobile, always available, and natural, but we're still working on the exact paradigm," he says.

Copyright Technology Review 2009.