Logical Creature

Hey, sometimes I write too! Mostly it's about tech.


MIT researchers have developed a computer interface that can transcribe words that the user verbalizes internally but does not actually speak aloud.


The system consists of a wearable device and an associated computing system. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations—saying words "in your head"—but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words.
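To make the idea concrete, here's a minimal sketch (in PyTorch) of how windowed electrode signals could be mapped to word labels by a small neural network. The channel count, window length, layer sizes, and toy vocabulary are all illustrative assumptions on my part, not the researchers' actual architecture.

# Minimal sketch (not the researchers' model): fixed-length windows of
# multi-channel electrode signals go into a small network that outputs a
# score per vocabulary word. Shapes and the vocabulary are assumptions.
import torch
import torch.nn as nn

NUM_CHANNELS = 7      # assumed number of jaw/face electrodes
WINDOW_SAMPLES = 250  # assumed samples per signal window
VOCAB = ["up", "down", "left", "right", "add", "subtract"]  # toy vocabulary

class SubvocalClassifier(nn.Module):
    def __init__(self, num_words: int):
        super().__init__()
        # 1-D convolutions over time pick up local patterns in the signals.
        self.features = nn.Sequential(
            nn.Conv1d(NUM_CHANNELS, 32, kernel_size=9, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # The last two layers map the pooled features to word scores.
        self.head = nn.Sequential(
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_words),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples) -> (batch, num_words)
        return self.head(self.features(x))

model = SubvocalClassifier(num_words=len(VOCAB))
window = torch.randn(1, NUM_CHANNELS, WINDOW_SAMPLES)  # one fake signal window
predicted_word = VOCAB[model(window).argmax(dim=1).item()]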

The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. Because they don’t obstruct the ear canal, the headphones enable the system to convey information to the user without interrupting conversation or otherwise interfering with the user’s auditory experience.

The device is thus part of a complete silent-computing system that lets the user undetectably pose and receive answers to difficult computational problems. In one of the researchers’ experiments, for instance, subjects used the system to silently report opponents’ moves in a chess game and just as silently receive computer-recommended responses.

Credit: Lorrie Lejeune/MIT
The basic configuration of the researchers’ system includes a neural network trained to identify subvocalized words from neuromuscular signals, but it can be customized to a particular user through a process that retrains just the last two layers.
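Here's a rough sketch of what that per-user customization could look like, continuing the assumptions from the sketch above: freeze the pretrained feature layers and retrain only the final two layers (the classifier head) on a small amount of the new user's data. Again, this is illustrative, not the researchers' actual training code.

# Per-user customization sketch: everything is frozen except the last two
# layers, which are assumed to live in `model.head` as in the sketch above.
import torch
import torch.nn as nn

def customize_for_user(model: nn.Module, user_windows: torch.Tensor,
                       user_labels: torch.Tensor, epochs: int = 20) -> None:
    for p in model.parameters():
        p.requires_grad = False      # freeze the pretrained feature layers...
    for p in model.head.parameters():
        p.requires_grad = True       # ...and retrain only the final two layers
    optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(user_windows), user_labels)
        loss.backward()
        optimizer.step()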

The researchers describe their device in a paper they presented at the Association for Computing Machinery's Intelligent User Interface (IUI) conference. Kapur is first author on the paper; Pattie Maes, a professor at the MIT Media Lab, is the senior author; and they're joined by Shreyas Kapur, an undergraduate majoring in electrical engineering and computer science.

"The motivation for this was to build an IA device—an intelligence-augmentation device," says Arnav Kapur, a graduate student at the MIT Media Lab, who led the development of the new system. "Our idea was: Could we have a computing platform that's more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?"

PRACTICALITY OF THE DEVICE?

Using the prototype wearable interface, the researchers conducted a usability study in which 10 subjects spent about 15 minutes each customizing the arithmetic application to their own neurophysiology, then spent another 90 minutes using it to execute computations. In that study, the system had an average transcription accuracy of about 92 percent.
But, Kapur says, the system's performance should improve with more training data, which could be collected during its ordinary use. Although he hasn't crunched the numbers, he estimates that the better-trained system he uses for demonstrations has an accuracy rate higher than that reported in the usability study (in short, more machine-learning training is still required).
Here's a video demonstrating the product:

THOUGHTS.
It should be helpful for spies and would be great if you want to speak on a phone in a noisy environment. It definitely qualifies as a problem-solving product. If you'd like to learn more about this product, head over to the Massachusetts Institute of Technology and ACM pages!
Thank you for reading!!

