Scientists have successfully converted a paralysed man’s brain waves into speech. 

Researchers in the US have developed a neuroprosthetic device that can translate the brain waves of a paralysed person into complete sentences.

The device picks up signals sent from the man’s brain to his vocal tract and converts them directly into words that appear as text on a screen.

“To our knowledge, this is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralysed and cannot speak,” says UC San Francisco neurosurgeon Dr Edward Chang.

“It shows strong promise to restore communication by tapping into the brain’s natural speech machinery.”

Previously, simpler devices have restored communication through spelling-based approaches, letting patients type out letters one by one.

But the new approach translates signals intended to control the muscles of the vocal tract for speaking, rather than signals to move the arm or hand for typing.

Dr Chang says this will allow more rapid and organic communication.

“With speech, we normally communicate information at a very high rate, up to 150 or 200 words per minute,” he says, noting that spelling-based approaches using typing, writing, and controlling a cursor are considerably slower and more laborious.

“Going straight to words, as we’re doing here, has great advantages because it’s closer to how we normally speak.”

The breakthrough was facilitated by patients at the UCSF Epilepsy Center who were undergoing neurosurgery to pinpoint the origins of their seizures, using electrode arrays placed on the surface of their brains.

These patients, all of whom had normal speech, volunteered to have their brain recordings analysed for speech-related activity. Early success with these patient volunteers paved the way for the current trial in people with paralysis.

Researchers had also prepared by mapping the cortical activity patterns associated with vocal tract movements that produce each consonant and vowel. They then developed methods for real-time decoding of those patterns and statistical language models to improve accuracy.
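The study’s code isn’t reproduced in this article, but the general technique of pairing a word classifier with a statistical language model can be sketched in a few lines of Python. Everything below, the vocabulary, the probabilities and the bigram model, is invented for illustration: a small beam search keeps the sentence hypotheses that best satisfy both the hypothetical neural classifier and the language model.

```python
# A minimal sketch (not the study's code) of pairing a word classifier
# with a statistical language model. All words and probabilities are
# invented for illustration.

# Hypothetical classifier output: P(word | brain activity) per attempt
neural_probs = [
    {"i": 0.6, "am": 0.3, "good": 0.1},
    {"am": 0.5, "and": 0.3, "thirsty": 0.2},
    {"thirsty": 0.55, "family": 0.25, "good": 0.2},
]

# Hypothetical bigram language model: P(word | previous word)
bigram = {
    ("<s>", "i"): 0.5, ("<s>", "am"): 0.1,
    ("i", "am"): 0.6, ("i", "and"): 0.01,
    ("am", "thirsty"): 0.4, ("am", "good"): 0.3,
}

def decode(neural_probs, bigram, beam_width=5, floor=1e-4):
    """Beam search for the word sequence that best fits both models."""
    beams = [([], 1.0, "<s>")]  # (sequence, probability, previous word)
    for step in neural_probs:
        candidates = []
        for seq, p, prev in beams:
            for word, p_neural in step.items():
                p_lm = bigram.get((prev, word), floor)  # unseen pairs get a floor
                candidates.append((seq + [word], p * p_neural * p_lm, word))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]

print(" ".join(decode(neural_probs, bigram)))  # -> "i am thirsty"
```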

To translate the patterns of recorded neural activity into specific intended words, two of the study’s lead authors, Sean Metzger and Jessie Liu (both bioengineering doctoral students in the Chang Lab), used custom neural network models, a form of artificial intelligence.

When the participant attempted to speak, these networks distinguished subtle patterns in brain activity to detect speech attempts and identify which words he was trying to say.
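The article doesn’t detail the networks’ architecture, but the two-part idea, detecting that a speech attempt is under way and classifying which word is being attempted, can be sketched as follows. The PyTorch framework, channel count, window length and vocabulary size here are all assumptions for illustration, not the study’s actual design.

```python
# A hypothetical sketch (using PyTorch) of the two-part idea described
# above: one head detects whether a speech attempt is happening, the
# other classifies which word is being attempted. The study's real
# architecture, channel count and vocabulary are not reproduced here.
import torch
import torch.nn as nn

class SpeechDecoder(nn.Module):
    def __init__(self, n_channels=128, vocab_size=50, hidden=256):
        super().__init__()
        # A recurrent layer summarises the multichannel neural time series
        self.rnn = nn.GRU(n_channels, hidden, batch_first=True)
        self.detect = nn.Linear(hidden, 1)             # speech attempt? (logit)
        self.classify = nn.Linear(hidden, vocab_size)  # which word? (logits)

    def forward(self, x):          # x: (batch, time, channels)
        _, h = self.rnn(x)         # final hidden state: (1, batch, hidden)
        h = h.squeeze(0)
        return self.detect(h), self.classify(h)

model = SpeechDecoder()
window = torch.randn(1, 200, 128)  # one window of simulated activity
attempt_logit, word_logits = model(window)
word_probs = torch.softmax(word_logits, dim=-1)
```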

The team found that the system could decode words from brain activity at a rate of up to 18 words per minute, with up to 93 percent accuracy (75 percent median). The researchers boosted accuracy by implementing an “auto-correct” function, similar to those used in consumer texting and speech recognition software.
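How the study’s auto-correct works isn’t specified here; as a loose analogy to the texting-style correction mentioned above, the sketch below simply snaps each decoded word to its nearest neighbour in a fixed vocabulary by edit distance. The vocabulary and the garbled input are invented examples.

```python
# A loose, texting-style illustration of an "auto-correct" pass (not the
# study's actual method): each decoded word is snapped to the closest
# entry in a fixed vocabulary. Vocabulary and input are invented.

VOCAB = ["hello", "how", "are", "you", "thirsty", "family", "good"]

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic dynamic-programming recurrence."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,          # delete ca
                dp[j - 1] + 1,      # insert cb
                prev + (ca != cb),  # substitute ca -> cb
            )
    return dp[-1]

def autocorrect(words):
    # Ties break by vocabulary order; a real system would more likely
    # weight candidates with a language model instead.
    return [min(VOCAB, key=lambda v: edit_distance(w, v)) for w in words]

print(autocorrect(["yuo", "arre", "thirsy"]))  # -> ['you', 'are', 'thirsty']
```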