Mind-reading AI turns paralysed man’s brainwaves into instant speech

A man with paralysis being connected to the brain-computer interface system

Lisa E Howard/Maitreyee Wairagkar et al. 2025

A man who lost the ability to speak can now hold real-time conversations and even sing through a brain-controlled synthetic voice.

The brain-computer interface reads the man’s neural activity via electrodes implanted in his brain and then instantaneously generates speech sounds that reflect his intended pitch, intonation and emphasis.

“This is kind of the first of its kind for instantaneous voice synthesis – within 25 milliseconds,” says Sergey Stavisky at the University of California, Davis.
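
To make that timing concrete, here is a minimal sketch of what such a frame-by-frame "brain-to-voice" loop could look like in Python. The decoder is a hypothetical stand-in, and the neural feature rate, audio rate and frame layout are illustrative assumptions rather than details taken from the study:

```python
# Minimal sketch of a streaming brain-to-voice loop: one synthesis step
# per 25 ms window of neural features. The decoder and all rates here
# are placeholders, not the study's actual pipeline.
import numpy as np

FRAME_MS = 25          # one synthesis step per 25 ms window (as reported)
FS_NEURAL = 1000       # assumed neural feature rate, samples per second
FS_AUDIO = 16000       # assumed output audio rate

def decode_frame(neural_window: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the trained neural-to-audio model.

    Takes one 25 ms block of multi-channel neural features and returns
    the corresponding block of synthesized audio samples.
    """
    n_audio = FS_AUDIO * FRAME_MS // 1000
    return np.zeros(n_audio)  # a real model would predict waveform frames

def stream(neural_features: np.ndarray) -> np.ndarray:
    """Run the decoder frame by frame, as a real-time loop would."""
    step = FS_NEURAL * FRAME_MS // 1000
    audio = []
    for start in range(0, neural_features.shape[0] - step + 1, step):
        audio.append(decode_frame(neural_features[start:start + step]))
    return np.concatenate(audio)

# 2 seconds of fake 256-channel neural features -> 2 seconds of audio
fake_features = np.random.randn(2 * FS_NEURAL, 256)
print(stream(fake_features).shape)
```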

The technology needs to be improved to make the speech easier to understand, says Maitreyee Wairagkar, also at UC Davis. But the man, who lost the ability to talk due to amyotrophic lateral sclerosis, still says it makes him “happy” and that it feels like his real voice, according to Wairagkar.

Speech neuroprostheses that use brain-computer interfaces already exist, but these generally take several seconds to convert brain activity into sounds. That makes natural conversation hard, as people can’t interrupt, clarify or respond in real time, says Stavisky. “It’s like having a phone conversation with a bad connection.”


To synthesise speech more realistically, Wairagkar, Stavisky and their colleagues implanted 256 electrodes into the parts of the man’s brain that help control the facial muscles used for speaking. Then, across multiple sessions, the researchers showed him thousands of sentences on a screen and asked him to try saying them aloud, sometimes with specific intonations, while recording his brain activity.
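
As a rough illustration of how such cued sessions might be organised into training data, here is a minimal sketch. The trial structure, feature rate and intonation labels are assumptions made for illustration, not the team's actual protocol:

```python
# Sketch of turning cued-sentence attempts into training pairs:
# each trial pairs a recorded block of 256-channel neural activity
# with the sentence shown on screen and an assumed intonation label.
from dataclasses import dataclass
import numpy as np

@dataclass
class Trial:
    neural: np.ndarray      # (time, 256) array of electrode features
    sentence: str           # the cued sentence shown on screen
    intonation: str         # e.g. "statement" or "question" (assumed labels)

def collect_trial(sentence: str, intonation: str, duration_s: float = 3.0) -> Trial:
    """Simulate one cued attempt: show a sentence, record neural activity."""
    n_samples = int(duration_s * 1000)           # assumed 1 kHz feature rate
    neural = np.random.randn(n_samples, 256)     # placeholder for real recordings
    return Trial(neural, sentence, intonation)

dataset = [
    collect_trial("How are you doing today?", "question"),
    collect_trial("How are you doing today?", "statement"),
]
print(len(dataset), dataset[0].neural.shape)
```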

“The idea is that, for example, you could say, ‘How *are* you doing today?’ or ‘How are you doing *today*?’, and that changes the semantics of the sentence,” says Stavisky. “That makes for a much richer, more natural exchange – and a big step forward compared to previous systems.”

Next, the team fed that data into an artificial intelligence model that was trained to associate specific patterns of neural activity with the words and inflections the man was trying to express. The machine then generated speech based on the brain signals, producing a voice that reflected both what he intended to say and how he wanted to say it.
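
The article doesn't specify the model's architecture, so the mapping being learned can only be illustrated with a deliberately simplified stand-in: a linear regression from neural features to acoustic features that a vocoder could render. The shapes, the ridge solver and the mel-style acoustic targets below are all simplified assumptions:

```python
# Simplified stand-in for the trained model: ridge regression from
# neural features to acoustic features. The real system is far more
# sophisticated; this only illustrates the mapping being learned.
import numpy as np

def fit_ridge(X: np.ndarray, Y: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """Solve W = argmin ||XW - Y||^2 + lam * ||W||^2 in closed form."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Toy shapes: 5000 time steps, 256 electrodes -> 80 acoustic features
X = np.random.randn(5000, 256)       # neural features per time step
Y = np.random.randn(5000, 80)        # e.g. mel-spectrogram frames (assumed)
W = fit_ridge(X, Y)
predicted_acoustics = X @ W          # decoded frames a vocoder could render
print(predicted_acoustics.shape)
```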

The researchers even trained the AI on voice recordings from before the man’s condition progressed, using voice-cloning technology to make the synthetic voice sound like his own.
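
One common way to wire in voice cloning – and only an assumption about how this system might do it – is to compute a fixed "speaker embedding" from the old recordings and condition every synthesized frame on it. The sketch below is a placeholder for that idea, not the team's method:

```python
# Assumed voice-cloning design: a fixed speaker embedding, computed once
# from pre-illness recordings, conditions every synthesized frame.
# Both functions are hypothetical placeholders.
import numpy as np

def speaker_embedding(recordings: list[np.ndarray]) -> np.ndarray:
    """Hypothetical encoder: summarise old recordings as one voice vector."""
    return np.mean([r.mean() * np.ones(64) for r in recordings], axis=0)

def synthesize(acoustic_frame: np.ndarray, voice: np.ndarray) -> np.ndarray:
    """A real vocoder would mix content (frame) with identity (voice)."""
    return np.concatenate([acoustic_frame, voice])  # placeholder conditioning

old_recordings = [np.random.randn(16000) for _ in range(3)]
voice = speaker_embedding(old_recordings)
print(synthesize(np.random.randn(80), voice).shape)
```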

In another part of the experiment, the researchers had him try to sing simple melodies using different pitches. Their model decoded his intended pitch in real time and then adjusted the singing voice it produced.
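
A closed pitch-control loop of that kind can be sketched as follows: a decoded pitch value per frame drives the fundamental frequency of the output tone. The pitch decoder here is a mock stand-in and the frame sizes are assumptions:

```python
# Sketch of closed-loop pitch control for the singing task: each 25 ms
# window of neural features yields one decoded pitch, which sets the
# fundamental frequency of the next audio frame.
import numpy as np

FS = 16000
FRAME = FS * 25 // 1000   # 25 ms of audio per decoded frame

def decode_pitch(neural_window: np.ndarray) -> float:
    """Hypothetical pitch decoder; a real one is trained on singing attempts."""
    return 220.0 * 2 ** (neural_window.mean() % 1)  # placeholder mapping

def synth_frame(f0: float, phase: float) -> tuple[np.ndarray, float]:
    """Render one frame of a tone at the decoded fundamental frequency."""
    t = np.arange(FRAME)
    wave = np.sin(phase + 2 * np.pi * f0 * t / FS)
    return wave, (phase + 2 * np.pi * f0 * FRAME / FS) % (2 * np.pi)

phase, audio = 0.0, []
for _ in range(40):  # one second of output
    frame_features = np.random.randn(25, 256)      # stand-in neural window
    f0 = decode_pitch(frame_features)
    wave, phase = synth_frame(f0, phase)
    audio.append(wave)
print(np.concatenate(audio).shape)
```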

He also used the system to speak without being prompted and to produce sounds like “hmm”, “eww” or made-up words, says Wairagkar.

“He’s a very articulate and intelligent man,” says team member David Brandman, also at UC Davis. “He’s gone from being paralysed and unable to speak to continuing to work full-time and have meaningful conversations.”
