Advance marks critical step toward brain-computer interfaces that hold immense promise for those with limited or no ability to speak.
In a scientific first, Columbia neuroengineers have created a system that translates thought into intelligible, recognizable speech. By monitoring someone’s brain activity, the technology can reconstruct the words a person hears with unprecedented clarity.
These findings were published in Scientific Reports.
“This would be a game changer. It would give anyone who has lost their ability to speak, whether through injury or disease, the renewed chance to connect to the world around them,” said the paper’s senior author, Dr. Nima Mesgarani.
Decades of research have shown that when people speak — or even imagine speaking — telltale patterns of activity appear in their brain. Distinct (but recognizable) patterns of signals also emerge when we listen to someone speak, or imagine listening. Experts trying to record and decode these patterns see a future in which thoughts need not remain hidden inside the brain — but instead could be translated into verbal speech at will.
But accomplishing this feat has proven challenging. Early efforts by Dr. Mesgarani and others to decode brain signals focused on simple computer models that analyzed spectrograms, which are visual representations of sound frequencies.
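For readers unfamiliar with spectrograms: a short-time Fourier transform (STFT) slices a waveform into overlapping frames and measures the energy at each frequency in each frame, producing the time-frequency picture the early decoders analyzed. The sketch below is purely illustrative — the function names and parameters are not from the study.

```python
# Minimal spectrogram sketch (illustrative, not the study's code):
# slide a Hann window over the signal and take the FFT of each frame.
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram of a 1-D signal."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([
        signal[i * hop : i * hop + frame_len] * window
        for i in range(n_frames)
    ])
    # rfft keeps only the non-negative frequencies of a real signal
    return np.abs(np.fft.rfft(frames, axis=1))

# A 440 Hz tone sampled at 8 kHz should concentrate energy in one bin.
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (n_frames, frame_len // 2 + 1)
peak_bin = spec[0].argmax()
print(peak_bin * sr / 256)  # frequency of the loudest bin, close to 440 Hz
```

Each row of the resulting array is one moment in time; each column is one frequency band — which is why a spectrogram can be treated as an image and fed to simple visual models.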

But because this approach failed to produce anything resembling intelligible speech, Dr. Mesgarani and his team, including the paper’s first author, Hassan Akbari, turned instead to a vocoder, a computer algorithm that can synthesize speech after being trained on recordings of people talking.
“This is the same technology used by Amazon Echo and Apple Siri to give verbal responses to our questions,” said Dr. Mesgarani.
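The study used a vocoder trained on speech recordings, but the core trick any vocoder performs — turning a spectral representation back into an audible waveform — can be illustrated with a classic phase-reconstruction method (Griffin-Lim). This sketch is not the algorithm from the paper; every name and parameter here is an assumption chosen for illustration.

```python
# Illustrative sketch (not the study's vocoder): Griffin-Lim iteratively
# guesses the phase that a magnitude spectrogram is missing, so the
# spectrogram can be inverted back into a waveform.
import numpy as np

FRAME, HOP = 256, 64

def stft(x):
    w = np.hanning(FRAME)
    n = 1 + (len(x) - FRAME) // HOP
    return np.stack([np.fft.rfft(x[i*HOP:i*HOP+FRAME] * w) for i in range(n)])

def istft(S):
    """Overlap-add inverse of stft, with window-energy normalization."""
    w = np.hanning(FRAME)
    out = np.zeros(FRAME + HOP * (len(S) - 1))
    norm = np.zeros_like(out)
    for i, frame in enumerate(S):
        out[i*HOP:i*HOP+FRAME] += np.fft.irfft(frame, FRAME) * w
        norm[i*HOP:i*HOP+FRAME] += w ** 2
    return out / np.maximum(norm, 1e-8)

def griffin_lim(mag, iters=30):
    """Recover a waveform whose magnitude spectrogram matches `mag`."""
    phase = np.exp(2j * np.pi * np.random.rand(*mag.shape))
    for _ in range(iters):
        x = istft(mag * phase)
        phase = np.exp(1j * np.angle(stft(x)))
    return istft(mag * phase)

# Demo: drop the phase of a 330 Hz tone, then reconstruct the audio.
np.random.seed(0)
sr = 8000
t = np.arange(sr // 5) / sr  # 0.2 seconds
tone = np.sin(2 * np.pi * 330 * t)
mag = np.abs(stft(tone))     # magnitudes only; phase is discarded
recon = griffin_lim(mag)
```

Trained neural vocoders replace this generic iteration with a model that has learned what real speech sounds like, which is what lets them produce natural-sounding voices from learned spectral features.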
News Source: https://zuckermaninstitute.columbia.edu/columbia-engineers-translate-brain-signals-directly-speech