QualityPoint Technologies News

Emerging Technologies News

Brain-computer interface guides speech-disabled person’s intended words to computer screen

Posted on August 27, 2023

Using a brain-computer interface, a clinical trial participant who lost the ability to speak was able to create text on a computer at rates that approach the speed of regular speech just by thinking of saying the words.

Scientists with the BrainGate research collaborative have reached a major milestone in restoring speech for people who have lost the ability to speak due to paralysis.

In a new study published in Nature, the researchers describe using sensors implanted in areas of the cerebral cortex associated with speech to accurately turn the brain activity of a patient with ALS who lost the ability to speak into words on a screen just by the patient thinking of saying them.

The clinical trial participant — who can no longer use the muscles of her lips, tongue, larynx and jaws to enunciate units of sound clearly — was able to generate 62 words per minute on a computer screen simply by attempting to speak. This is more than three times as fast as the previous record for assisted communication using implanted brain-computer interfaces (BCIs) and begins to approach the roughly 160-word-per-minute rate of natural conversation among English speakers.

The study shows that it’s possible to use neural activity to decode attempted speaking movements with better speed and a larger vocabulary than what was previously possible.

The study is the latest in a series of advances in brain-computer interfaces made by the BrainGate consortium, which for several years has been developing systems that enable people to generate text through direct brain control. Previous incarnations involved trial participants thinking about the motions of pointing to and clicking letters on a virtual keyboard and, in 2021, converting a paralyzed person’s imagined handwriting into text on a screen at a speed of 18 words per minute.

The researchers say that, with credit and thanks to the extraordinary people with tetraplegia who enroll in the BrainGate clinical trials and other BCI research, they continue to see the incredible potential of implanted brain-computer interfaces to restore communication and mobility.

One of those extraordinary people is Pat Bennett, who having learned about the 2021 work, volunteered for the BrainGate clinical trial that year.

Bennett, now 68, is a former human resources director and daily jogger who was diagnosed with ALS (amyotrophic lateral sclerosis) in 2012. For Bennett, the progressive neurodegenerative disease stole her ability to speak intelligibly. While Bennett’s brain can still formulate directions for generating units of sound called phonemes, her muscles can’t carry out the commands.

As part of the clinical trial, a neurosurgeon placed two pairs of tiny electrode arrays, each about the size of a baby aspirin, in two separate speech-related regions of Bennett’s cerebral cortex. An artificial-intelligence algorithm receives and decodes the electrical signals emanating from Bennett’s brain, eventually teaching itself to distinguish the distinct brain activity associated with her attempts to form each of the phonemes — such as the sh sound — that are the building blocks of spoken English.

The decoder then feeds its best guess concerning the sequence of Bennett’s attempted phonemes into a language model, which acts essentially as a sophisticated autocorrect system. This system then converts the streams of phonemes into the sequence of words they represent, which are then displayed on the computer screen.
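The two-stage idea — a decoder guessing phonemes, then a language model resolving them into the most plausible words — can be sketched in miniature. The lexicon, word frequencies, and scoring below are toy assumptions for illustration, not the study’s actual decoder or model:

```python
# Minimal sketch of phoneme-to-word decoding with a language-model prior.
# Stage 1's output is simulated as a decoded phoneme sequence; stage 2
# scores each candidate word by how well its dictionary pronunciation
# matches, weighted by a toy unigram "language model".

# Hypothetical pronunciation lexicon: word -> phoneme sequence.
LEXICON = {
    "she":  ("SH", "IY"),
    "shoe": ("SH", "UW"),
    "see":  ("S", "IY"),
    "sea":  ("S", "IY"),
}

# Toy unigram frequencies standing in for a language model.
WORD_FREQ = {"she": 0.5, "see": 0.3, "sea": 0.1, "shoe": 0.1}

def phoneme_match_score(guess, target):
    """Fraction of positions where the decoded phonemes match a word's
    pronunciation (a stand-in for the decoder's confidence)."""
    if len(guess) != len(target):
        return 0.0
    return sum(g == t for g, t in zip(guess, target)) / len(target)

def decode_word(phoneme_guess):
    """Combine the match score with the language-model prior as a
    simple noisy-channel product and return the best-scoring word."""
    best_word, best_score = None, 0.0
    for word, pron in LEXICON.items():
        score = phoneme_match_score(phoneme_guess, pron) * WORD_FREQ[word]
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# The decoder guesses "SH IY": several words partially match, and the
# language-model prior breaks the tie toward the most frequent one.
print(decode_word(("SH", "IY")))  # she
```

A real system scores whole sentences rather than isolated words, which is what lets the language model act as a "sophisticated autocorrect" over noisy phoneme streams.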

To teach the algorithm to recognize which brain-activity patterns were associated with which phonemes, Bennett took part in about 25 training sessions, each lasting about four hours, in which she attempted to repeat sentences chosen randomly from a large data set.

As part of these sessions, the research team also analyzed the system’s accuracy. They found that when the sentences and the word-assembling language model were restricted to a 50-word vocabulary, the translation system’s error rate was 9.1%. When vocabulary was expanded to 125,000 words, large enough to compose almost anything someone would want to say, the error rate rose to 23.8%.
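Error rates like these are conventionally computed as word error rate: the word-level edit distance (substitutions, insertions, deletions) between what the system produced and what the participant attempted to say, divided by the length of the intended sentence. A 9.1% rate means roughly one word in eleven comes out wrong. A minimal implementation, with an invented example sentence:

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance between the
    intended (reference) and decoded (hypothesis) sentences, divided
    by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One wrong word out of four intended words -> 25% error rate.
print(word_error_rate("i want to eat", "i want to sleep"))  # 0.25
```

The trade-off the team measured — lower error with a 50-word vocabulary, higher error at 125,000 words — reflects the language model having far more candidate words to choose among as the vocabulary grows.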

The researchers say the figures are far from perfect but represent a giant step forward from prior results using BCIs. They are hopeful of what the system could one day achieve — as is Bennett.

For those who are nonverbal, this means they can stay connected to the bigger world, perhaps continue to work, maintain friends and family relationships. Imagine how different conducting everyday activities like shopping, attending appointments, ordering food, going into a bank, talking on a phone, expressing love or appreciation — even arguing — will be when nonverbal people can communicate their thoughts in real time.

News Source: Brown University
