Learn how brain researchers used AI models to reconstruct music from brain waves and what this new study may mean for brain-controlled speech prosthetics.
Scientists recently got one big step closer to creating better assistive technology for people who can’t speak. Brain researchers at Albany Medical Center trained a computer to analyze the brain activity of individuals while they listened to a Pink Floyd song and then successfully reconstructed a section of the song using a computer model. This was the first time such a model—based on machine learning and artificial intelligence—was able to reconstruct a recognizable piece of music using only neural activity patterns. Prior research had only been able to reconstruct sounds that loosely resembled the music a person was hearing.
The researchers emphasized that this technology is still in early experimental stages, and the reconstructed audio was relatively low quality. However, this is a huge step forward in brain research that has vast implications for creating more expressive technology to help people with brain damage or speech-related disabilities communicate.
An Innovative New Study With Exciting Results
Published in the journal PLOS Biology in August 2023, this new study focused on 29 participants with epilepsy. These individuals were chosen because they already had nets of electrodes implanted in their brains as part of their treatment for epilepsy. Participants listened to Pink Floyd’s “Another Brick in the Wall, Part 1” while a computer recorded their brain signals. Researchers then analyzed data from each participant, determining which areas of the brain were activated during the song and recording which frequencies created a response in each of these areas.
Because the number of frequencies represented affects the quality of an audio recording, the researchers used 128 frequency ranges to reconstruct “Another Brick in the Wall” from the gathered data. This required them to train 128 machine-learning models to convert the brain signals gathered by the electrodes into audio, ultimately producing a song snippet. While it sounds slightly muffled, the resulting music clip is still distinctly recognizable. (You can listen to the original clip and the AI-reproduced version here.)
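For readers curious about what "training 128 models" means in practice, the band-by-band decoding described above can be sketched loosely in code. The toy example below fits one simple linear model per frequency band, mapping brain-signal features to that band's loudness over time, using entirely synthetic stand-in data. It is only an illustration of the general idea, not the study's actual pipeline, features, or model class.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (NOT the study's data): "neural" is time points x
# electrode features; "audio_bands" is time points x 128 frequency bands.
n_time, n_electrodes, n_bands = 500, 20, 128
neural = rng.normal(size=(n_time, n_electrodes))
true_weights = rng.normal(size=(n_electrodes, n_bands))
audio_bands = neural @ true_weights + 0.1 * rng.normal(size=(n_time, n_bands))

# Train one linear decoder per frequency band (128 models in total),
# mirroring the band-by-band approach at a cartoon level.
weights = np.empty((n_electrodes, n_bands))
for band in range(n_bands):
    # Ordinary least squares: brain activity -> this band's amplitude over time
    w, *_ = np.linalg.lstsq(neural, audio_bands[:, band], rcond=None)
    weights[:, band] = w

# Stack the 128 predicted bands into a spectrogram-like array; a real
# pipeline would then invert this spectrogram back into an audio waveform.
reconstruction = neural @ weights
```

With more bands, the predicted spectrogram captures finer spectral detail, which is why the researchers' 128-band reconstruction is recognizable rather than a blur.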
Each participant’s song recreation was slightly different, which researchers credit to the unique locations in which electrodes were placed in each individual’s brain. Interestingly, they believe some of the variance was affected by personal characteristics, such as whether the individual was a musician. Additionally, the researchers could only see brain activity where doctors had placed electrodes for seizure treatment. This limited the amount of data that could be captured and also accounts for part of why the recreated songs sound hazy.
Unlocking How the Brain Processes Music and Sound
The study’s researchers chose the famous track from Pink Floyd’s 1979 album, The Wall, for a few different reasons. To get optimal results, they wanted to use music that the older patients enjoyed. Researchers also wanted a complex song with both vocal and instrumental sections to analyze how the brain processes melody versus language.
The study’s findings confirmed that while both hemispheres of the brain play a role in music perception, the right hemisphere is more involved than the left. When people process plain speech, the left hemisphere of the brain is more active. This helps explain why some people who have experienced strokes may struggle to speak but can sometimes sing simple sentences.
Researchers found a spot in the brain’s temporal lobe that seems particularly active in processing music, with a specific subregion connected to rhythm. This supports previous research, which has found that different parts of the brain are linked to processing particular aspects of music, such as pitch and timbre (tone quality).
Promising New Applications for Brain-Controlled Prosthetics
This new study is more than just a novel application of AI in brain research. The findings are a significant step toward improved brain-controlled prosthetic technology for people who can’t speak. The hope is that as researchers develop more advanced models based on this work, it will lead to better prosthetics and other assistive devices for people who have lost the ability to speak due to brain disease or brain damage.
Over the past decade, scientists have made major advances in translating the brain’s electrical signals into words—but there’s more to speech than simply saying words. A lot of information comes from what linguists call prosodic elements: the rhythm, stress, and intonation in someone’s speech. Think about how mechanical a robotic voice is—that’s what speech without prosodic elements sounds like.
If scientists better understand how the brain processes music and complex sounds, they can use this knowledge to develop improved speech prosthetics—devices that assist in the production of speech and language. More expressive speech prosthetics could translate brain activity in more complex ways to create more natural-sounding speech. This would enable more effective communication for people with speech issues due to neurologic diseases, strokes, or injuries.
Why Brain Research Is Crucial
The brain is a complex organ that plays a role in every one of our daily functions, so each new discovery in brain research has the potential to create a ripple effect. While the researchers’ ability to recreate a song based on brain activity is stunning, it’s incredible to think that continued research could restore the gift of speech to someone living with aphasia after a stroke.
That’s why it’s essential to invest in research across a broad range of brain diseases and disorders—every new finding takes us one step closer to treatments and cures for the millions of people living with brain disease.
Stay updated on the latest news from the American Brain Foundation by following us on Twitter and Facebook. Only through research will we find cures for all brain diseases and disorders. Donate today to make a difference.