Researchers from the University of Technology Sydney (UTS) have unveiled a portable, non-invasive system that can decode a person's silent thoughts and translate them into readable text.
In a demonstration released by the team, a participant wears an electroencephalography (EEG) cap, which records the electrical activity the brain produces during any task. Here, those readings are used to convert silent thoughts into words.
Until now, translating the brain's language into understandable terms has required either invasive surgery for brain implants, such as Elon Musk's Neuralink, or costly MRI scans.
"This research represents a pioneering effort in translating raw EEG waves directly into language, marking a significant breakthrough in the field," said Professor CT Lin, director of the GrapheneX-UTS HAI Centre.
Portable, non-invasive, mind-reading AI
The researchers explained in a press release that an EEG wave, which represents electrical activity in the brain, is divided into separate units with unique characteristics and patterns.
This segmentation is achieved using an artificial intelligence (AI) model, DeWave, developed in-house by the researchers. DeWave learns to interpret EEG signals by training on large quantities of EEG data.
Essentially, DeWave acts like a translator for EEG signals. It takes the complex patterns and information embedded in the EEG wave and converts them into understandable forms, such as words and sentences.
This process is made possible by training the AI model on a large amount of EEG data, allowing it to recognise and associate specific patterns in the brainwave signals with meaningful linguistic representations.
The end result is a way to express and communicate the information contained within EEG signals in a more accessible and human-readable format.
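The discrete-encoding idea described above can be sketched in miniature: measured EEG feature vectors are snapped to the nearest entry in a learned "codebook" of discrete codes, and each code is then mapped to a word. The codebook values, word mapping, and feature vectors below are all invented for illustration; this is not the actual DeWave model or its data.

```python
# Toy sketch of discrete encoding for brain-to-text translation.
# All numbers and mappings here are hypothetical stand-ins.
import math

# A hypothetical learned codebook: each entry pairs an integer code
# with a prototype EEG feature vector.
CODEBOOK = [
    (0, (0.9, 0.1)),
    (1, (0.1, 0.8)),
    (2, (0.5, 0.5)),
]

# A hypothetical mapping from discrete codes to word tokens, standing
# in for the language-model stage of the pipeline.
CODE_TO_WORD = {0: "the", 1: "author", 2: "writes"}

def quantize(feature):
    """Return the code of the nearest codebook entry (Euclidean distance)."""
    return min(CODEBOOK, key=lambda entry: math.dist(entry[1], feature))[0]

def decode(features):
    """Map a sequence of EEG feature vectors to a sequence of words."""
    return [CODE_TO_WORD[quantize(f)] for f in features]

# Simulated EEG feature vectors for three time windows.
signal = [(0.85, 0.15), (0.2, 0.7), (0.45, 0.55)]
print(decode(signal))  # -> ['the', 'author', 'writes']
```

The point of the intermediate discrete codes is that the downstream language stage only ever sees a finite vocabulary of symbols, rather than raw continuous brainwave values.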
"It is the first to incorporate discrete encoding techniques in the brain-to-text translation process, introducing an innovative approach to neural decoding. The integration with large language models is also opening up new frontiers in neuroscience and AI," said Prof Lin.
System currently shows only 40% accuracy
The researchers tested the system on 29 participants. That sample size, they note, means their results are "likely to be more robust and adaptable than previous decoding technology that has only been tested on one or two individuals…"
There are limitations, though. When dealing with nouns, which represent people, places, or things, the model tends to produce synonyms rather than precise translations. In one example, where the original phrase is 'the author', the model might generate a semantically similar term like 'the man' instead of the accurate translation.
"We think this is because when the brain processes these words, semantically similar words might produce similar brain wave patterns. Despite the challenges, our model yields meaningful results, aligning keywords and forming similar sentence structures," said Yiqun Duan, first author of the study.
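Duan's explanation can be illustrated with a toy nearest-prototype decoder: if two semantically similar nouns evoke near-identical feature vectors, a slightly noisy reading can land closer to the synonym's prototype than to the intended word's. The vectors and noise values below are invented purely to show the failure mode.

```python
# Toy illustration (invented numbers) of the synonym-confusion issue:
# semantically close words have near-identical hypothetical prototypes,
# so noise can flip the decoded word to a synonym.
import math

# Hypothetical prototype feature vectors for two semantically close nouns.
PROTOTYPES = {
    "author": (0.60, 0.40),
    "man": (0.58, 0.44),
}

def nearest_word(feature):
    """Pick the word whose prototype is closest to the measured feature."""
    return min(PROTOTYPES, key=lambda w: math.dist(PROTOTYPES[w], feature))

# The participant reads "author"; sensor noise perturbs the measurement.
clean_reading = (0.60, 0.40)
noisy_reading = (0.58, 0.45)

print(nearest_word(clean_reading))  # -> 'author'
print(nearest_word(noisy_reading))  # -> 'man'
```

Because the prototypes sit so close together, even small measurement noise changes which one wins, which is consistent with the keyword-level rather than word-exact matches the team reports.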
The system's translation accuracy currently sits at around 40%, but the team hopes to raise it to 90%.
The study is available on arXiv and is yet to be peer-reviewed.