
### Utilizing AI and EEG Tech to Translate Quiet Reflections into Written Words

Researchers have created a revolutionary system that can non-invasively convert silent thoughts into text.

For individuals who have lost the ability to speak due to illness or injury, researchers have developed a pioneering system that converts silent thoughts into written text, opening up new avenues for communication.

The system pairs an EEG cap that monitors brain activity with an AI model named DeWave, which translates the recorded signals into language. This streamlined setup achieves state-of-the-art EEG translation performance, surpassing earlier techniques that required invasive surgery or lengthy MRI scans.
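In rough pseudocode, the end-to-end flow is simple: the cap records an EEG epoch and the model maps it to a sentence. The Python sketch below illustrates only that flow; `DeWaveModel`, its `translate` method, and the array shapes are hypothetical stand-ins, not the authors' released API.

```python
# Minimal, hypothetical sketch of the inference flow: cap EEG -> DeWave -> text.
import numpy as np

class DeWaveModel:
    """Stand-in for the trained EEG-to-text model (hypothetical API)."""
    def translate(self, eeg_epoch: np.ndarray) -> str:
        # Real pipeline: encode raw waves, quantize them into discrete
        # codes, then decode the codes into a sentence with a language model.
        return "<decoded sentence>"

# Dummy epoch: 64 channels x 2 seconds at 500 Hz (illustrative numbers).
eeg_epoch = np.random.randn(64, 1000)
print(DeWaveModel().translate(eeg_epoch))
```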

The technology has practical implications for controlling devices such as robotic arms, and it shows promise in improving human-machine interaction and in aiding individuals who cannot communicate verbally.

Key Highlights:

  1. An EEG cap records brain activity while the AI model DeWave converts the signals into words and sentences.
  2. The technology aims to reach the efficiency levels of traditional language translation programs and has demonstrated an approximate 40% translation accuracy on the BLEU-1 scale.
  3. It has been tested on 29 participants with diverse EEG patterns, offering a more adaptable and less intrusive alternative to earlier technologies.

The University of Technology Sydney is at the forefront of this advancement.

Researchers from the GrapheneX-UTS Human-centric Artificial Intelligence Centre at the University of Technology Sydney (UTS) have developed a portable, non-invasive system capable of interpreting silent thoughts and translating them into text. This breakthrough represents a significant leap forward in the field.

Individuals unable to speak due to conditions such as stroke or paralysis could benefit from the communication capabilities of this technology. It could also enable seamless communication between humans and machines, such as operating a robotic arm or robot.

Previously, translating brain signals into language required either electrode-implantation surgery, as with Elon Musk’s Neuralink, or MRI scans on machines that are bulky, expensive, and impractical for everyday use. Credit: Neuroscience News

The research findings will be presented as a spotlight paper at the NeurIPS conference on December 12, 2023, in New Orleans, a showcase for cutting-edge research in artificial intelligence and machine learning.

Under the leadership of Distinguished Professor CT Lin, Director of the GrapheneX-UTS HAI Centre, the study involved primary author Yiqun Duan and fellow PhD candidate Jinzhou Zhou from the UTS Faculty of Engineering and IT.

Throughout the study, participants silently read text passages while wearing a cap fitted with electroencephalography (EEG) sensors that recorded electrical activity across their scalps.

The EEG signal is segmented into distinct units that capture specific characteristics and patterns of brain activity. This segmentation is performed by the researchers' AI model, DeWave, which was trained on large quantities of EEG data to learn how to convert the signals into coherent words and sentences.
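The "distinct units" step can be pictured as vector quantization: each encoded EEG window is snapped to its nearest entry in a codebook, producing a sequence of discrete token IDs that a text decoder can consume. The sketch below uses random stand-in values and assumed dimensions; in the actual model the codebook is learned jointly with the encoder.

```python
# Vector-quantization sketch: map encoded EEG windows to discrete codes.
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 128))  # 512 discrete codes, 128-dim each
features = rng.normal(size=(20, 128))   # 20 encoded EEG windows (assumed)

# Nearest-neighbour lookup: each window becomes one discrete token ID.
dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
codes = dists.argmin(axis=1)
print(codes)  # e.g. [317  42 ...] -- a token sequence for the decoder
```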

According to Distinguished Professor Lin, this study signifies a groundbreaking effort in directly translating raw brain signals into language.

It introduces an innovative approach to neural decoding and is the first to integrate discrete encoding techniques into the brain-to-text translation process. Its fusion with large language models is also opening new frontiers for both neuroscience and AI.
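One way to picture that fusion: the discrete EEG codes are embedded and fed as the source sequence to a text decoder, which generates words one at a time. The tiny PyTorch transformer below is purely illustrative; the study pairs the codes with a large pretrained language model, and every dimension here is a made-up placeholder.

```python
# Illustrative fusion of discrete EEG codes with a text decoder (PyTorch).
import torch
import torch.nn as nn

vocab_size, n_codes, d_model = 1000, 512, 128
code_embed = nn.Embedding(n_codes, d_model)  # one vector per EEG code
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True),
    num_layers=2,
)
to_vocab = nn.Linear(d_model, vocab_size)

codes = torch.randint(0, n_codes, (1, 20))  # discrete EEG token sequence
memory = code_embed(codes)                  # "source" side for the decoder
tgt = torch.zeros(1, 1, d_model)            # start-of-sentence embedding
logits = to_vocab(decoder(tgt, memory))     # distribution over next word
print(logits.shape)                         # torch.Size([1, 1, 1000])
```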

Previous technologies for translating brain signals into language either required MRI scans, which are cumbersome, costly, and impractical for daily use, or invasive electrode implantation surgeries, akin to Elon Musk’s Neuralink.

These methods also struggle to convert brain signals into word-level segments without additional aids such as eye tracking, which limits the practicality of such systems. The new technology can be used either with or without eye tracking.

The UTS study involved 29 participants. Because EEG patterns differ between individuals, testing on a larger and more varied group makes this approach more robust and adaptable than earlier techniques, which were evaluated on only a handful of subjects.

Because the signal is captured through a cap rather than electrodes implanted in the brain, its quality is inherently noisier. Nonetheless, the research demonstrated state-of-the-art performance in EEG-to-text translation, surpassing previous benchmarks.
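Noisy cap recordings are typically cleaned before decoding; a standard first step is band-pass filtering to keep the frequency range where most brain rhythms live. The SciPy sketch below shows that generic step only; it reflects common EEG practice, not a preprocessing choice confirmed by this study.

```python
# Generic EEG band-pass filtering (common practice, assumed parameters).
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0                       # assumed sampling rate (Hz)
raw = np.random.randn(64, 5000)  # 64 channels x 10 s of dummy EEG

b, a = butter(4, [0.5, 40.0], btype="bandpass", fs=fs)
clean = filtfilt(b, a, raw, axis=-1)  # keep the 0.5-40 Hz band
print(clean.shape)                    # (64, 5000)
```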

“The model is more adept at matching verbs than nouns. In the case of nouns, however, we noticed a tendency towards synonymous pairs rather than precise translations,” noted Duan.

“We attribute this to the brain exhibiting similar wave patterns when processing words with similar semantic meanings.” Despite these challenges, he emphasized that the model produces meaningful results, aligning keywords and constructing similar sentence structures.

Currently, the translation accuracy score on the BLEU-1 scale stands at around 40%. The BLEU score measures how similar machine-translated text is to a set of high-quality reference translations, on a scale from zero to one. The researchers aim to raise this to around 90%, comparable to conventional language translation and speech recognition programs.
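For intuition, BLEU-1 counts single-word (unigram) overlap between the system's output and a reference sentence. Below is a minimal example with NLTK, using made-up sentences; `weights=(1, 0, 0, 0)` restricts the score to unigrams, matching the BLEU-1 figure quoted above.

```python
# BLEU-1: unigram overlap between a hypothesis and a reference (NLTK).
from nltk.translate.bleu_score import sentence_bleu

reference = [["the", "patient", "wants", "water"]]
hypothesis = ["the", "patient", "needs", "water"]

score = sentence_bleu(reference, hypothesis, weights=(1, 0, 0, 0))
print(f"BLEU-1 = {score:.2f}")  # 0.75: three of four unigrams match
```

This also illustrates Duan's point about synonyms: "needs" and "wants" are semantically close, yet BLEU-1 scores the pair as a miss.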

This study builds upon earlier brain-computer interface technology developed by UTS in collaboration with the Australian Defence Force, which used brainwaves to command legged robots.
