Meta has taken an important step in the development of brain-computer interfaces with Brain2Qwerty, an artificial intelligence-based system capable of converting thoughts into text. This advance, which relies on technologies such as magnetoencephalography (MEG) and electroencephalography (EEG), seeks to facilitate communication for people with motor or speech difficulties.
The research was carried out by Meta's Fundamental Artificial Intelligence Research (FAIR) lab, in collaboration with the Basque Centre for Cognition. In the study, 35 volunteers took part in trials that recorded their brain activity as they typed sentences on a keyboard.
How does Brain2Qwerty work?
Brain2Qwerty uses a deep learning model divided into three stages: first, a convolutional neural network (CNN) extracts patterns from the brain data collected by EEG or MEG; then, a transformer module analyzes the sequences and anticipates words rather than individual characters; finally, a language model improves the accuracy of the generated text.
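The three-stage flow can be illustrated with a toy sketch in Python. The stage functions below are deliberately simplified stand-ins (a single 1D convolution, one self-attention step, and language-model rescoring over candidate sentences), not Meta's actual architecture; all names and shapes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(sensor_window, kernels):
    # Stage 1 (CNN stand-in): convolve each kernel over the sensor
    # time series and stack the resulting feature maps.
    return np.stack([np.convolve(sensor_window, k, mode="valid") for k in kernels])

def attend(features):
    # Stage 2 (transformer stand-in): one self-attention step that
    # mixes information across time positions.
    x = features.T                                  # (time, channels)
    scores = x @ x.T / np.sqrt(x.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x                              # contextualized time steps

def rescore(candidates, lm_log_probs):
    # Stage 3 (language model): keep the candidate sentence the
    # language model considers most plausible.
    return max(candidates, key=lambda c: lm_log_probs[c])

window = rng.standard_normal(64)                    # one simulated MEG channel
kernels = rng.standard_normal((4, 8))               # four toy convolution kernels
feats = extract_features(window, kernels)           # shape (4, 57)
context = attend(feats)                             # shape (57, 4)
best = rescore(
    ["processor executes instruction", "procesor executs instrction"],
    {"processor executes instruction": -3.1,
     "procesor executs instrction": -9.7},
)
```

The point of the sketch is the division of labor: the convolution stage turns raw sensor traces into features, attention contextualizes them over time, and the language model cleans up the final text.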
During testing, volunteers typed sentences like “processor executes instruction” while Brain2Qwerty decoded the accompanying brain signals.
Decoding the structure of language in the brain

One of the most interesting findings of the study is that the brain does not produce words in isolation, but rather follows a hierarchical process: first it represents the context of the sentence, then its meaning, and finally it translates these concepts into syllables and letters.
This knowledge has allowed researchers to improve the AI models, making the system better at capturing the intended meaning rather than just identifying individual characters.
Challenges and Limitations of Brain2Qwerty
Although the results have been promising, the system still faces significant challenges. Accuracy depends largely on the recording technology used. MEG has proven more effective, with an average character error rate of 32%, while EEG, with its lower spatial resolution, reached a 67% character error rate.
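Character error rate is conventionally computed as the Levenshtein edit distance between the decoded text and the reference, divided by the reference length; whether the study uses exactly this definition is an assumption. A minimal implementation:

```python
def char_error_rate(reference, hypothesis):
    # Levenshtein edit distance via dynamic programming:
    # dp[i][j] = edits to turn reference[:i] into hypothesis[:j].
    m, n = len(reference), len(hypothesis)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                    # i deletions
    for j in range(n + 1):
        dp[0][j] = j                    # j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[m][n] / m

# One dropped letter in a 9-character word gives a CER of 1/9 ≈ 11%.
cer = char_error_rate("processor", "procesor")
```

At a 32% character error rate, roughly one character in three is wrong, which is why the downstream language model matters so much for producing readable sentences.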
Another problem is the size and cost of the equipment. In its current state, the technology requires a machine that consumes around 1,000 watts, weighs some 500 kg, and costs about 2 million dollars, which limits its application outside the laboratory.
Can Meta's Brain2Qwerty be applied in everyday life?

One of the main challenges for Brain2Qwerty is turning this technology into a practical, accessible tool. Meta has indicated that one of its priorities is miniaturizing the hardware, which would allow the system to be used outside of controlled environments.
Furthermore, although the system has been shown to be able to interpret thoughts, it does not do so in real time: it decodes only after the person has finished the sentence. This latency could make it difficult to use in fluid communication.
Privacy and ethical considerations
The fact that artificial intelligence can interpret thoughts raises important questions about privacy. Meta has stated that Brain2Qwerty only detects voluntary intentions to write, not spontaneous thoughts. But experts warn that the development of this type of technology requires a clear regulatory framework to prevent misuse.
The future of this technology will depend on the ability to solve these challenges and ensure that it is used ethically. For now, the focus remains purely on research, although the development of more accessible and more accurate devices could open up new opportunities in the field of assisted communication.
Brain2Qwerty's breakthrough represents a step towards merging mind and machine without the need for invasive devices. The ability to translate thoughts into text in a natural way could change how people interact with technology in the future, although there are still many obstacles to overcome before this innovation becomes an everyday reality.