
Meta: How to know what you type by measuring the magnetic and electric fields of your brain

April 18, 2025
There is still much we do not know about the human brain, but research in this area grows every day and the results are ever more spectacular. Five years ago I dedicated an article to the subject of “Robo-Rats”, whose objective was to control the free will of a rat by manipulating electrical signals in its brain. The full article can be found here: “Happiness as a form of security by being a rat robot”. In the end, our brain is a hotbed of connections that generate electric and magnetic fields that can be measured, and also injected.
Knowing that when our brain makes a decision, for example pressing a button with the right hand or with the left hand, it generates two different electrical signals, it is possible to know, a little before it happens, which physical decision a person is going to make. The goal in the robo-rat experiment was just the opposite: to influence the decision by generating activity in the right brain area.
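To make the idea concrete, here is a minimal toy sketch of such a binary "left or right" decoder. Everything in it is hypothetical: the two channels, the sample values, and the power comparison are illustrative simplifications, not the method of any real system, which would use many channels, band-pass filtering, and a trained model.

```python
# Toy sketch: classifying a "left vs right" decision from two hypothetical
# EEG channels over the motor cortex. Data and logic are illustrative only.

def predict_hand(left_cortex_samples, right_cortex_samples):
    """Predict which hand will press the button.

    Motor preparation desynchronizes activity on the opposite side of the
    brain, so (in this simplified sketch) lower mean power over the LEFT
    motor cortex suggests an upcoming RIGHT-hand movement, and vice versa.
    """
    left_power = sum(s * s for s in left_cortex_samples) / len(left_cortex_samples)
    right_power = sum(s * s for s in right_cortex_samples) / len(right_cortex_samples)
    # Lower power (desynchronization) on one side predicts the opposite hand.
    return "right" if left_power < right_power else "left"

# Hypothetical pre-movement window: left motor cortex quieter than right.
left_channel = [0.2, -0.1, 0.15, -0.2]
right_channel = [0.8, -0.9, 0.7, -0.6]
print(predict_hand(left_channel, right_channel))  # → right
```

A real decoder replaces the hand-written threshold with a classifier trained on labeled recordings, but the input/output contract is the same: a short window of brain signals in, a discrete decision out.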
With the advent of Artificial Intelligence, it is possible to train more complex models that are no longer binary, like the “left or right” example I described at the beginning, and more complex and surprising things can be done. One of them, a clear line of research, is to know what you are thinking, or what you are seeing. In this case, using functional magnetic resonance imaging (fMRI), researchers try to reconstruct the image of what a person is seeing. Basically, a person is shown many images, and the brain fMRIs associated with the moments in which the images were shown are recorded.
Next, you are shown an image similar (or not) to the ones used to train the model, and the fMRI image of your brain is captured. With that data, a Diffusion Model is run that uses the fMRI data to generate a resulting image, and surprisingly it approximates what you are seeing. The complete process can be found in the article “How to reconstruct the images in your head by means of your brain activity and Stable Diffusion”.
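The train-then-reconstruct data flow can be sketched very simply. The real work conditions Stable Diffusion on fMRI embeddings; in the sketch below, a nearest-neighbor lookup over hypothetical fMRI vectors stands in for that generative step, purely to show how recorded (fMRI pattern, image) pairs are later matched against a new scan. All vectors and filenames are invented.

```python
# Sketch of the fMRI train/test pairing idea. A cosine-similarity lookup
# stands in for the diffusion model; all data here is hypothetical.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

# Training phase: record the fMRI pattern evoked by each shown image.
training_pairs = [
    ([0.9, 0.1, 0.0], "cat.jpg"),
    ([0.1, 0.8, 0.2], "house.jpg"),
    ([0.0, 0.2, 0.9], "car.jpg"),
]

def reconstruct(test_fmri):
    """Return the training image whose fMRI pattern best matches the scan."""
    return max(training_pairs, key=lambda p: cosine(p[0], test_fmri))[1]

# Test phase: a new scan close to the "cat" pattern retrieves that image.
print(reconstruct([0.85, 0.2, 0.05]))  # → cat.jpg
```

The diffusion-based approach goes further than retrieval because it can generate images it never saw during training, but the pairing of brain activity with visual stimuli is the same foundation.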

Brain-to-Text

The article I want to talk about today comes from researchers at Meta, who last month published a study with the Basque Center on Cognition, Brain and Language, capturing magnetoencephalography (MEG) data, in addition to electric fields via electroencephalography (EEG), to train a convolutional neural network.


The magic is that, just as you can use Diffusion Models to reconstruct what you are seeing, you can use language models to reconstruct what you are writing, i.e., Transformers to refine the model, and LLMs to build a much tighter output.
In this way, the model is first trained with people who are asked to type a word that comes up on the screen. At that moment, the MEG and EEG data are captured, and the model, called Brain2Qwerty, is trained using the convolutional neural network and the Transformers to produce a language model that will be in charge of the prediction.
Thus, once the training is finished, the temporal sequences of the MEG and EEG are given as input data to the language model, and the LLM responds with a prediction of what the subject, whose data is being captured, is typing on the screen.
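The refinement stage of such a pipeline can be sketched as follows. Assume (hypothetically) that the neural decoder has already turned MEG/EEG windows into noisy per-keystroke guesses; a language model then cleans them up. Here a tiny edit-distance lookup over a fixed vocabulary stands in for the LLM rescoring, which is a deliberate simplification of what Brain2Qwerty actually does.

```python
# Sketch of the correction stage: noisy per-keystroke predictions from the
# neural decoder are replaced by the closest known word. The decoder output
# below is hypothetical; a vocabulary lookup stands in for the LLM.

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

VOCAB = ["hello", "world", "brain", "typing"]

def refine(noisy_word):
    """Replace the decoder's noisy guess with the closest vocabulary word."""
    return min(VOCAB, key=lambda w: edit_distance(w, noisy_word))

# Hypothetical raw output of the MEG/EEG keystroke decoder:
decoded = ["helko", "wprld"]
print([refine(w) for w in decoded])  # → ['hello', 'world']
```

An actual LLM does this probabilistically over whole sentences rather than word by word against a closed list, which is why it can recover text even when many individual keystrokes are decoded wrongly.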
As a result, the predictions obtained are quite correct and accurate, although not perfect: training is affected by errors, by differences between people, and by the type of words. But they are certainly good enough that we will soon have more than acceptable results that could connect the human brain to information systems through MEG and EEG measurements that are neither intrusive nor require surgery.
At the same time, as you can imagine, this will in the future lead to language models trained so well that they will know what we are typing from remote measurements, like a TEMPEST attack in practice, from which it will be difficult to protect ourselves. I don’t know whether to go put tinfoil on my hat anymore…
But it also opens other lines of research into brain diseases and for people with communication difficulties, which may cease to be a barrier in the future: thanks to these advances, they will be able to communicate fluently, more easily, or at least in a useful way with their environment. Who knows what world we are heading toward.
Greetings Evil Ones!
Chief Digital Officer of Telefónica and CEO of Telefónica Innovación Digital. www.elladodelmal.com
