Artificial intelligence to read your thoughts: closer every day

Technology that can read the mind? The mere idea might seem to belong squarely in the realm of science fiction. Yet a team of Russian scientists, made up of researchers from the company Neurobotics and the Moscow Institute of Physics and Technology (MIPT), has designed a technology capable of reconstructing the images a person is visualizing at that moment, and of doing so without the need for invasive brain implants. In fact, the technique only requires standard scalp electrodes, the kind used to record a subject's electroencephalogram (EEG). From this signal, the scientists use neural networks to «translate» the EEG in real time and recreate the visual information the brain is perceiving.

In the words of Grigory Rashkov, junior researcher at MIPT and programmer at Neurobotics: «The electroencephalogram is a collection of brain signals recorded from the scalp. Until now, researchers believed that studying brain processes through the EEG was like trying to work out the internal structure of a steam engine by analyzing the smoke left behind by a steam train: we didn't expect it to contain enough information to even partially reconstruct an image observed by a person. However, it turned out to be quite feasible».

The three phases of the experiment

In the first phase of the experiment, the researchers asked the study subjects to watch fragments of YouTube videos from five different categories, finding that the brain wave patterns were different for each category. In the second phase, the researchers developed two neural networks: one specialized in generating images from random «noise» (that is, the grain that appears in a frame when it loses sharpness), and another responsible for generating similar «noise» from the data provided by the EEG recordings.

In the final phase, the two networks were trained to work together, making it possible to convert the noise derived from the EEG into real images, similar to those the volunteers had watched in the first phase and which had been processed by the first neural network. The result: in 90% of cases, the system generated images that could be easily categorized.
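The three-phase pipeline described above can be sketched in miniature. This is a simplified illustration, not the researchers' actual system: the dimensions, the linear stand-ins for the two neural networks, and all function names below are hypothetical, chosen only to show how an EEG-to-latent mapper composes with a latent-to-image decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- the study's real network dimensions are not given here.
EEG_CHANNELS, EEG_SAMPLES = 16, 128   # one EEG window
LATENT_DIM = 32                       # shared «noise» / latent space
IMG_H, IMG_W = 8, 8                   # tiny stand-in for a video frame

# First network (sketch): decodes a latent «noise» vector into an image.
# In the study, this network was trained to generate category-like frames.
W_dec = rng.normal(size=(LATENT_DIM, IMG_H * IMG_W)) * 0.1

def decode_latent(z):
    """Map a latent vector to a (tiny) image; a linear layer stands in
    for the trained image generator."""
    return np.tanh(z @ W_dec).reshape(IMG_H, IMG_W)

# Second network (sketch): projects a raw EEG window into the same
# latent space that the decoder consumes.
W_enc = rng.normal(size=(EEG_CHANNELS * EEG_SAMPLES, LATENT_DIM)) * 0.01

def eeg_to_latent(eeg):
    """Flatten an EEG window and project it into the latent space."""
    return eeg.reshape(-1) @ W_enc

# Composing the two stages gives the EEG -> image pipeline of phase three.
eeg_window = rng.normal(size=(EEG_CHANNELS, EEG_SAMPLES))
image = decode_latent(eeg_to_latent(eeg_window))
print(image.shape)  # (8, 8)
```

In the actual experiment both stages were trained jointly, so the encoder learned to emit latent vectors the decoder could turn into recognizable frames; the untrained random weights here only demonstrate the data flow.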

A door to more affordable and less invasive brain-computer interfaces

Vladimir Konyshev, director of the NeuroRobotics Laboratory at MIPT, said: «This project focuses on creating a brain-computer interface that would allow patients who have suffered a stroke to control an arm exoskeleton for neurological rehabilitation, or paralyzed patients to drive, for example, an electric wheelchair».

Konyshev also considers the impact of this progress on projects such as Neuralink, the brain-computer interface Elon Musk is developing: «The invasive neural interfaces designed by Elon Musk face the challenges of complex surgery and rapid deterioration due to natural processes: they oxidize and fail within a few months. We hope that, in time, more affordable neural interfaces can be designed that do not require implantation».

