25 November 2019

Whatever Happened to… Mind Reading?


The longing of humans to know what is in the minds of their fellow beings is so irrepressible that a name has been given to something that does not exist: telepathy, which has become a staple of magic performances. And yet, reading the mind is not entirely a fantasy, but something that is, at least in theory, possible. Thoughts arise from a physical substrate, the electrophysiological processes of the brain, so deciphering them from that cerebral fingerprint should be a matter of technology. Today, many research groups are already following this path.

Recently, the growing likelihood of being able to read the mind has been causing a stir because of the interest that this effort has aroused in Silicon Valley. The social network Facebook, which funds research into brain-activity scanning technologies, announced last September the acquisition of the startup CTRL-Labs, whose work focuses on a wristband capable of picking up, at the wrist, the electrical signals sent by the brain in order to control a virtual hand. Meanwhile, Neuralink, a company owned by serial entrepreneur Elon Musk, is developing brain implants that act as brain-computer interfaces and whose ultimate purpose, according to the company’s founder, is to merge the human mind with artificial intelligence (AI). These moves have raised the suspicions of those who fear that even our minds will no longer be a stronghold of privacy.

What is certain is that the first rudimentary technological attempt to know someone else’s thoughts has been with us for nearly a century. The 1930s saw the first use of the polygraph, popularly known as the lie detector, which measures certain physiological parameters. At the end of the last century, Lawrence Farwell developed Brain Fingerprinting, a new and controversial lie detector that directly measures the electrical activity of the cerebral cortex through electroencephalography (EEG).

Electroencephalography for transmitting thoughts

EEG, which uses a helmet of electrodes attached to the scalp, is a technique commonly used in medicine to diagnose various brain disorders. Starting in the 1970s, before personal computing became popular, researchers began to explore its potential for allowing the brain to communicate with a computer. Nowadays, many researchers continue to use this brain-signal acquisition system as a possible tool for transmitting thoughts from one person to another.
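
To give an idea of what such signal acquisition involves, here is a minimal, hypothetical sketch in Python of the kind of processing that underlies simple EEG-based interfaces: band-pass filtering a scalp signal and measuring its power in a frequency band of interest. The sampling rate and the simulated signal are assumptions made for the example; real systems use multi-channel amplifiers and far more elaborate pipelines.

```python
# Hypothetical sketch: band-pass filtering a single EEG channel and estimating
# its power in the alpha band (8-12 Hz), one of the simplest measures used in
# EEG-based interfaces. The "signal" here is simulated noise.
import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 256                        # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)    # 10 seconds of data
eeg = np.random.randn(t.size)   # stand-in for one EEG channel

# Band-pass filter to the alpha band
b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
alpha = filtfilt(b, a, eeg)

# Estimate spectral power in the band with Welch's method
freqs, psd = welch(alpha, fs=fs, nperseg=fs * 2)
band_power = psd[(freqs >= 8) & (freqs <= 12)].mean()
print(f"Mean alpha-band power: {band_power:.4f}")
```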

Two such researchers, Rajesh Rao and Andrea Stocco of the University of Washington, have succeeded in using the EEG readings of one person (the “respondent”), who looks at one of two lights corresponding to “yes” and “no”, to inform a distant second subject (the “inquirer”) of the answers to their questions, in a game similar to “20 Questions” whose goal is to identify an object that only the respondent can see; the inquirer receives the responses directly in the brain through a transcranial magnetic stimulation device that induces the perception of a small flash of light. Previously, Rao and Stocco had managed to have one subject mentally control another, getting them to activate the controller of a video game that only the first could see on their screen. More recently, the researchers have connected the brains of three people so that they can work collaboratively. In Stocco’s words, what his experiments achieve is “taking signals from the brain and with minimal translation, putting them back in another person’s brain.”
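
As an illustration only, the binary “yes/no” decoding step in an experiment of this kind might look something like the sketch below, which assumes the two lights flicker at different frequencies and infers the attended light from which frequency dominates the EEG spectrum (a steady-state visually evoked potential). The frequencies, sampling rate and simulated data are assumptions made for the example, not values taken from the study.

```python
# Hypothetical sketch of the "yes/no" decoding step: the respondent attends to
# one of two lights flickering at different frequencies, and the answer is
# inferred from which frequency dominates the EEG spectrum.
import numpy as np
from scipy.signal import welch

FS = 256               # assumed sampling rate in Hz
F_YES, F_NO = 12, 15   # assumed flicker frequencies of the two lights

def decode_answer(eeg_epoch: np.ndarray) -> str:
    """Return 'yes' or 'no' depending on which flicker frequency shows more power."""
    freqs, psd = welch(eeg_epoch, fs=FS, nperseg=FS * 2)
    power_yes = psd[np.argmin(np.abs(freqs - F_YES))]
    power_no = psd[np.argmin(np.abs(freqs - F_NO))]
    return "yes" if power_yes > power_no else "no"

# Simulated epoch: a 12 Hz oscillation buried in noise should decode as "yes"
t = np.arange(0, 4, 1 / FS)
epoch = np.sin(2 * np.pi * F_YES * t) + 0.5 * np.random.randn(t.size)
print(decode_answer(epoch))
```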

EEG is demonstrating amazing possibilities when combined with AI systems capable of learning to interpret brain signals. In a study yet to be published, a team from the Moscow Institute of Physics and Technology and the Russian company Neurobotics has managed to reconstruct images from videos that subjects are watching, based on their EEG readings. Although the quality of the reconstructions still needs to improve, the results are impressive. “We didn’t expect the EEG to contain sufficient information to even partially reconstruct an image observed by a person,” said study co-author Grigory Rashkov. “Yet it turned out to be quite possible.” And as co-author Anatoly Bobe points out to OpenMind, optimization of the algorithms will improve the results.
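
The general idea, reduced to a toy example, is that of a decoder trained to map EEG-derived feature vectors to images. The sketch below, written with PyTorch on random stand-in data, is only meant to convey that idea; the feature size, architecture and data are invented and bear no relation to the models actually used in the study.

```python
# Toy sketch of decoding images from EEG features with a neural network.
# Everything here (dimensions, architecture, "dataset") is invented for
# illustration.
import torch
import torch.nn as nn

EEG_FEATURES = 128   # assumed size of an EEG feature vector per trial
IMG_SIZE = 32        # reconstructions are 32x32 grayscale images in this toy

decoder = nn.Sequential(
    nn.Linear(EEG_FEATURES, 512),
    nn.ReLU(),
    nn.Linear(512, IMG_SIZE * IMG_SIZE),
    nn.Sigmoid(),    # pixel intensities in [0, 1]
)

# Random stand-ins for (EEG features, target image) training pairs
eeg_features = torch.randn(64, EEG_FEATURES)
target_images = torch.rand(64, IMG_SIZE * IMG_SIZE)

optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):   # short training loop on the toy data
    reconstruction = decoder(eeg_features)
    loss = loss_fn(reconstruction, target_images)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"Final reconstruction loss: {loss.item():.4f}")
```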

Invasive brain implants

In other cases, researchers pick up the signals with invasive brain implants, such as those developed by Elon Musk’s company, or through functional magnetic resonance imaging (fMRI), a technique that detects brain activity through blood flow. With fMRI, a team from the University of California at Berkeley demonstrated results similar to those of the Russian experiment in the reconstruction of images viewed by the subjects. fMRI has also allowed researchers, for example, to identify the music a person is listening to from their brain activity.

However, using fMRI requires expensive equipment into which the participants must enter and remain motionless, sometimes for hours. “The key benefits of non-invasive EEG are the simplicity of use and the availability to a wider group of people and in a wider range of applications,” says Bobe. “Not many people will accept a brain implant, and there is not much activity that one can do inside a tomograph.”

But beyond the possibility of learning someone’s thoughts from a simple reading of their neural activity, and the fears about possible malicious uses, these new technologies could bring enormous benefits for countless medical conditions: the ability to decode cortical activity makes it possible to control robotic prostheses or voice synthesizers. This year, researchers from the University of California at San Francisco successfully used electrodes implanted in the brain to control an anatomically complete virtual vocal tract with the mind, and also managed to decode a dialogue between two people from their brain signals.

Unlike voice synthesizers that work by selecting letters one by one, such as the one used by physicist Stephen Hawking, this system would allow users to translate, directly into voice and in real time, the mental commands that control the movements of the larynx, jaw, tongue and lips. According to the director of the study, Edward Chang, “this is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss.”

This is just one example. The reading of brain waves has already allowed a violinist with brain damage to mentally transmit her compositions to a computer and a quadriplegic woman to fly an airplane in a simulator, and work is even being done on the development of thought-driven cars. Mind-reading technologies have the potential to open up a world of future possibilities, provided we can safeguard the thoughts we don’t want anyone to read.

Javier Yanes

@yanes68
