Data generated by your internet browser, by your geographical location, by filling out online forms or by typing on your keyboard can reveal very private information, such as your sexual orientation or your political ideology. Imagine what information may be gleaned with access to your thoughts. The development of brain-machine interfaces forces us to consider this issue, as these devices are capable of connecting computers directly to the brain.
Science and neurotechnology haven’t come far enough to enable mind-reading, but consumer-grade devices can already register subtle neural signals. Electroencephalography (EEG) headsets, for instance, are used in marketing studies to analyse emotions and unconscious reactions to certain products or events. “These mental data can be processed to reveal critical information about a person,” says Pablo Ballarín, cofounder of the cybersecurity firm Balusian.
The nature of neurorisks
Neurotechnology’s security problems aren’t new: there could be cases of harassment, organized crime or personal data trafficking, just like those already happening in other digital sectors. What is new, however, is the nature of neural data. Because they are generated directly in the brain, they can encode sensitive medical information as well as clues to our identity and to the intimate mechanisms guiding our personal choices.
To demonstrate how real this risk is, researchers have attempted to hack commercially available neurotech, aiming to extract information that would be useful to cybercriminals and thus expose potential security flaws. Teams of scientists and technologists have shown that it is possible to plant spyware in a brain-machine interface—specifically, one designed to control video games with the mind—and use it to steal information from the user.
By inserting subliminal images into the video game, the hackers were able to probe the player’s unconscious mental reaction to specific stimuli, such as postal addresses, bank details or human faces. In this way, they gleaned information including a credit card PIN and a place of residence.
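The core of such an attack is simple: flash candidate stimuli and look for the evoked response the brain produces when it recognizes one. The sketch below is purely illustrative—it is not the researchers’ actual code, and the “epochs” are synthetic stand-ins for recorded EEG trials—but it shows how ranking averaged response amplitudes can single out a familiar PIN digit.

```python
# Illustrative sketch of a recognition-response side channel.
# The amplitudes below are synthetic stand-ins for EEG epochs; a real
# attack would use per-trial evoked responses recorded by the headset.
import numpy as np

rng = np.random.default_rng(0)

def strongest_response(epochs_by_stimulus):
    """Return the stimulus whose averaged evoked response is largest.

    epochs_by_stimulus: dict mapping a candidate stimulus (e.g. a PIN
    digit) to an array of per-trial response amplitudes.
    """
    means = {stim: float(np.mean(trials))
             for stim, trials in epochs_by_stimulus.items()}
    return max(means, key=means.get)

# Synthetic experiment: digit 7 is "familiar" to this user, so its
# trials carry a larger deflection than the other nine candidates.
epochs = {d: rng.normal(1.0, 0.3, size=40) for d in range(10)}
epochs[7] = rng.normal(2.5, 0.3, size=40)

print(strongest_response(epochs))  # → 7
```

Averaging over many trials is what makes the attack work: individual responses are noisy, but the recognition signal survives the average while the noise does not.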
Ballarín, who is a telecommunications engineer and a cybersecurity expert, has tested a few devices himself. He hacked a brand-name EEG headset and managed to intercept the neural data sent by the device to a paired cell phone. “If you can process these signals, you can get information regarding disease, cognitive capabilities or even a user’s tastes and preferences, which may be something quite personal, like sexual preferences that you wouldn’t even discuss with a partner,” he warns.
In the worst-case scenario, a bidirectional interface—one that not only reads brain signals but also emits them, for example as nerve pulses sent to a prosthetic arm, a wheelchair or the nervous system itself—could be hacked to physically harm the user or another person. As early as 2007, doctors disabled the wireless connectivity of the implanted heart defibrillator of Dick Cheney, then vice president of the United States, to prevent a possible assassination attempt by hacking of the device.
Safeguarding mental privacy
Measures to safeguard neural data must be both technological and political. There is international regulation in place—notably the General Data Protection Regulation (GDPR) in Europe—which in theory limits the processing and sale of mental information, like any personal data, to preserve privacy. However, companies don’t always explain how they anonymize data. Even when there is transparency, Ballarín points out that “it’s easy to track anonymized information back to specific people”.
In an article published in the journal Nature and titled “Four ethical priorities for neurotechnologies and AI”, Columbia University neurobiologist Rafael Yuste and colleagues suggested three concrete measures. First, to prevent the trafficking of neural data, they propose that not sharing such information be the default. Only with the express consent of individual consumers should companies be allowed to hand it over to third parties.
Even then, the authors recognize that data volunteered by some users may be used to draw “good enough” conclusions about others who place greater value on their privacy. “We propose that the sale, commercial transfer and use of neural data be strictly regulated. Such regulations—which would also limit the possibility of people giving up their neural data or having neural activity written directly into their brains for financial reward—may be analogous to legislation that prohibits the sale of human organs,” Yuste and his colleagues write in the article.
Finally, they suggest technological measures, such as blockchain and federated learning, to avoid the processing of neural signals in centralized databases. Along these lines, a different group of researchers from the University of Washington has suggested that neurotechnological devices perform an in-situ separation of brain wave components before transmitting them. This way, a brain-machine interface can limit the information relayed to its control device (usually a paired cell phone or computer) and only transmit information that is relevant to the task at hand.
For instance, an EEG sensor designed to control a wheelchair would only transmit the component of brain waves which encodes information pertaining to movement intent, withholding other components related to, say, emotional sensations. By limiting the storage and transmission of raw neural data, opportunities for criminals to hijack useful information are limited as well.
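One way to picture this separation is as frequency-band filtering performed on the device itself. The sketch below is a minimal illustration, not the Washington group’s actual design: it assumes a 250 Hz sample rate and treats the 8–30 Hz band as the movement-relevant component, discarding everything else before the signal would leave the headset.

```python
# Minimal sketch of on-device component separation: keep only the
# frequency band assumed to carry movement intent (8-30 Hz here) and
# withhold the rest. Band choice and sample rate are illustrative
# assumptions, not taken from any real headset.
import numpy as np

FS = 250  # assumed EEG sample rate in Hz

def task_relevant_component(raw, low=8.0, high=30.0):
    """Zero out frequency components outside the task-relevant band."""
    spec = np.fft.rfft(raw)
    freqs = np.fft.rfftfreq(raw.size, 1 / FS)
    spec[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spec, n=raw.size)

# Synthetic recording: a 12 Hz "movement" rhythm plus a 4 Hz component
# (of the kind that could carry unrelated, private information).
t = np.arange(0, 4, 1 / FS)
motor = np.sin(2 * np.pi * 12 * t)
theta = np.sin(2 * np.pi * 4 * t)

out = task_relevant_component(motor + theta)
# `out` retains the 12 Hz rhythm; the 4 Hz component never leaves
# the device.
```

The design choice is the point: because the filtering happens before transmission, a compromised phone or computer only ever sees the task band, not the raw signal.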
Having said that, external researchers have pointed out that this technique places high performance demands on the neurotechnological devices themselves, since they need processing capability in addition to their brain-wave sensors. Moreover, restricting access to the raw signal limits opportunities for third-party software development.
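The other technological measure Yuste and colleagues mention, federated learning, follows the same privacy principle: raw neural data never leaves the device. The toy sketch below illustrates the idea with a made-up linear model and synthetic data—each device trains locally and shares only model weights, which a coordinator averages.

```python
# Toy sketch of federated averaging. The linear model and synthetic
# per-device data are illustrative assumptions; the point is that only
# weights are shared, never the raw (here, stand-in "neural") data.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([0.5, -1.2])  # ground truth the devices jointly learn

def local_update(w, X, y, lr=0.1, steps=50):
    """Run gradient descent on one device's private data."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three devices, each holding private data that stays on-device.
devices = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=100)
    devices.append((X, y))

w = np.zeros(2)
for _ in range(5):  # federated rounds
    # The coordinator averages locally trained weights, never the data.
    w = np.mean([local_update(w, X, y) for X, y in devices], axis=0)

print(w)  # close to true_w, learned without centralizing any data
```

A centralized database of neural signals is never built, so there is no single honeypot for an attacker to raid.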
The gauntlet has been thrown down for solving neurotech’s privacy issues. According to Ballarín, cybersecurity solutions must come through international regulation; however, legal measures always arrive late, to “solve old problems”, so manufacturers and developers must anticipate the blind spots in their products. Ultimately, it is consumers who will pass judgement through their choices—but for that they must be informed of the risks and know what is in their best interests.