In recent decades, neurotechnology has evolved from a basic brain-imaging technique into a complex discipline focused on observing and manipulating the mind. The field is entering a new, uncharted phase in which thought-reading and telepathy could become a reality. And as digital giants like Facebook join the race to create brain-machine interfaces, experts are calling for a debate on the moral dilemmas of neurotech. They caution against gung-ho technological optimism: the question is no longer what we can do, but what we should do.
The American company Neuralink, founded by the billionaire Elon Musk, has an ambitious plan to connect human minds to the internet through fine wires embedded in the brain. Neuralink's initial goal is to deploy this brain-machine interface for therapeutic purposes (for example, so that people with amputations can control bionic prostheses), but its long-term aim is to enhance human capabilities by achieving "symbiosis with artificial intelligence," according to Musk.
Neuralink's first tests on human subjects are scheduled for mid-2020, while the transhumanist project will require many more years to get off the ground, if it ever does. Even so, it is already time to ponder the possible side effects of the company's planned interventions on users' brains: Could they become susceptible to brain-hacking? To a breach of intimate neural data? To a change of personality? These are all real concerns in what Marcello Ienca, a bioethicist at ETH Zurich in Switzerland, has dubbed the phase of "neurocapitalism."
Facebook, meanwhile, revealed in 2017 its aim to develop a brain-wave sensor that would allow users to type 100 words per minute using only their thoughts (generally slower than touch-typing on a mechanical keyboard, but faster than writing on a smartphone screen). Ultimately, Mark Zuckerberg's company wants to market a non-invasive electroencephalography (EEG) headband that could be used to control music applications or to interact with virtual-reality systems.
The future of brain machines
Some analysts believe that investors have inflated expectations for the future of neurotechnology, as the field still lacks the neuroscientific knowledge and non-invasive technology required to achieve its proposed short-term goals. However, those goals aren't science fiction, either: advances in recent years, for example in neuromarketing and neurogaming, are proof of the surprising speed at which the field is evolving, a speed that will only increase with the enormous investments of these companies.
In 2019, Facebook announced its first results from a collaboration with the University of California, San Francisco: "voice decoders" that can decipher simple communications by analyzing the firing of neurons beneath the skull. The study recruited patients with epilepsy who were being prepared for invasive surgery; the research team showed that, with an implant, they could read the patients' minds and distinguish among several possible answers to the question "What musical instrument do you prefer to listen to?" (for example, "violin").
The results are promising from a scientific and technological standpoint, but alarming from a social one. Just weeks before the publication of that study, the United States Federal Trade Commission (FTC) hit Facebook with a $5 billion fine for allowing the consulting firm Cambridge Analytica to create political profiles of the social network's users without their consent.
What privacy or security guarantees are available to users who allow these companies into their brains? For now, none — that’s the problem experts are warning about. One such expert is Nita Farahany, professor of neuroethics at Duke University. In an interview with MIT Technology Review, she says: “To me the brain is the one safe place for freedom of thought, of fantasies, and for dissent. We’re getting close to crossing the final frontier of privacy in the absence of any protections whatsoever.”
The declaration of “neurorights”
Activists like Marcello Ienca and Rafael Yuste, a neuroscientist at Columbia University, agree: the political and ethical debate is lagging behind technological development. They urge scientists, technologists and policy-makers to anticipate possible risks and to draft specific legislation. In their view, a series of neurorights should be enshrined by law to safeguard users and to guide the development of brain technology towards a future that is beneficial to all.
In an article for the academic journal Life Sciences, Society and Policy, Ienca and his colleague Roberto Andorno propose four new human rights for the age of neurotechnology. First, the right to cognitive liberty, which would guarantee people's freedom to decide whether to use novel brain-machine interfaces, for instance when employers or governments request it. Second, the right to mental privacy: to choose when to share neural data and under which conditions. Ienca foresees the possibility of eavesdropping on thoughts, of involuntary self-incrimination, and of the buying and selling of neural data, a particularly pressing concern since the Cambridge Analytica scandal.
Third, Ienca and Andorno propose a right to mental integrity, which would protect users against physical or psychological harm caused by neurotechnologies. And finally, the right to psychological continuity, which would safeguard personal identity in a world where machines are capable of altering personality without prior consent. The proposals of Rafael Yuste, laid out by the NeuroRights Initiative group that he leads at Columbia University, largely overlap with these, but they include two additional rights: the right to equal access to mental augmentation and the right to protection from algorithmic bias and discrimination.
In an interview with the newspaper El País, Yuste points out that the moral responsibility lies, first and foremost, with technologists and neuroscientists like him, who research how the brain works and how it may be manipulated. Yuste has a realistic view of what neurotechnology may achieve in the coming years, and he fears that the future will catch us unprepared. “We have to go directly to society and to policy-makers to avoid abuse,” he says. “We have a historical responsibility. We are at a time when we can decide what kind of humanity we want.”