The British-Irish artist Neil Harbisson is a cyborg. That is how he identifies and how he claims the UK government recognises him, given he was allowed to appear in his passport photograph with the antenna which he had permanently attached to his skull in 2004. His whole life, Harbisson has only been able to see the world in grayscale—he suffers an extreme form of color blindness known as achromatopsia. However, he can hear frequencies associated with different colors through his implant, which also enables him to receive files and calls over the internet and from satellites straight into his brain. Such is his level of dependence and integration with this technology that he no longer considers himself fully human: the antenna has altered his identity.
In this regard, Harbisson is not alone. As the field of neurotechnology has progressed, more people have connected machines to their brains for medical purposes, a practice which has occasionally resulted in deep changes to their sense of self. “It became me, […] with this device I found myself,” is how one woman described the skull implant which halted her epileptic seizures. An array of electrodes on the surface of her brain warned her—by sending a signal to a handheld device which she carried at all times—of neuronal activity linked to an impending seizure. She could then take medication or seek a safe location.
In a paper published in 2017, four bioethicists interviewed this woman and other users of the same brain-machine interface, in order to understand how the implant had modified their perceived sense of self. “I don’t believe these interventions change who a person is, who they fundamentally perceive themselves to be,” says neurologist and bioethicist Judy Illes, one author of the study and director of the Neuroethics Canada center at the University of British Columbia (Vancouver, Canada). “They may change many features around identity, but I believe that the self is resilient to the intervention,” she adds.
The report collects accounts from seven patients whose experiences encompass a bewildering range of reactions and feelings linked to the neurotechnological implant. At one extreme is the woman who “found herself” thanks to the device: she was glad for the control over her condition which the technology afforded, and glad for the opportunity to lead an active and social life. “With the device I felt like I could do anything,” she boasted. At the opposite extreme, one patient claimed to have suffered depression due to her newfound dependence on the technology. “[The device] made me feel I had no control. […] it made me feel that I was always different [from] everyone, not just in the moment of the seizure,” she told the researchers.
Questioning Free Will
These experiences are poorly understood, but they pose serious ethical questions which Illes and her colleagues are already debating with patients, regulators and technologists. There is little consensus as to whether brain implants really do change identity. Philosopher S. Matthew Liao, director of the Center for Bioethics at New York University, does believe neurotechnologies have the potential to “radically change our sense of where we come from, what we do and, importantly, who we are”. However, this is a minor point of dissent—all experts agree that brain-machine interfaces can interfere with patients’ minds enough to warrant concern over their sense of agency and wellbeing.
It could be the case that “a patient’s actions are not what they might have been prior to implantation, for example through erratic behaviour or mood that is out of the ordinary”, Illes says: “This raises questions about whether the individual is responsible for those actions”. Yet sometimes, the treatment seeks to directly change a patient’s mood or actions. How is the efficacy and morality of such interventions to be judged?
When brain-machine interfaces are used to treat epilepsy or motor disorders such as Parkinson’s disease, “the question about authenticity [of the patient after the intervention] is less salient,” Illes points out. In these cases, the aim of the therapy is to stop the tremors or seizures caused by the disorder, so any other physiological or psychological change undergone by the patient can be identified as an undesirable side-effect. Among those patients who have received deep brain stimulation (or DBS, which involves electrodes inserted in the skull) as a treatment for Parkinson’s disease, Nature News has reported exceptional cases of people developing hypersexuality, gambling addiction and other impulsive behaviours.
On the other hand, the same implants are already used experimentally to treat psychiatric disorders like depression, obsessive-compulsive disorder and schizophrenia. Illes considers that, in these cases, “what is different is the way in which we evaluate the patient for the intervention, in terms of consent and capacity”. In a focus group with people who had received DBS as a treatment for major depression and obsessive-compulsive disorder, one patient said: “You just wonder how much is you anymore, and you wonder: How much of it is my thought pattern? How would I deal with this if I didn’t have the stimulation system? You kind of feel artificial.”
A Permanent Psychological Change
In that consultation, patients were asked what problems they thought could arise from the use of a so-called “closed-loop” brain-machine interface that regulated itself to treat depression (as opposed to DBS, which is “open-loop” because it requires someone to purposefully switch on the electrical stimulation). The patients warned that such a system could be problematic if it artificially maintained a “constant state of subjective well-being”. It would “prevent them from experiencing a ‘normal’ range of emotions,” reported the researchers who conducted the inquiry: “For example, feeling sad at a funeral”.
There is a dearth of empirical data regarding the onset and duration of psychological changes associated with neuronal implants. Liao foresees important moral dilemmas when neurotechnology enables scientists to directly modify, plant or erase human memories, as is already possible in experimental interventions with rodents. Memory has an important role in shaping identity, Liao argues. If an individual’s memories have historical, social or legal relevance, then any neuronal manipulation he or she submits to will not only have personal effects, but also consequences for society at large.
In the face of such dilemmas, neuroethicists like Liao and Illes issue recommendations to safeguard patients and the wider population. In a letter published in the medical journal The Lancet, Illes and other colleagues requested a public registry of implantable neurological devices. They warn that some manufacturers have already recalled neurotechnological products from the market due to defects or due to their inability to maintain them, even after these devices were implanted in patients who depended on them. “It’s not only the responsibility of regulators and people like me,” Illes says: “It needs to be internally motivated by the developers in the trenches, those working on the technology itself. It’s everyone’s responsibility to attend to ethical considerations”.