08 July 2019

“In the Future We Will Link our Brains with Artificial Intelligence Systems”

Estimated reading time: 6 minutes
  • “It’ll be interesting if the technology gets so good that you don’t know whether we will rebel against it or whether we will accept it even more”
  • “Humanity has to be techno-optimist rather than techno-pessimist”

In Kevin Warwick’s future, brain implants will enable humans to develop new senses or communicate with each other just by thinking. It is no coincidence: this Emeritus Professor at Coventry and Reading universities has been a staunch advocate of the potential benefits of hacking the human body, and not exclusively from a medical perspective. Infallible memory, brain-to-brain communication, body extensions in objects located hundreds of miles away and multidimensional thinking are just some of the possibilities that the professor has imagined and worked on in his laboratory — he implanted a chip in his arm in 1998, becoming the cyborg he needed to conduct all sorts of tests.

Warwick is aware of the dystopian scenarios that could arise from linking biology and machines, but argues that humans are also changing from an ethical point of view and no longer have the same certainties as before. What if we could use artificial intelligence to predict who is going to commit a crime before they do? Shouldn’t we stop them beforehand, even though under our current laws nobody can be detained until they actually do something?

In the context of the event organized by OpenMind and Diario Sur on artificial intelligence in the data era, OpenMind sat down with Kevin Warwick for a chat about artificial intelligence and technologically enhanced humans.

Q. Back in 2013 you answered some questions about robots and artificial intelligence for OpenMind. One of your thoughts was that in 10 to 15 years we might be able to build a robot with a brain the same size as a human brain. Six years have passed since then — do you still think we could achieve that milestone by 2025? How have things changed or developed in those six years?

A. Well, I don’t think anybody has yet built a brain the same size as a human brain, either biologically or technologically. But I think that, technologically — so not with biological brain cells but with computers — it has advanced, as artificial intelligence has advanced tremendously, particularly in the commercial sector, in the last five or six years. The understanding and commercial opportunities of using AI have also advanced dramatically in the last few years, and it’s become much more pervasive: it is much more around us, and it is being used much more without people realizing that it’s being used.

Q. Has it become harder to differentiate between human intelligence and artificial intelligence?

A. I think it has become a lot harder to define what is human intelligence and what is artificial intelligence. Humans now use artificial intelligence a lot more than they used to, and that has blurred both things. I think AI has been developed so that it does, first of all, interact with humans. Alexa, for example. A lot of people have that. It communicates like a person. It has a memory, like a person. Many people feel AI has become more human-like in many ways with the introduction of Alexa or other systems that you can interact with. People regard them… they know it’s technology, but it’s like a person. So there’s “Alexa, tell me this. Could you do that? What’s this?” It’s almost like a slave, to be honest. But at the moment, the interesting thing is that you know that it is technology. In the few years ahead we could get into this uncanny valley, when the technology becomes too human-like and we stop thinking of it as technology, or we worry about it. For humans, if it’s physical and it looks too much like a human, we tend to rebel against it; we don’t want that because we are uncomfortable with it. It’ll be interesting if the technology gets so good that you don’t know whether we will rebel against it or whether we will accept it even more. I don’t know, until we get there, until we flip over and Alexa has become so good that it’s like a human, but it’s in a box. How are we going to treat that? I don’t know. And that’s the way it’s going at the moment.

Kevin Warwick in the event “Artificial Intelligence: Present and Future” that took place in Málaga. Credit: Diario Sur

Q. When the time comes, will we need a new and improved Turing test in order to tell robots apart from humans with more certainty?

A. Well, the Turing test, of course, is just about communicating like a human — getting a machine to communicate as a human does. It doesn’t really deal with other aspects of intelligence. I think it’s a fantastic test and great fun to look at what’s going on. And I think, therefore, yes, we could have a new form of Turing test; whether it would still be the Turing test, I don’t know.

But really, looking at the whole entity, looking at human intelligence, there are all sorts of questions. Now, are we going to have machines with the same sort of intelligence as a human — which is much more than the Turing test? That would be interesting, to have a test like that, for human-level or human-type intelligence. The problem is that if we get to that stage with AI, it’s very dangerous, because okay, it’s level now, but human intelligence will improve slowly if we don’t do anything more, whereas AI is going to shoot ahead, so within a few years it’s going to be an awful lot better than human intelligence. And then we’ve got the problem of who’s controlling it. Because if we’re thinking stupid things and making stupid decisions, and the AI knows it, is it going to put up with us still controlling things?

Q. What future is more likely to happen: a future with robots as intelligent as (or more intelligent than) humans or a future with enhanced humans thanks to robotics?

A. For me, humans enhancing themselves with AI is the way we have to go. The alternative — having machines that are more intelligent than humans — is a very, very dangerous scenario. Who’s going to make the decisions? It will be the machines, not the humans. And where are we going to end up? We may not even be around, as a result of that. So I think upgrading humans is the way we’ve always gone, so far with physical things: aeroplanes, which we use to improve how we travel, or communication systems, which we use to improve how we communicate. So I think with AI, ultimately, we will link much more closely with it and use it to improve how we are. If we don’t, then that’s not good for the human race.

Q. If we take a look at that possible future with enhanced humans, most probably not everybody will be able to afford to “improve” themselves with robotics or implanted chips. What can we do about the inequality issues that could arise?

A. Inequality is an interesting one because, as humans, we go for it. We can put it in the context of being better, but what does that mean with a lot of technology? Even if we can communicate in a better way — and that becomes the critical thing for us to do — it doesn’t necessarily make us better humans. In fact it might change how we are, give us abilities that most humans don’t have.

I don’t know what we do about the inequality. It has always happened with human development. Some people get new technology — even the wheel or fire — and they make use of it, with weapon systems for instance. We Europeans have been pretty awful in the past, using that technology to go and exploit other countries. I’m British; we have a long history of exploiting other countries in that way.

So my suspicion is that, as a society, we will probably do a lot more of that. We will use the technology, link more closely with it, and allow it to create inequalities; we will use it for financial and political gain, rather than trying to put a level playing field out there. It’d be nice to think we’ll spread the technology out. But realistically, I doubt that will happen.

Kevin Warwick in the event “Artificial Intelligence: present and future” that took place in Málaga. Credit: Diario Sur

Q. In spite of everything, do you think humanity should be techno-optimist?

A. I think humanity has to be techno-optimist rather than techno-pessimist. The techno-pessimist future is very easy. You know, when it gets out of hand or ultimately the technology takes over — the Terminator scenario. We have to say, you know, that that is not completely out of the question if we’re talking about the use of AI. The “I” means intelligence, and if you’re developing machines that are far more intelligent than you are, and the networks have all sorts of advantages… It’s a dangerous direction to go.

I would like to see myself as a techno-optimist: to believe that the Terminator scenario is not going to happen, and that we’re going to upgrade ourselves with technology, link with it — literally link our brains with AI systems. We’ll communicate in whole new ways, we will understand the world around us in a much more complex way, maybe we’ll understand it in more dimensions — which is an advantage of AI — and we’ll be able to travel into space, which we can’t really do at the moment because we think in 3-D, and that restricts our thinking.

I think it will change us in ways that are positive scientifically, but I don’t think it will make us better as humans — it will make us very different.



Sara González for OpenMind


