The current generation of AI has transformed the world we live in. In the last decade, deep learning algorithms have enabled AI systems to perform pattern recognition tasks, such as speech and image recognition, as well as or better than humans. However, there is general agreement that this technology will not, on its own, be enough to deliver human-level intelligence. For although AI algorithms may outperform humans at some tasks, they understand very little about what they are doing. Some prominent AI researchers argue that understanding is necessary because even if an AI program can solve a problem as competently as a human, it may lack the broader context needed to judge the value of its solution. In other words, AI systems are currently incapable of attributing meaning, in a human sense, to what they are doing. This is a problem of consciousness rather than intelligence. In this article, I explain the role machine consciousness could play in improving AI systems.
There is a difference between intelligence and consciousness. In simple terms, intelligence is the ability to solve problems. So, for example, if an AI chess program can beat most humans, then it’s reasonable to attribute intelligence to that program – since good chess playing is considered to require intelligence when done by a human.
It’s a different matter when it comes to consciousness, because it’s very difficult to find an objective definition that captures its essence. Some say consciousness is the state we are in when awake as opposed to asleep, but this won’t do, because dreaming is also a state of consciousness. Furthermore, such a definition is too narrow, because it says nothing about what happens to us every minute of our lives. We all know from our own experience what consciousness is, but it’s very difficult to define. It reminds me of what the jazz trumpeter Louis Armstrong said when he was once asked to define jazz: “If you gotta ask, you ain’t ever gonna know”. It’s the same for consciousness.
We all struggle with a definition, but we all know what it feels like to jump into a cold swimming pool, to taste ice-cold water on a very hot sunny day, or to feel the elation of hearing a golden oldie on the radio that we had almost forgotten. We can envisage many good, bad, and neutral experiences that help create the reality of consciousness. We all have inner subjective experiences every day of our lives – philosophers call these experiences qualia.
These qualia experiences bring meaning to our lives, but could an artificial consciousness ever experience the meaning of reality in the same way? When we think of a horse, for example, all our senses of vision, sound, smell, touch, and so on are brought together in our minds to evoke memories of a horse. We visualize a four-legged furry animal running through fields, making neighing sounds, and having a distinctive smell. We can visualize many other horse-related experiences, such as the horse pulling a carriage or standing in a stable. These conscious experiences are the source of our “common sense” understanding of the world. There are AI systems, like CYC, that attempt to capture this human common-sense knowledge. CYC knows facts and relationships about a horse, but it cannot attribute meaning and understanding in the way we humans do.
Does it matter? Yes, because AI programs have difficulty handling new and unexpected situations, however rarely they arise. For example, when we drive a car we use our flexible understanding of the world to deal with unusual situations we may never have encountered before – unlike an AI program, which would use machine learning algorithms to decide from previously learned driving examples. Most of what we do whilst driving is routine and easily replicated by AI. But imagine an earthquake striking during my journey, causing the road to subside. My instinct would tell me, subconsciously, to slow down. I would probably take other instinctive actions, like avoiding being on or near a bridge – since there could be a possibility of the bridge collapsing. Common-sense knowledge may not be enough, because there are infinitely many ways such an event could play out; it could never all be made explicit.
What would an AI program do in such a situation? You might think it possible to add more learned cases to its knowledge base, but how can this be done for cases that have never been encountered? The problem is that the AI program would have to act and reason in an unencountered situation. Such situations may be extremely rare, but they can happen. Furthermore, rare events of this kind are becoming more common. Nassim Nicholas Taleb, a mathematical statistician, studied the concept of very rare events, which he called “black swan” events, in the aftermath of the 2008 world banking crash. In his book The Black Swan, he shows how rare events with catastrophic consequences are inappropriately rationalised in hindsight. The book takes its name from the old belief that observing a black swan was impossible – all swans were thought to be white. It’s hard to disagree with his claim. For example, who could have foreseen the Covid-19 pandemic? A black swan event if ever there was one.
What this means is that giving AI programs common sense may not be enough to deal with unencountered situations, because it’s difficult to know the limits of common-sense knowledge. It may be that artificial consciousness is the only way to give a machine meaning. Of course, artificial consciousness would be different from the human variant. Philosophers such as Descartes and Daniel Dennett, the physicist Roger Penrose, and many others have proposed different theories of how the brain produces conscious thought from neural activity. Neuroscience tools like fMRI scanners might lead to a better understanding of how this happens and enable a move to the next level of humanizing AI. But that would involve confronting what the Australian philosopher David Chalmers calls the hard problem of consciousness: how can subjectivity emerge from matter? Put another way, how can subjective experiences emerge from neural activity in the brain? Furthermore, human consciousness can only be known through our own inner experience – the first-person perspective. An artificial consciousness, by contrast, would only be accessible to us from the third-person perspective. This raises the question of how we will know whether a machine is conscious. I will address this issue in more detail in my next article.
Marcus, G. and Davis, E. Rebooting AI: Building Artificial Intelligence We Can Trust. Pantheon Books, 2019.
Taleb, N. N. The Black Swan: The Impact of the Highly Improbable, Second Edition, 2010.