Derobotizing Human Thought – a Sustainable Response II

12 January 2022

Estimated reading time: 5 minutes

This second part continues the first article, published a few weeks ago under the title Derobotizing Human Thought – a Sustainable Response I.

For a better understanding of the ideas developed in this article, the reader is encouraged either to revisit the first article to get a sense of the plot, or at least to consider the following questions: What is the name of the subfield of AI that tries to encode certain philosophical theories in machines, with the aim of designing an artificial moral intelligence? And is there any human emotion worth investigating in depth, for the positive impact it could have on breaking unethical behavior patterns and on understanding the emotional biases in our decision making?


DESIGNING AN ARTIFICIAL MORALITY, ARE WE REALLY AN EXAMPLE OF RIGHT AND WRONG?

If one looks at human behavior in the broadest sense, one of its hallmarks is the ability to make decisions. And if the right decisions are made, this ability can in turn be read as a sign of intelligence. But how does one measure the rightness of a decision?

Many of the decisions we make in our daily lives require a practically unconscious, automatic response and have no impact beyond the person who makes them: "I'm sleepy, so I'm going to sleep." Other decisions require a more sophisticated kind of reasoning, both because their impact extends to many people in the context in which they are made, and because they are often subject to phenomena that are uncertain or unlikely. An example of this type of decision is the one Dr. Koettlitz once had to make (a story described in detail in the first part of this two-part series), on whose success or failure the lives of many Arctic explorers depended. Two other representative current cases are the measures that politicians and other stakeholders are taking to tackle the Coronavirus pandemic, and those intended to mitigate the effects of climate change in the short and long term.

Machine ethics is about "adding an ethical dimension to machines." Credit: Pixabay

The interesting thing is that a field of human behavior as important as decision making could also be affected, like many others, by the introduction of artificial intelligence systems in the not too distant future. This would involve introducing intelligent agents equipped with, or to use the technical jargon, coded with, an ethical dimension. Hence concepts such as "machine ethics", "machine morality", or "artificial morality", among others. One definition of machine ethics is offered by Michael Anderson and Susan Leigh Anderson in Machine Ethics (2007): machine ethics is about "adding an ethical dimension to machines". In other words, it is about exploring the technological and philosophical issues that would have to be resolved in order to design an artificial morality for intelligent systems, so that these artificial agents could acquire more and more autonomy in their decision making, to the point where a human agent would no longer have to review their work. On certain accounts, we would be dealing with artificial intelligence systems endowed with such a degree of autonomy that they would be able to make their own decisions and function on their own.

So then, how does one go about teaching ethics or morals to algorithms?

For the time being, the effort is focused on trying to codify theories or principles such as Asimov's Laws of Robotics, Kantian ethics, or the principles of utilitarianism, among others. The truth is that experts in this field have reached neither consensus nor workable solutions. The usual criticism is that many ethical problems lend themselves neither to a single solution nor even to an algorithmic one. This may be so not because it is technically impossible to implement some type of morality or ethics, but because, when it comes to that morality or ethics, we human agents still have things to learn ourselves. After all, if even we have not been able to find an adequate solution to conflicts such as those presented throughout these two articles, how can we be the "teachers of ethics and morals" of these "intelligent" artificial systems?
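To make the difficulty concrete, here is a minimal sketch of what "codifying" one such theory, act utilitarianism, can look like. Nothing here comes from a real machine-ethics system; the names (Action, choose_action) and every number are hypothetical assumptions for illustration only.

```python
# A hypothetical sketch of act utilitarianism as a decision rule.
# All names and numbers below are invented for illustration.

from dataclasses import dataclass


@dataclass
class Action:
    name: str
    impacts: dict[str, float]  # estimated benefit (+) or harm (-) per affected party


def utilitarian_score(action: Action) -> float:
    """Act utilitarianism: an action's value is the sum of the welfare it produces."""
    return sum(action.impacts.values())


def choose_action(actions: list[Action]) -> Action:
    """Pick the action with the greatest total expected welfare."""
    return max(actions, key=utilitarian_score)


options = [
    Action("strict_quarantine", {"patients": +8.0, "businesses": -3.0}),
    Action("no_restrictions", {"patients": -9.0, "businesses": +1.0}),
]
print(choose_action(options).name)  # -> strict_quarantine
```

The algorithm itself is trivial; everything contested, such as whose welfare counts and how harms are weighed against benefits, is smuggled into the impact numbers that a human must supply, which is precisely where the experts disagree.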

It may not be enough to provide artificial intelligence programs with common sense to be able to handle unprecedented situations

At this point it also seems fair and prudent to ask whether, to achieve this end, these artificial systems will require some form of emotions or, perhaps more accurately, an understanding of them, together with a theory of mind and a grasp of the semantic context of symbols; it may even be necessary for them to be present in the world with an "artificial body," a line of research known as "situated cognition."

THE PAIN OF DATA: EXPERIENCING REGRET

The Brundtland Report, which defines sustainability or sustainable development as "development that meets the needs of the present without compromising the ability of future generations to meet their own needs," was presented in Oslo in March 1987. However, as is often the case, this type of definition deals with abstract concepts that encompass many aspects but neglects concrete ones.

Thus, it is worth remembering that Dr. Koettlitz's hubris and arrogance claimed the lives of several explorers without any accountability. Had there been an artificial intelligence to bring Lind's research on scurvy to the table, the story might have been different, and Dr. Koettlitz might have had to answer in court for his decision making. The same is true for the Coronavirus crisis and for climate change. As the researchers Celuch, Saxby, and Oeding (2015) make clear in their article The Influence of Counterfactual Thinking and Regret on Ethical Decision Making, if those who transgress ethics could experience the feeling of regret for a bad decision before executing it, perhaps they would act differently. It seems, then, that one path towards this much talked-about sustainability could be to train ourselves in the capacity to experience, in an imaginary way, the pain that our decisions could cause others. For the time being, emotional biases are the least studied, and therefore seemingly the least understood, of all biases, which makes them in turn the most dangerous to date.

One path towards this much talked about sustainability could be to train ourselves in this capacity to experience in an imaginary way the pain that we could cause to others with our decision-making
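One classical way to make anticipated regret computable is the minimax-regret criterion from decision theory; to be clear, this is a standard textbook idea, not a method from the cited paper. A minimal sketch follows, with entirely made-up scenarios and payoffs loosely inspired by the Koettlitz dilemma.

```python
# A hedged sketch of the minimax-regret criterion (Savage), offered as one
# way to operationalize "anticipated regret" before a decision is executed.
# The scenarios, options, and payoffs below are invented for illustration.

def worst_case_regret(option: str, outcomes: dict[str, dict[str, float]]) -> float:
    """Across all scenarios, the biggest gap between this option's payoff
    and the best payoff achievable in that scenario."""
    return max(
        max(payoffs.values()) - payoffs[option]
        for payoffs in outcomes.values()
    )

# Scenario -> option -> payoff (made-up numbers).
outcomes = {
    "lind_was_right": {"stock_fresh_food": 10.0, "tinned_food_only": -10.0},
    "lind_was_wrong": {"stock_fresh_food": -1.0, "tinned_food_only": 1.0},
}

for option in ("stock_fresh_food", "tinned_food_only"):
    print(option, worst_case_regret(option, outcomes))
# stock_fresh_food carries a worst-case regret of 2; tinned_food_only, 20.
# A regret-averse decision maker therefore stocks fresh food.
```

The exercise is imaginary pain made arithmetic: before acting, one asks how much one could come to wish one had chosen otherwise, and avoids the option whose hindsight could hurt the most.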

On a final note, perhaps it is time for each of us to learn from the precision and perfection with which a robot performs its tasks, and to treat that precision as a goal towards which to direct our lives. I say this because it is only at the end of our lives that we can truly judge whether we have learned enough to polish our contradictions and our mistakes. Indeed, this perfection, which here stands for sustainability, has nothing to do with a robotization of the human; rather, the goal described above should be paired with the motivation to possess the most human of hearts, which would ultimately be equivalent to "feeling the pain of others."

Rosae Martín Peña
