20 September 2022

Eight Oddities of Artificial Intelligence

Estimated reading time: 5 minutes

Nowadays, many of humanity’s hopes are pinned on the development of Artificial Intelligence (AI), which is seen as a way to cure diseases, improve diagnostics or care for the environment. However, there are also many fears, motivated by the possibility that these algorithms could end up escaping human control.

In fact, some intellectual figures of the stature of the late physicist Stephen Hawking have reflected on the apocalyptic risk of these technologies, a warning that has been joined by others such as tech magnate Elon Musk and UN High Commissioner for Human Rights Michelle Bachelet. While the debate continues, here is a series of developments in recent years that will neither save the world nor bring about its demise, but rather serve to entertain us with the more curious side of AI.

The algorithm of sexual orientation

Can the sexual orientation of people be detected by their appearance? A team of researchers from Stanford University (USA) raised a controversial issue with the announcement of a study that described an algorithm supposedly capable of determining whether a person is heterosexual or homosexual by analysing their photos posted on an online dating network.

The researchers took the users’ information, publicly available on this website, and trained a neural network to recognise sexual orientation from their facial features and grooming. According to the study, the algorithm guessed correctly in 81% of cases for men and 74% for women, while a group of human evaluators only got 61% and 54% correct, respectively.

LGBT rights advocacy organisations protested what they considered “junk science,” while the researchers argued that their intention was specifically to warn of the risk of the loss of privacy posed by new technologies.

The bot that became racist on Twitter

When in March 2016 Microsoft researchers created a Twitter profile for Tay, their newly created AI conversation bot, they did not imagine that the experiment would last scarcely 16 hours.

Great hope is being placed in the development of AI, but there are also many fears that it could end up escaping human control. Credit: Gerd Leonhard

This was the length of time it took for the creators of Tay to be forced to disconnect their creature from the Internet, when they discovered that it had become harshly sexist, racist and xenophobic. In its more than 96,000 tweets, Tay went on to insult ethnic minorities, praise Hitler and deny the Holocaust, without neglecting to make crude comments.

While Tay’s case was particularly striking, it was not the only one. In 2021, a group of researchers found that AI systems tend to generate sexualised images of women, but more professional representations of men. Another experiment, in 2022, showed that an AI-animated robot applied racist and sexist stereotypes when manipulating a series of blocks with human faces: asked to choose the criminal, it more often picked a Black man; asked for a caretaker, a Latino; asked for a doctor, a man. Experts warn that these biases, far from being anecdotal, are problematic, and that work must be done to prevent them.

The two bots that invented their own version of English

One of the challenges the experts are studying is how AI algorithms process and interpret natural language. Something that humans learn to handle from childhood is a challenge for machines, as can be attested to by those in charge of Facebook’s Research Laboratory in Artificial Intelligence.

One of the challenges the experts are studying is how AI algorithms process and interpret natural language. Credit: Geralt

Researchers connected two bots—Alice and Bob—to each other to train their conversation and negotiation abilities, but they were soon astonished to discover that the machines were communicating with each other in sequences like this: “balls have zero to me to me to me to me to me to me to me to me.” Facebook explained that the bots had not been programmed to adhere to the syntax and grammar rules of natural language, and had therefore used English vocabulary to construct their own language code, with which they were able to understand each other easily.

Deduce the recipe from a photo of the food

How many times have we wanted the recipe for a dish we enjoyed? In a restaurant, though, we can never be sure the chef will be willing to reveal their best-kept secrets. Kitchen enthusiasts can now count on the help of an AI system, developed by the Massachusetts Institute of Technology, the Universitat Politècnica de Catalunya and the Qatar Computing Research Institute, which is able to deduce the ingredients and the recipe from a photo of the food.

The Massachusetts Institute of Technology has developed an AI system that is able to deduce the ingredients and the recipe from a photo. Source: Pxhere

After training the system on a million examples, the result is that Pic2Recipe gets the recipe right in 65% of cases. Of course, ingredients that are not visible in the photo are missed, but here the algorithm can accept some human help.
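The core idea behind systems like Pic2Recipe is retrieval: a photo and every known recipe are mapped into a shared numerical space, and the recipe closest to the photo wins. The sketch below illustrates that idea in miniature. The embedding vectors, recipe names and the `closest_recipe` function are all made up for illustration; in the real system, the embeddings come from neural networks trained on roughly one million image–recipe pairs.

```python
import math

# Toy "embeddings": hypothetical 3-dimensional vectors standing in for the
# learned representations a real image-to-recipe system would produce.
recipe_embeddings = {
    "spaghetti carbonara": [0.9, 0.1, 0.2],
    "chocolate cake":      [0.1, 0.9, 0.3],
    "caesar salad":        [0.2, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def closest_recipe(photo_embedding):
    # Retrieval step: return the recipe whose embedding is most
    # similar to the photo's embedding.
    return max(recipe_embeddings,
               key=lambda name: cosine(photo_embedding, recipe_embeddings[name]))

# Made-up embedding of a photo of a pasta dish.
photo = [0.85, 0.15, 0.25]
print(closest_recipe(photo))  # → spaghetti carbonara
```

The 65% success rate quoted above is, in this framing, simply how often the nearest recipe in the shared space turns out to be the right one.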

He loves me, he loves me not…?

We all know that love is forever… until it is over. The bad thing is that we never know whether the current love of our life will really be the one that lasts. At least, that is, until now. A group of researchers from the University of Southern California collected the conversations of 134 couples during therapy sessions over two years, and with all this material trained an AI system to predict whether or not the formerly happy couple would return to their prior blissful state.

The curious thing is that the machine does not focus on the content of the conversations, but only on the form—features such as intonation, the intensity of the voice or who speaks, when and for how long. And the results are astonishing—the AI system succeeded in 79.3% of the cases, surpassing the 75.6% obtained by a group of human experts who had access not only to the audio from the therapy sessions, but also to the videos.
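That "form, not content" approach can be sketched as a standard classification problem: each conversation is reduced to a handful of paralinguistic numbers, and a simple model learns to separate outcomes from those numbers alone. The example below is a minimal illustration of the idea, not the USC team's method: the feature definitions and the data are entirely synthetic, and a tiny logistic regression written in plain Python stands in for their model.

```python
import math
import random

random.seed(0)

# Each synthetic sample carries only "how" features, never "what" was said:
# (pitch variability, voice intensity, turn-taking balance). The two made-up
# clusters represent couples whose relationship did or did not improve.
def make_sample(improves):
    if improves:
        return [random.gauss(0.3, 0.1), random.gauss(0.4, 0.1),
                random.gauss(0.8, 0.1)], 1
    return [random.gauss(0.7, 0.1), random.gauss(0.8, 0.1),
            random.gauss(0.3, 0.1)], 0

data = [make_sample(i % 2 == 0) for i in range(200)]

# Minimal logistic regression trained by stochastic gradient descent.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.5
for _ in range(300):
    for x, y in data:
        p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
        g = p - y                      # gradient of the log-loss
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

def predict(x):
    p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
    return 1 if p >= 0.5 else 0

accuracy = sum(predict(x) == y for x, y in data) / len(data)
print(f"accuracy: {accuracy:.2f}")
```

On cleanly separated synthetic clusters like these, even this toy model scores far above chance, which is the point of the study: the acoustic form of a conversation carries real predictive signal, independent of the words themselves.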

God in a machine

We have seen countless times in film how an AI takes control of planet Earth and everything on it, becoming a kind of supreme being. A few years ago, a former Google engineer wanted to give AI this divine role, but with the declared intention that this cybernetic god would contribute to the betterment of society.

The protagonist of the story is Anthony Levandowski, a systems developer for self-driving cars who worked at Waymo, a company in the Google conglomerate, before founding his own company—Otto—dedicated to self-driving trucks. Otto was later acquired by Uber, but Levandowski was eventually fired by Uber and accused by his former employer Waymo of an alleged theft of trade secrets.

In 2017, Wired revealed the engineer’s unusual new project: a religious organization called Way of the Future, whose objective was “to develop and promote the realization of a Godhead based on AI, through understanding and worship of the Godhead, to contribute to the betterment of society.” Levandowski eventually closed the doors of his virtual church in 2021, but made it clear that he still believes in the premise that inspired its creation.

(Artificial) life after death

One of the most disturbing applications that AI has been exploring lately is to create virtual replicas of deceased people, something we have already seen in the series Black Mirror. In recent years there have been several initiatives along these lines that have hit the media: chatbots that not only imitate the voice of the deceased, but are able to converse just as the real person would have done, or at least that is what they try to do.

The application of AI in the (artificial) life after death ranges from virtual replicas of deceased people to chatbots that imitate the voice of the deceased. Credit: Kenny Orr

While opinions on this new use of AI can be hugely polarised, and there is an ethical debate about it, tech companies sense that there may be a market. Amazon wants to equip its digital assistant Alexa with the ability to emulate the voice of any person, even a deceased one. In the UK, a woman answered her relatives’ questions at her own funeral, as did actor Ed Asner, famous for the 1970s and 80s series Lou Grant.

Real sex with unreal people 

Sex dolls already have a long history, but instead of the crude inflatable models of yesteryear, modern materials and manufacturing techniques have made it possible to create hyper-realistic representations not only in appearance, but even to the touch. The next step is to equip these dolls with AI, and some companies are already working on this.

One example is the Californian company RealDoll, which has added Harmony to its catalogue of inanimate sex dolls: an AI-powered robotic head that is capable of showing expressions on its face, conversing and remembering its user’s sexual preferences. Robot sex is another application of AI that is stirring up strong controversy, but also an interest that does not seem to be waning.

Javier Yanes


Editor’s note: article updated on September 20th by Javier Yanes
