Nowadays, many of humanity’s hopes are pinned on the development of Artificial Intelligence (AI), which is seen as a way to cure diseases, improve diagnostics or care for the environment. However, there are also many fears, motivated by the possibility that the algorithms could end up escaping human control.
But while intellectual figures such as Stephen Hawking ponder the apocalyptic risk of these technologies, here are a number of recent developments that will neither save the world nor bring about its demise, but rather serve to entertain us with the more curious side of AI.
The algorithm of sexual orientation
Can people’s sexual orientation be detected from their appearance? A team of researchers from Stanford University (USA) raised a controversial issue last September with the announcement of a study describing an algorithm supposedly capable of determining whether a person is heterosexual or homosexual by analysing their photos posted on an online dating network.
The researchers took the information from the users, publicly available on this website, and trained a neural network to recognize sexual orientation from their features and their grooming. According to the study, the algorithm guessed correctly in 81% of cases for men and 74% for women, while a group of human evaluators only got 61% and 54% correct, respectively.
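Figures like “81% for men” are simply classification accuracy: the fraction of cases where the model’s guess matches the known label. A minimal sketch, using invented labels and predictions rather than anything from the Stanford study:

```python
# Illustrative only: how an accuracy figure such as "81%" is computed.
# Both lists below are hypothetical; 1 and 0 stand for the two classes.
true_labels = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # ground truth
model_preds = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]   # model outputs

def accuracy(preds, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == t for p, t in zip(preds, labels))
    return correct / len(labels)

print(accuracy(model_preds, true_labels))  # 0.8, i.e. 80% correct
```

The human evaluators’ 61% and 54% were measured the same way, which is what makes the comparison direct.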
LGBT rights advocacy organizations have protested what they consider “junk science,” while the researchers argue that their intention is specifically to warn of the risk of the loss of privacy with new technologies.
The bot that became racist on Twitter
When in March 2016 Microsoft researchers created a Twitter profile for Tay, their newly created AI conversation bot, they did not imagine that the experiment would last scarcely 16 hours.
This was the length of time it took for the creators of Tay to be forced to disconnect their creature from the Internet, when they discovered that it had become harshly sexist, racist and xenophobic. In its more than 96,000 tweets, Tay went on to insult ethnic minorities, praise Hitler and deny the Holocaust, without neglecting to make crude comments.
According to what the project managers later explained, Tay had been the victim of malicious users who had deliberately led it toward this ideological terrain, taking advantage of the bot’s inability to establish ethical criteria.
The two bots that invented their own version of English
One of the challenges the experts are studying is how AI algorithms process and interpret natural language. Something that humans learn to handle from childhood is a challenge for machines, as can be attested to by those in charge of Facebook’s Research Laboratory in Artificial Intelligence.
Researchers connected two bots—Alice and Bob—to each other to train their abilities in conversation and negotiation, but they soon discovered with stupefaction that the machines were communicating between themselves with sequences in this style: “balls have zero to me to me to me to me to me to me to me to me.” Facebook explained that the bots had not been programmed to adhere to the syntax and grammar rules of natural language and therefore had used English vocabulary to construct their own language code with which they were able to easily understand each other.
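One hedged way to see why such output can still be functional: if the bots are rewarded only for closing deals, repetition can serve as a perfectly workable code for quantities. The toy encoder/decoder below is an invented illustration of that idea, not Facebook’s actual system:

```python
# Toy sketch (assumption, not Facebook's code): agents optimised only for
# negotiation success have no reason to stay grammatical, so repeating a
# phrase n times is a usable way to encode the number n.
def encode_offer(item, count):
    # e.g. encode_offer("balls", 3) -> "balls have zero to me to me to me"
    return f"{item} have zero" + " to me" * count

def decode_offer(message):
    # Recover the item name and the repetition count from the message.
    head, _, tail = message.partition(" have zero")
    return head, tail.count("to me")

msg = encode_offer("balls", 3)
print(msg)                 # balls have zero to me to me to me
print(decode_offer(msg))   # ('balls', 3)
```

Both bots “speak” the same invented code, so communication stays lossless even though the sentences look like nonsense to us.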
Deduce the recipe from a photo of the food
How many times have we wanted the recipe for a dish we enjoyed in a restaurant, knowing the kitchen is unlikely to reveal its best-kept secrets? Kitchen enthusiasts can now count on the help of Pic2Recipe, an AI system developed at the Massachusetts Institute of Technology that is able to deduce the ingredients and the recipe from a photo of the food.
After training the system with a million examples, the result is that Pic2Recipe gets the recipe right in 65% of cases. Of course, ingredients that are not visible in the photo go undetected, but for those the algorithm can rely on some human help.
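At its core, this kind of system can be thought of as retrieval: the photo is turned into a numeric embedding, which is matched against the embeddings of known recipes. The sketch below is a deliberate simplification with invented three-dimensional vectors and recipe names, not MIT’s model:

```python
import math

# Hypothetical recipe embeddings; a real system would learn
# high-dimensional vectors from a million photo/recipe pairs.
recipe_embeddings = {
    "tomato soup":    [0.9, 0.1, 0.0],
    "chocolate cake": [0.0, 0.8, 0.6],
    "caesar salad":   [0.2, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def nearest_recipe(photo_embedding):
    """Return the recipe whose embedding is most similar to the photo's."""
    return max(recipe_embeddings,
               key=lambda name: cosine(photo_embedding, recipe_embeddings[name]))

print(nearest_recipe([0.85, 0.2, 0.1]))  # tomato soup
```

The 65% figure then corresponds to how often the top-ranked recipe is the right one.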
He loves me, he loves me not…?
We all know that love is forever… until it is over. The bad thing is that we never know if the current love of our life will really be the one that lasts. At least, that is, until now. A group of researchers from the University of Southern California has collected the conversations of 134 couples during therapy sessions over two years and with all this material has trained an AI system to predict whether or not the formerly happy couple will return to their prior blissful state.
The curious thing is that the machine does not focus on the content of the conversations, but only on the form—features such as intonation, the intensity of the voice or who speaks, when and for how long. And the results are astonishing—the AI system succeeded in 79.3% of the cases, surpassing the 75.6% obtained by a group of human experts who had access not only to the audio from the therapy sessions, but also to the videos.
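A minimal sketch of what “form, not content” means in practice: the classifier only ever sees acoustic and turn-taking measurements. The features, weights and threshold below are all invented for illustration; they are not the USC model:

```python
# Illustrative sketch only: the classifier receives numbers describing
# HOW the couple talks (all features and weights here are hypothetical),
# never a transcript of WHAT they say.
def predict_reconciliation(features):
    score = (0.5 * features["pitch_variability"]      # intonation
             + 0.3 * features["speaking_time_balance"] # who speaks, for how long
             - 0.4 * features["interruption_rate"])    # turn-taking conflict
    return score > 0.3  # hypothetical decision threshold

couple = {"pitch_variability": 0.7,
          "speaking_time_balance": 0.8,
          "interruption_rate": 0.5}
print(predict_reconciliation(couple))  # True
```

The surprise in the study is that signals this shallow beat experts who could also watch the video.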
God in a machine
In the cinema, we have seen time and time again how AI takes control of planet Earth and all that inhabits it, becoming a kind of Supreme Being. Now, a former Google engineer seems to want to make that happen.
The protagonist of the story is Anthony Levandowski, a systems developer for self-driving cars who worked at Waymo, a company in the Google conglomerate, before founding his own company—Otto—dedicated to self-driving trucks. Otto was later acquired by Uber, but Levandowski was eventually fired by Uber and sued by his former employer Waymo for an alleged theft of trade secrets.
In September, Wired magazine revealed the unusual new project of this engineer—Levandowski has created a religious organization called Way of the Future, whose objective is “to develop and promote the realization of a Godhead based on AI, through understanding and worship of the Godhead, to contribute to the betterment of society.” Of course, as in all religions, the stated goal is laudable, but little is known about this “techno-futurist church” and how the project will be carried out.