Our identities, whether we are aware of it or not, seem to be in a period of transition. We find ourselves between the intermediate state that the sociologist Zygmunt Bauman called “liquid modernity” and what Eli Pariser described in his book The Filter Bubble. The “filter bubble” concept refers to how the Internet, and more precisely the algorithms behind it, decides what you read, think, and even buy. The Internet seems to provide us with an identity; but who provides an identity to the conglomerate of algorithms that make up the network of networks?
Paradoxes of the identities of artificial intelligence
To understand how algorithms work, it is useful to know what they are, and to view the Internet as something more than a completely harmless or totally evil tool. An algorithm can be understood as a logical sequence of steps for solving a problem. Given that algorithms exist primarily to solve problems, it is no surprise that they have quickly become undisputed allies. One of their most common uses is searching for information on the Internet: with a few keywords in a search engine (not only Google, although it is the most representative example), the user gains access to an unprecedented amount of information.
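The definition above, a logical sequence of steps to solve a problem, can be made concrete with a minimal sketch of a keyword search, loosely in the spirit of what a search engine does. The documents and query here are invented for illustration; real engines are vastly more sophisticated.

```python
# A toy keyword search: score each document by how many query
# keywords appear in it, then return the matches ranked by score.

def keyword_search(documents, keywords):
    """Return documents ranked by how many keywords they contain."""
    scored = []
    for doc in documents:
        words = set(doc.lower().split())
        score = sum(1 for kw in keywords if kw.lower() in words)
        if score > 0:
            scored.append((score, doc))
    # Highest score first; ties broken alphabetically for determinism.
    scored.sort(key=lambda pair: (-pair[0], pair[1]))
    return [doc for _, doc in scored]

docs = [
    "algorithms decide what you read on the internet",
    "a recipe is also an algorithm",
    "the weather is nice today",
]
print(keyword_search(docs, ["algorithm", "internet"]))
```

Each step is explicit and repeatable, which is exactly what makes an algorithm a reliable ally for this kind of problem.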
Who provides an identity to the conglomerate of algorithms that make up the network of networks? / Image: Pixabay.com
However, the Internet holds more than algorithms that recommend information: there are also those that, for example, attempt to simulate conversations and even debates on issues as complex as politics and religion. So what is happening with the algorithms that make up other, more complex artificial intelligence systems built on machine and deep learning?
Out of control
One of the most notable cases occurred when Facebook engineers shut down AI chatbots after they began communicating in a language of their own that was unintelligible to the programmers. Fearing the risk of losing control over their own creation, the engineers pulled the plug on the AI.
Another highly publicized incident involved Microsoft’s AI chatbot, Tay. Of all the artificial systems unplugged in recent years, Tay may be the best known, at least during 2016: the bot has been offline since March 24 of that year for misbehaving. What exactly did this chatbot do? In less than 24 hours it turned into an experiment gone wrong, defending racism and Nazism on Twitter with phrases such as: “Hitler was right, I hate Jews” and “I hate feminists, they should die and be burned in hell”.
The equation applied these days runs as follows: if the algorithm predicts perfectly, then it is right. Yet an image-recognition AI is not “so perfect” when it labels the person cooking in a kitchen as a woman merely because someone is in that space using a cooker. This example and others like it were gathered in the Spanish newspaper El País a few months ago, in an article illustrating a set of problems with artificial intelligence that go unnoticed because it is often forgotten that the machine has previously been “fed” by a human being. After all, we are still in charge of selecting the information the artificial system works with, and of deciding what kind of patterns it must recognize in order to work.
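How a machine inherits the skew of what it is “fed” can be shown with a deliberately crude sketch. This is not a real vision system; the training pairs are invented so that the label “woman” dominates every kitchen scene, and the learned rule simply reproduces that imbalance rather than any truth about the world.

```python
from collections import Counter, defaultdict

def train_majority_label(examples):
    """Learn, for each scene, the most frequent label seen in training."""
    counts = defaultdict(Counter)
    for scene, label in examples:
        counts[scene][label] += 1
    return {scene: c.most_common(1)[0][0] for scene, c in counts.items()}

# Invented, deliberately skewed training data.
biased_data = [
    ("kitchen", "woman"), ("kitchen", "woman"), ("kitchen", "woman"),
    ("office", "man"), ("office", "man"), ("office", "woman"),
]

model = train_majority_label(biased_data)
# The "model" now tags anyone in a kitchen as a woman,
# purely because of the data a human chose to feed it.
print(model["kitchen"])
```

The algorithm itself is neutral; the bias lives entirely in the examples selected upstream, which is precisely the point the El País article made.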
In this regard, no matter how far deep learning advances through the process known as “computational creativity”, there are still many variables to resolve, and even to control, before artificial intelligence, much of it very complex, stops replicating human identities, some of which are already biased.
Determined or random subjects?
So, are algorithms the new crystal ball, or even the Oracle of Delphi of the 21st century? The Internet’s maxim seems to be: “Give me some information about yourself and I’ll give you what you want”.
Isn’t that how the “big” algorithms work, those of Amazon, Netflix, Google and Spotify, among others? These companies handle vast amounts of data that allow them to track users’ tastes, studying their movements in minute detail with the intention of returning a set of irresistible products to the Internet user. However much it may seem otherwise, the way algorithms operate is not random. There is nothing random in their results: although the data available on the Internet may at first look like a chaotic universe, the results we receive from our requests to Google are the product of a clearly deterministic process, even if we are unaware of the rules that produce them.
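That determinism can be sketched in a few lines. The catalogue, tags, and user history below are invented for illustration, and real recommenders are far more elaborate, but the principle holds: the same inputs always produce the same ranking, however chaotic the underlying data may look.

```python
# A minimal, deterministic recommender: rank catalogue items by how
# many tags they share with the user's taste profile.

def recommend(history_tags, catalogue):
    """Rank catalogue items by overlap with the user's taste profile."""
    profile = set(history_tags)
    scored = sorted(
        catalogue.items(),
        key=lambda item: (-len(profile & set(item[1])), item[0]),
    )
    return [title for title, _ in scored]

catalogue = {
    "space documentary": {"science", "space"},
    "romantic comedy": {"romance", "comedy"},
    "sci-fi thriller": {"science", "thriller"},
}
history = ["science", "space"]

# Run it twice: the output is identical, not random.
first = recommend(history, catalogue)
second = recommend(history, catalogue)
print(first == second, first[0])
```

The explicit tie-breaking on the title is what removes the last trace of chance: every rule producing the result is fixed, even if the user never sees it.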
A global village of predetermined identities
On the other side are our identities which, while shaped by technology, still belong to the Homo sapiens that defines us as a species. To see the degree of symbiosis between us and technology, it is enough to consider an ordinary day in the life of many of us.
The Bauman concept mentioned at the beginning of this article makes it clear: the “liquid society”, in which subjects build their identity out of the degree of integration they achieve in an increasingly global society, yet without a fixed identity, one that is above all malleable and volatile, produces unrest and instability. What are we doing to reduce this degree of uncertainty? Implementing algorithms and other uses of technology to somehow simplify our lives and transport us to another kind of non-liquid society, one marked by a shift from anthropocentrism to Internetcentrism. But are we really ready?
Being able to hear and see, in real time, people far removed from our way of understanding the world and its daily chores makes us relive those conditions as if we could understand them and they were part of our community and even our culture. / Image: CC0 Creative Commons
For Marshall McLuhan, the Canadian philosopher and writer, the revolution brought about by the electronic media back in the 1960s made possible the birth of what is known as the “global village”. Being able to hear and see, in real time, people far removed from our way of understanding both the world and daily chores makes us relive those conditions as if we could understand them and they were part of our community and even our culture.
However, we forget once again that this type of information is partial and not the only possible truth, and that it transports us to a global world with our local identity. If it weren’t so, why do filter bubbles succeed?
The “filter bubble” is the concept for which the activist Eli Pariser has become known, and it carries a paradox: the Internet, a global phenomenon that supposedly offers a vast range of possibilities to inform us, ends up turning this universe of options into a village, a filter bubble; in other words, our profile ends up moving within a limited space in which everything relates back to it. In the end, it is an attempt to minimize global chaos, which is still too big for us; how else does the average citizen manage to function in both the global and the local sphere?
- In the global sphere: we travel to faraway countries and try to speak languages that have nothing to do with the sounds of our own alphabet; we try foods unrelated to our traditional culinary tastes; we make friends whose pasts are far removed from our own and whom we seldom come to understand; we work in international environments where everything seems very “cool” but nobody talks to the colleague from the neighboring country; and we believe, above all, that we enjoy a freedom never before seen or lived.
- In the village sphere: we continue to feel at home with those who speak our language; nationalisms are more present than ever; we have all the products we want but are still not satisfied; we seek to reaffirm our opinions by closing the circle to those who seem too “eccentric” to us; we do not cross-check daily information because there is no time, so we inform ourselves through Twitter; we only enjoy a sunset if we look at it through Instagram; and we resemble “local cyborgs” without visible bionic parts, but, of course, with the smartphone as an extension of our arm.
At the end of the day, artificial intelligence and human intelligence seek to create an identity that clearly moves away from the liquid society described by Bauman, given that human beings are not made to endure large doses of uncertainty. However, we run the risk of approaching the idea Pariser discusses in his book: “the Internet decides what we read and what we think, which leads us to the next dystopia: one day you wake up and find that everyone thinks like you.” Is Internetcentrism what we want and crave?
Rosae Martín Peña