Imagine a council of ministers gathered to decide how to deal with the outbreak of a conflict between two ethnic groups in their country. Could tools such as artificial intelligence predict the impact of the proposed measures and thus help decide whether to adopt them? Questions like this are emerging among social scientists, drawn by the possibility of harnessing emerging technologies to build predictive models and simulations capable of supporting policymakers and administrators. Optimistic voices see a great future for this technology on issues such as forecasting migration flows or containing epidemics; others are more sceptical. The issue is hotly debated: how far can this technology really go?
Increasingly powerful computational capacity now makes it possible to create simulations of complex social phenomena. Of particular interest is the role of Multi-Agent Artificial Intelligence (MAAI), a technique for artificially simulating situations in which many different actors interact simultaneously, forming a virtual society reminiscent of the video game The Sims.
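To give a feel for the mechanics, here is a minimal sketch of a multi-agent simulation of the kind described, applied to epidemic containment. It is a toy model written for illustration, not any of the research systems mentioned in this article, and all names and parameter values in it are invented:

```python
import random

class Agent:
    """One simulated person, with a health state: S (susceptible),
    I (infected) or R (recovered)."""
    def __init__(self, state="S"):
        self.state = state

def step(agents, rng, infect_prob=0.05, recover_prob=0.1, contacts=5):
    """One tick of the virtual society: each infected agent meets a few
    random others, may pass on the infection, and may recover."""
    for agent in agents:
        if agent.state == "I":
            for other in rng.sample(agents, contacts):
                if other.state == "S" and rng.random() < infect_prob:
                    other.state = "I"
            if rng.random() < recover_prob:
                agent.state = "R"

def simulate(n_agents=500, n_seed=5, n_steps=100, seed=42):
    """Run the society forward and report how many agents end in each state."""
    rng = random.Random(seed)
    agents = [Agent("I" if i < n_seed else "S") for i in range(n_agents)]
    for _ in range(n_steps):
        step(agents, rng)
    return {s: sum(a.state == s for a in agents) for s in "SIR"}

print(simulate())
```

The point of such a model is not the code itself but that global outcomes (how far the outbreak spreads) emerge from many simple local interactions, and a policy lever, say, lowering `contacts` to mimic distancing, can be tested by re-running the simulation with different parameters.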
One example is the artificial model that an international research team created to evaluate, through digital experiments, the potential impacts that the arrival of a group of refugees could have on a Western European country. It is not the only one. “We’ve made models that attempt to predict obesity rates in the United States or the likelihood of violent conflicts between religions, to name a few,” Saikou Diallo of Old Dominion University in Virginia (USA), one of the researchers involved in these studies, told OpenMind. These simulations can also be used to analyse subjects such as the integration of autonomous vehicles in cities, he adds.
Diallo says the added value of MAAI is “the ability to compile detailed policy questions and measures and to understand the nuances involved in the study of social systems.” The result is that “totally unpredictable behaviour” can be observed and “very innovative” simulated solutions can be tried out, since in the virtual environment there is no need to take into account the moral and ethical constraints of the real world, he adds.
As an experimental measure in this context, he cites insurance companies offering workers vouchers to encourage physical exercise, thereby counteracting the problem of obesity. In short, the researcher says, the goal of these types of models is “to support the policy so that it has a daily, long-term impact.”
Why did Boris Johnson win?
Flaminio Squazzoni, a professor at the University of Milan (Italy), suggests that another topic of interest is the study of “the polarisation of collective opinions” towards the extremes, as happened with Brexit and other current political phenomena whose dynamics puzzle international observers. For the moment, he says, the models behind these “virtual laboratories” remain at a “theoretical” level, useful for understanding basic mechanisms. Why did Boris Johnson sweep the latest elections in the United Kingdom? Calibrating the simulations against real-life situations is key to answering such a question, adds Squazzoni.
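One classic “virtual laboratory” for opinion polarisation is the bounded-confidence family of models, in which agents only compromise with others whose opinion is already close to their own. The sketch below is a generic toy version of that idea (a Deffuant-style rule), offered purely as illustration; it is not one of Squazzoni’s models, and the parameter values are arbitrary:

```python
import random
import statistics

def opinion_dynamics(n=200, tolerance=0.2, convergence=0.5,
                     steps=20000, seed=1):
    """Bounded-confidence model: random pairs of agents meet and pull
    their opinions (numbers in [0, 1]) towards each other, but only if
    the opinions already differ by less than `tolerance`."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        gap = opinions[j] - opinions[i]
        if abs(gap) < tolerance:  # agents ignore views too far from theirs
            opinions[i] += convergence * gap
            opinions[j] -= convergence * gap
    return opinions

# A narrow tolerance leaves the population frozen in separate opinion
# clusters (polarisation); a wide one pulls everyone towards consensus.
polarised = opinion_dynamics(tolerance=0.1)
consensus = opinion_dynamics(tolerance=0.8)
print(statistics.pvariance(polarised), statistics.pvariance(consensus))
```

The interest of such a model is exactly the “basic mechanism” Squazzoni describes: a single micro-rule (refusing to engage with distant opinions) is enough to split a simulated society into camps, without any external shock.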
The potential of these computational systems of sociology draws the attention of politicians and administrators. “Testing policies in a scenario simulator can change the way decisions are made,” says Squazzoni. In Diallo’s opinion, multiagent models allow users to “design decisions” that, within the simulation, “will change history.” But, he adds, one must remember that simulations “live in their own time and space, so it does not directly imply that the things that happen in the simulation will happen in the real world.”
Derek Groen of Brunel University London stresses the limitations of these techniques. According to him, systems based on Artificial Intelligence or Big Data can run into problems when the available data are biased, or when the phenomenon being analysed involves aspects subject to uncertainty. Furthermore, he adds, obtaining a reliable simulation by combining these technologies may in some cases require so much validation work that it ceases to be worthwhile. “Simulating all aspects of a city or the world with MAAI is a ridiculous idea,” he says.
Groen has already expressed scepticism about using artificial intelligence to predict the destination of migrants forced to move by the impact of climate change, a topic that interests some institutions and tech companies. In an article in The Conversation, he argued that the factors involved are too numerous to confidently predict with technology where the refugees will go. The lack of background information on the dynamics of certain migration flows, as well as political variables, can alter the predictions.
When data intersect with ethics
For his part, Squazzoni invites us not to confuse computer simulation models with predictive systems based on Big Data. The former, he says, are built on the experimental method and allow hypotheses to be formulated and validated in a digital environment, with the aim of “understanding the background mechanisms” of social phenomena rather than predicting them. The latter, by contrast, are shaped by data that were not collected in a pre-designed experiment, but gathered from varied sources and for other purposes.
Therefore, in his opinion, predictive systems based on Big Data can help to anticipate the evolution of trends, but not to analyse the root causes of those trends. Thus, what allows us to anticipate phenomena like Brexit is not an increase in the volume of data collected, but an in-depth understanding of what generated them, Squazzoni asserts.
As is often the case with new technologies, there are also ethical objections. Diallo says that virtual simulation models can pose “very dangerous” scenarios precisely because of the lack of moral restrictions in the digital environments in which they are developed. Groen warns that if access to a predictive tool is restricted, a particular “organization or faction” could use it to take advantage of groups of people who do not know how to use it. An example, he adds, would be a tool that could accurately predict the likelihood of a particular city being attacked on a specific day during a war.