With the possible exception of nanotechnology and biotechnology, no other technology is seen to hold as much promise to improve the lives of citizens as digital technology. These technologies are distinguished by their ubiquity and multiple aspirations for their use. Digital technologies are implicated in how we work, shop, learn and play and they have a vital role in empowering individuals and communities. The application of these technologies is expected to increase productivity and competitiveness, change education and cultural systems, stimulate social interchange and democratise institutions. Nevertheless, there are constant calls for reassessment of their governance when the promised benefits are accompanied by real or perceived threats to consumers and citizens. The spread of these technologies throughout society is challenging taken-for-granted assumptions about power, privilege and influence in society. It is urgent to assess whether these aspirations are being fulfilled, since the claimed benefits may turn out to be empty promises or only crude approximations of profound transformations.
From social media driven by algorithms, to the Internet-of-Things (such as Amazon’s Alexa smart home devices), to tests of driverless cars, personal care robots, wearable computers, robot lawyers for personal justice, and scalper bots to purchase tickets or discount deals, for some, the digital world is creating new potentials for making the world a more equitable place. Over the past several decades, innovation in digital technologies has occurred at the intersections of established industry boundaries. Technological convergence is offering many novel ways of configuring digital components, but it is generally associated with market consolidation. Although some see convergence in the digital marketplace as a welcome development, others see it as reproducing power asymmetries in society. Convergent technologies and market consolidation appear to be leading to new structures of hierarchical control and inequalities that are enriching the welfare of the few at the expense of the many. These same developments are also seen by some as yielding anarchic wastelands, interspersed with walled online spaces, and admitting only those who submit to the authority of particular digital service providers. What if the providers of digital technologies and services are following a misguided pathway with negative consequences for all human beings?
Digital Innovation: Benefits and Problems
All these developments are influenced by policy and regulation as well as by the values designed into digital technologies. Because of their modularity, as these technologies evolve, they increasingly take on ‘system’ features. This gives rise to considerably greater unpredictability than in the past, which, in turn, makes it difficult to envisage the future benefits and problems associated with disruptive digital technological innovation.
On the benefits side, the algorithms that are increasingly driving digital services can yield information that helps to mitigate the damage caused by disasters, to protect people in public spaces, to signal health risks and to monitor climate change. The use of algorithm-supported services is enabling companies to boost their profits. New types of risk are commanding global public attention and innovative digital technologies and applications are expected to come to the rescue when, for instance, power grids fail, financial crises worsen, or information leaks occur. These services are also providing citizens with information that supports a politics of resistance to unfair policies and practices.
For some, the Digital World is creating new potentials for making the world a more equitable place.
On the problem side, the innovative business models devised by companies operating in the digital economy are enabling companies such as Amazon to sell products at discounted prices selectively to targeted customers, but this squeezes the margins of independent bookstores and large chains. Digital content is rapidly becoming the advertising for paid-for services that aggregate, filter and integrate information that can be sold to a minority of discriminating customers who are willing and able to pay. Public media, including public service broadcasting, are being challenged as they face intense competition from digital platforms which aggregate content and function as gatekeepers. The combination of rapid innovation and asymmetrical power in the marketplace is disempowering various groups through technologically induced unemployment, the rise of criminality, the loss of privacy and, often, the curtailment of freedom of expression. Social and economic inequality is increasing within countries, even as digital connectivity divides are closing with the spread of mobile phones. Automated decision-making systems are commonly used by banks, employers, schools, the police and social care agencies. If they are poorly designed and opaque, they can result in significant harm through discrimination and social marginalization. In Europe, the European Union’s General Data Protection Regulation (GDPR) may help to minimize negative effects by giving citizens a right to an explanation for decisions which rely on these systems, but the regulation has not been tested and the challenges of protecting adults’ and children’s fundamental rights in the digital age continue to grow in all regions of the world.
Adjusting to Digital Disruption
In the wake of all these developments, effort is being devoted to developing visions of equitable and welfare-enhancing information societies. In both wealthy and poor countries, some experts claim that investment in digital technologies is providing opportunities for lower and middle-income countries to leapfrog generations of technology. They are expected to catch up with, and even surpass, the wealthy countries in securing the benefits of digital technology for their societies. Although the Declaration of Principles agreed at the World Summit on the Information Society in 2003 emphasises a ‘common desire and commitment to build a people-centred, inclusive and development-oriented Information Society’ in line with the Charter of the United Nations and the Universal Declaration of Human Rights, a technology-centred approach predominates in the policy and trade literature and in many branches of the academy. Some experts do emphasise that there is no ‘one-size-fits-all’ model of a digitally mediated society, but a homogeneous model persists which downplays the social, cultural, political and economic factors that can lead to highly differentiated outcomes of digital investment. Even when visions of a transformative digitally inspired pathway to the future emerge from multistakeholder deliberation, the underlying assumption is that competitive markets will deliver it, despite the fact that digital service markets do not operate according to the assumptions of perfectly competitive market theory. The prevailing view is that innovation in the digital realm should be left to the marketplace with as little proactive policy intervention as possible.
An exception to this is in the digital skills domain. The skills gap is substantial and there is much debate about deskilling and up-skilling. The direction of digital innovation is affecting the income distributions of populations by replacing humans with machines to accomplish a growing number of tasks, with forecasts varying as to how severe the threat to workers’ livelihoods is and how quickly job displacement will occur. Skilled workers in areas such as artificial intelligence (AI), data management, data quality control and data visualisation are in short supply. Research on digital divides often focuses on up-skilling in technical domains of expertise. Many countries are introducing strategies to boost skills in STEM subjects – science, technology, engineering and maths, including coding. These skills are needed for employment in data analytics, data-driven science and the AI field, but inequality in the digital world cannot be addressed without also paying attention to other determinants of inequality and exclusion.
Inequalities exacerbated by the spread of digital technologies cannot be addressed mainly by increasing the numbers of computer scientists and graduates with specialized technical training. Citizens need to be able to manage information creatively. They need the ability to select information, to disregard irrelevant information and to interpret patterns in information; and these are not technical skills. This feature of the skills deficit is especially important in relation to media content production and consumption where ‘fake’ or ‘false’ news is a growing problem. Online hoaxes are being created for profit and to foment political disruption. Social media content of this kind misleads citizens, it is creating a culture of mistrust and confusion, and there are growing signs of inequality between those who trust the media and those who do not. In principle, anyone can set up a home page but discriminating Internet use depends upon a range of skills to engage in interactive communication, information dissemination and collection, as well as information interpretation. The failure to make significant progress in developing broadly based digital literacies means that people who lack appropriate skills are being progressively marginalized and excluded. They may be excluded by their inability to recognize the value or usefulness of digital services or because they do not realize how services can be used in socially or economically productive ways.
Digital illiteracy is a growing problem. There are tools for filtering and censoring information, but when children and adults cannot discern the difference between an ad or ‘fake news’ and reliable news, the foundational assumptions of civic participation in the polity are challenged. In the United Kingdom, research shows that only 25 per cent of 8- to 11-year-olds can understand the difference between an advertisement or a sponsored link and an ordinary post in social media, while some 33 per cent do not know how to tell the difference. Just under half of 12- to 15-year-olds and only six in ten adults could tell the difference.1 Researchers in the United States tested students across the country, also finding that relatively few could distinguish an ad from a news story or information from a political lobbying group. They concluded that ‘we worry that democracy is threatened by the ease at which disinformation about civic issues is allowed to spread and flourish’.2
Citizens need to be able to manage information creatively. They need the ability to select information and to interpret patterns in information; and these are not technical skills.
The burden of responsibility and the costs of engaging in technologically convergent, digitally mediated societies are falling increasingly upon individuals. Digital technologies and platforms are creating opportunities for direct and intermediated relationships between companies and customers (and between governments and citizens) and price comparisons can be done on a global basis. Competition among the biggest platform operators may be creating variety and choice for some, but the risk is that these developments are excluding the disadvantaged or encouraging their inclusion on terms that are less favourable than they are for the well off.
Next Generation Technologies and Futures
With advances in computational power and the spread of digital applications, even those who are included and do acquire skills appropriate to the digital world face a problem: their inclusion is compromised if it also results in a loss of control over their lives. The aim in the AI field for many years has been to automate human intelligence. Contemporary examples of ‘intelligent’ technologies are the augmented soldier and the digitally enabled consumer. The commitment is to code algorithms that ‘reason’ about reliability and honesty. The automation of everyday life, in the form of the Internet-of-Things or of advanced robotics, is often depicted in the popular literature as signifying progress with the promise of a better life for all, and, ultimately, a reduction in social and economic inequality.
Existing means for governing innovation in the digital technology field are not well positioned to tackle fundamental questions about the kinds of information societies that are desirable, in contrast to those that might be possible. Discussions about a better future usually privilege expectations about the benefits of the existing digital technological innovation pathway. The potential economic value of achieving these expectations sooner, rather than later, means that policies to mitigate problems with the current pathway are introduced only with caution and after evidence of harm has been collected. Measures that might address social and economic inequality and the potential for the loss of human authority over advanced digital information processing systems are often seen as damaging to the pace of innovation and the market. Nevertheless, it is crucial to ask whether the pathway towards technological systems that transform machine–human relationships is consistent with human flourishing in the sense that people should be able to engage in ‘a kind of living that is active’;3 a kind of living where human values such as altruism, solidarity and dignity are respected. If this kind of living is to be secured in the long term, the pathway towards algorithmic or calculative information societies with reduced human authority must be averted.
Manuel Castells noted the large gap between our ‘technological overdevelopment’ and our ‘social underdevelopment’ in the late 1990s and this gap continues to widen.4 A narrowing of this gap requires consideration of alternative pathways for the future of digitally mediated societies, but contemporary debate focuses principally on how to ensure the public right to access information, freedom from undesirable surveillance and the protection of individual privacy using present technologies which are available in the market. In work aimed at unpacking the digital black box, research is focusing on the impact of advanced computational systems on social sorting and discrimination, on whether people who are active online are aware of these systems and their biases, and on whether those who operate the systems are accountable to a ‘higher authority’ when something goes wrong. However, with the development of AI and its applications and its strong prospects as an economic growth industry, there is a fascination with the quantifiable, with data, and with ever more accurate predictions of human and non-human behaviour. In industry, the goal is to provide assurances that, whatever the biases of computational systems and learning machines, research and development is conducted with the aim of keeping human beings safe, happy, and potentially, wealthier. The challenge for policy makers is to determine not only whether contemporary digital systems are exploitative or liberating, inclusive or exclusive, but also whether innovation is moving along a pathway where technical systems will become the main drivers of societal outcomes and, increasingly, negate human agency.
Social Imaginaries and Counter-Worlds
Assessments of this kind require that we think beyond short term management strategies and business models to consider the ‘deeper normative notions and images’ that are widely held about how a given society is, and should be, organized. The way people make sense of the world in which they live, the values they privilege and their preferred pathways to the future are crucial determinants of long-term outcomes. As philosopher Charles Taylor demonstrates, these notions and images can be treated as social imaginaries that influence collective practices in a society. It is these imaginaries that give rise to the stories that people tell themselves about likely technological developments and their consequences.5 Taylor notes that, historically, there have always been competing social imaginaries that make claims about the way authority and accountability should be constituted in society.
In the contemporary period, the dominant social imaginary about the role of digital systems in society privileges rapid innovation and the diffusion of technologies that exhibit some degree of ‘emotional’ intelligence. This imaginary encourages the processing and interpretation of larger and larger quantities of digital information and massive increases in the technological capacity to produce, process, distribute, and store information. In this imaginary, it is necessary to adjust to shocks to the social, economic, cultural and political order as a result of rapid technological innovation.6 The assumption is that a ‘higher authority’, for example, the state, the business sector or the customer, has control over the outcomes of the innovation process. This social imaginary underpins programmatic visions of scientific research, engineering and mathematics that focus on feedback systems and automation as control systems for both military and non-military digital applications. Efficient markets and individual choice are assumed to guide changes in the digital system. The interaction of the prevailing social imaginary with manifestations of the power of digital platform companies that operate in highly concentrated markets means that these companies play the role of intermediaries with the capacity to block or filter digital information and to process customer data. Their financial strength gives them a near monopoly, substantial decision making power and the ability to influence whether and how they are regulated. According to this imaginary, sometimes residual factors skew the trajectory of change in unexpected ways, but the possibility that technological progress might be harmful for human beings is not part of this imaginary.
In a less prominent, but still very influential social imaginary, the ‘higher authority’ is assumed to be the collaborative, non-hierarchical or heterarchical, authority organized by decentralized networks of actors. This imaginary is inspired by a commitment to open digital systems, open access to information, minimal restraints on freedom of expression and the preservation of privacy, but it is, nonetheless, also dependent on increasingly sophisticated computational systems and AI applications. The social imaginary that underpins this vision of the technological innovation pathway is generally assumed to favour commons-based production, transparency, and the capacity for human authority in the digital world.
In both these social imaginaries, however, the imagined (or real) ‘higher authority’ is a human being and, in this regard, the pathway of technological innovation is not, or only very rarely, questioned. In the first social imaginary, the emergent properties of a complex digital system are expected to yield positive outcomes for individuals mediated by market, and occasionally, by government intervention. In the second, it is the generative activities of technology designers and online participants that are expected to achieve these outcomes. In both cases, the role of the engaged digital world participant is to search, tag and review data, with a ‘higher authority’, the corporate, state, and/or civil society governance mechanisms, being charged with taking actions based on the results of data analysis. It is assumed that these decisions will align with interests in commercial gain and with fairness and justice, at least, in the long term. Both the dominant and the subordinate social imaginaries are of a digital environment that augments human–machine, machine–machine and human–human relationships, in each case, with manageable risks to human beings.
Shaping the Digital Pathway
Extending the range of futures that can be imagined requires a proactive agenda with a view to guiding the digital technology innovation pathway. The prevailing social imaginaries, which assume a ‘natural’ trajectory of innovation, are being called into question. For instance, Luc Soete asks ‘could it be that innovation is not always good for you?’7 He suggests that rather than a mainly beneficial process of Schumpeterian creative destruction relying on a continuous process of technological innovation, in the contemporary period, we are witnessing a period of ‘destructive creation’.
Governance arrangements influence the kinds of societies that emerge; they inform perspectives on the fundamentals of life, the quality of life that people ought to be entitled to, and whether, by any measure, societies are inclusive, respectful, and enabling for all. Algorithms can be understood to govern because they structure understandings of future possibilities. When the results they produce are treated as if they are certain, the capacity to think about alternative worlds and technological development pathways is discouraged. When we rely principally on the dominant social imaginary, individuals are assumed to be subject to the choices of the large digital platform operators or the state. When we rely on the second social imaginary and the generative power of globally distributed online communities, there is no guarantee that outcomes will be equitable or harmless because online movements are not always benign. In both instances, it has been assumed until fairly recently that humans are the ‘higher authority’ and that they are retaining control of the computational systems.
In the digital environment of today, for the most part, when the economy or polity is understood through the algorithmic lens of visualizations of risk maps and scores and flags, it is still usually a human being or a group of individuals who take a decision to act. Asymmetrical power distributions mean that those with the power to act are more often than not the military, another branch of the state, or large companies, but they also can include online activist groups organized through social protest movements. Together with technology designers who embed values in the digital system, these actors are making choices about the pathway towards our future digital environment. Yet, if the quantification of everything means that societies are at risk of becoming humanly ungovernable, then the notion that the quantification of life, enabled by sophisticated AI systems and applications, is synonymous with the best interests of human beings needs to be reassessed. Alternative outcomes may be possible, but only if a different social imaginary starts to become prominent and to shape choices about equity and justice in the present, and also about whether the ‘higher authority’ should continue to be human beings.
Historically, the digital technological innovation pathway did not unfold in a linear way, although it is sometimes assumed to have been a ‘natural’ progression from analogue to digital, from segmented industries specialized in telecommunications or computing, to converged industries and technologies which comprise today’s digital platforms and services. Organized in opaque webs of modular technologies and complex hierarchical, horizontal and diagonal linkages, digital applications, including robots, supported by algorithms and machine learning, are expected by industry and government leaders to raise income levels. Some civil society activists expect these applications to underpin successful protest movements. But again, their aspirations are predicated on movement along a singular technological innovation pathway.
It has been assumed until fairly recently that humans are the ‘higher authority’ and that they are retaining control of the computational systems.
The view that digital technologies will offer solutions to societal problems is a common theme. When the focus is on the diffusion of innovations and on the competitive dynamics of digital platforms and services, the second- and further-order effects that give rise to uncertainty and outcomes that cannot be anticipated or easily modelled make it seem as if the only alternative is to exploit the technological pathway that appears most likely to lead to economic gain in the short and medium term. Innovation and creative destruction historically have been the features of the economy that generate economic growth, productivity gains, and improved social welfare. This leads to a ‘wait and see’ attitude whereby human actors adjust to disruptive technological change in the short term. The market, the state and/or civil society are expected to deliver ameliorative responses to disruption and the after-effect of technological change is assumed to be positive. Any necessary adjustments are seen as being largely spontaneous, leading to claims that ‘we know that gains in productivity and efficiency, new services and jobs, . . . are all on the horizon’.8 The dominant policy orientation is toward stimulating economic competitiveness based on the premise that, if a country does not achieve a leadership position in emerging fields of technological innovation such as machine learning and AI, another country will. The consequence of this is an overwhelming emphasis on ex post policies that aim at influencing company strategies after technologies have reached the market.
Some government, industry and civil society actors are starting to acknowledge that the fourth industrial revolution ‘will fundamentally alter the way we live, work, and relate to one another’ in the wake of the ubiquity of digital sensors, AI and machine learning, and the way they are being combined with the physical and biological world.9 Change is happening at great, even transformative, speed and citizens, public officials, and business leaders find it difficult to understand advanced digital computational systems. In the short and medium term, digital technology applications do have a great potential to tackle social and economic inequality and global socio-technical challenges and it is entirely reasonable for actors to seek to maximize the benefits of technological innovation. But the prevailing social imaginaries are persistent and they foster the view that connecting the unconnected to achieve inclusive information societies, combined with enhancing technical digital literacies and marginal interventions in the market to respond to threats, are sufficient responses. Even in the context of the two most prominent social imaginaries, adults and children need a wide range of digital literacies if they are to learn to navigate effectively in the digital environment.
In addition, if the long-term technological trajectory is towards a digital world that is incompatible with maintaining the rights and freedoms that many countries value, including accountable democracy, it is essential to promote debate about counter-worlds or alternative pathways and the changes that would be needed to achieve them. Insofar as more digitally enabled benefits mean fewer opportunities for human beings to exercise control and authority in their lives, it is essential to challenge the view that adapting to whatever is produced in the laboratory is the only option. The prevailing social imaginaries make it difficult to conceive of alternative digital technological pathways, but it is not impossible. Advances in AI and machine learning applications are triggering consultations on ethical frameworks that discuss matters of human dignity, freedoms, equality, solidarity, and justice. In these forums questions are consistently being raised about how to ensure that digital technology systems will not be harmful to humans, but the policy stance remains strongly oriented to ex post intervention.
Policies are needed to improve skills, address market failures, limit harms and reduce inequality, but the larger issues raised by the encroachment of AI and machine learning should not be left to business and the market, to the state or to civil society actors, on their own. The greatest need is to secure a robust multistakeholder dialogue that enables a consideration of the ‘deeper normative notions and images’ which sustain the widespread belief that the overall direction of technological change is, in fact, consistent with human autonomy and flourishing. It is important to recognise that the direction of technological change was not inevitable historically and it is not inevitable now. The discourse around technological inevitability and adaptation to secure industrial economic competitiveness is deeply entrenched as is the view that civil society, without the aid of formal institutions, can be depended upon to generate outcomes consistent with achieving equity. If ‘destructive creation’ is indeed the likely outcome of the digital technological innovation pathway, then action is needed before it is too late and there is no opportunity to turn back because human autonomy has been compromised. It is necessary to reveal the norms and power dynamics of ‘governance by social media’ or ‘governance by infrastructure’,10 in the current period, but this needs to be coupled with much greater attention to actively fostering social imaginaries consistent with human beings remaining in an authoritative and accountable position in relation to technology.
The immersion of all the human actors as stakeholders in what Chantal Mouffe regards as forums that allow for agonistic confrontation11 is one way to stimulate the necessary discussions. In confrontations of this kind, alternative, and often oppositional, social imaginaries could be debated. The goals and values that should govern choices about technological innovation pathways could be assessed in this way. As long as the principal assumption underpinning the social imaginaries is that the optimal organisation of societies is through greater opaque computational complexity, this view will continue to be internalised, limiting the capacity of all actors to imagine alternatives. A dialogue, even if adversarial, is needed about what human beings will do in their lives in the future and about how, by whom or by what ‘higher authority’ people’s life chances will be established.
Debate and the ensuing controversy about human authority in the digitally mediated world is likely to produce one or more new hegemonic social imaginaries, potentially including some which could inculcate values and lead to decisions that encourage a future in which human beings retain authority and in which social and economic inequalities and harms are addressed more effectively than they are today. This is likely to require greater proactive or ex ante intervention in the market than is countenanced by those informed mainly by one or other of the two social imaginaries discussed earlier. As Raymond Williams put it, ‘once the inevitabilities are challenged, we begin gathering our resources for a journey of hope’.12 The digital world, ultimately, may be constructed in a way that favours equity and inclusiveness, but also, and crucially, in a way that values human beings retaining mastery over their destiny.
1 Sonia Livingstone, Kjartan Ólafsson, and George Maier (2017) ‘If Children Don’t Know an Ad from Information, How Can They Grasp How Companies Use Their Personal Data?’, 18 July, LSE Media Policy Project blog, http://tinyurl.com/ya26w9so.
2 Stanford History Education Group (2016) ‘Evaluating Information: The Cornerstone of Civic Online Reasoning’, report with support of the Robert R. McCormick Foundation, 21 Nov., http://tinyurl.com/h3zneuz.
3 Martha C. Nussbaum (2012) ‘Who is the Happy Warrior? Philosophy, Happiness Research, and Public Policy’. International Review of Economics 59(4): 335-361, p. 342.
4 Manuel Castells (1998) The Information Age: Economy, Society and Culture Volume III: End of Millennium. Oxford: Blackwell, p. 359.
5 Charles Taylor (2004) Modern Social Imaginaries. Durham, NC: Duke University Press.
6 Robin Mansell (2012) Imagining the Internet: Communication, Innovation and Governance. Oxford: Oxford University Press.
7 Luc Soete (In Press) ‘SPRU’s Impact on Science, Technology and Innovation’, Research Policy.
8 House of Commons (2016) Robotics and Artificial Intelligence. London: House of Commons Science and Technology Committee, Fifth Report of Session 2016-17, para 36.
9 Klaus Schwab (2016) ‘The Fourth Industrial Revolution: What it Means, and How to Respond’. World Economic Forum, 14 Jan., para 1, http://tinyurl.com/hlah7ot, and (2017) The Fourth Industrial Revolution. London: Portfolio Penguin. For Schwab the 4th industrial revolution follows the 1st, 1760-1840 (railroads and the steam engine), the 2nd in the late 19th and early 20th centuries (mechanical production), and the 3rd, from the 1960s to around 2010 (computerization). There are different periodisations in the literature. For Chris Freeman and Francisco Louça in As Time Goes By: From Industrial Revolutions to the Information Revolution. Oxford: Oxford University Press, 2001, digital technologies constitute the 5th techno-economic revolution. For Erik Brynjolfsson and Andrew McAfee in The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York: W. W. Norton & Co, 2014, the contemporary period is the second machine age.
10 Laura DeNardis and Andrea M. Hackl (2015) ‘Internet Governance by Social Media Platforms’. Telecommunications Policy, 39: 761-770, and Laura DeNardis and Francesca Musiani (2016) ‘Governance by Infrastructure’, in Francesca Musiani, Derrick L. Cogburn, Laura DeNardis and Nanette S. Levinson (eds) The Turn to Infrastructure in Internet Governance (pp. 3-21). New York: Springer Link.
11 Chantal Mouffe (2013) Agonistics: Thinking the World Politically. London: Verso Books.
12 Raymond Williams (1983) Towards 2000. London: The Hogarth Press, p. 268.