16 May 2023

Will We Be Able to Control Artificial Intelligence?

Estimated reading time: 3 minutes

Mathematicians John von Neumann and Irving John Good discussed early notions of the technological singularity, the moment when technology would forever change human life as we know it. Other authors such as Vernor Vinge and Ray Kurzweil popularised the singularity associated with Artificial Intelligence (AI), the creation of a superintelligence that would drastically alter civilisation and potentially destroy it. If the day comes when this threat is realised, will we be able to keep AI under our control?

Language models such as ChatGPT, and image generators such as DALL-E or Midjourney, have raised alarm bells when used for malicious or illegitimate purposes. Credit: Pixsell / Alamy Stock

From its tentative beginnings in the 1940s and 1950s, AI has grown to the point where it is now present in countless technological tools that we use every day in our mobile phones, computers and voice assistants. However, it is language models such as ChatGPT, or image generators such as DALL-E or Midjourney, that have raised alarm bells when used for malicious or illegitimate purposes. But experts disagree on when we might see so-called general or strong AI, the superintelligence capable of any task and which could bring about the singularity.

Many voices have warned of the risk of such futuristic systems causing a catastrophe of incalculable proportions. In 2014, physicist Stephen Hawking warned on the BBC that AI “could spell the end of the human race”. At the time, tech magnate Elon Musk called AI “our biggest existential threat”. The following year, the two joined more than 150 academics and experts in signing an open letter through the Future of Life Institute urging research into “how to reap its benefits while avoiding potential pitfalls”. 

The challenge of regulation

In the current race to create ever more powerful systems, in 2023 and through the same institution, more than 27,000 signatories called for a moratorium of at least six months on the training of AI systems more powerful than GPT-4, the latest model behind ChatGPT. The signatories urge AI labs and independent experts to “use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.” They also call on developers to work with agencies and governments to implement safety regulations.

Many voices have warned of the risk of such futuristic systems causing a catastrophe of incalculable proportions. Credit: Getty Images

Regulation is now the main battleground for limiting the dangers of AI. Italy blocked the use of ChatGPT for weeks until the company OpenAI ensured compliance with privacy regulations. Privacy is one of the authorities’ concerns to be addressed by the European law on AI, the first from a major regulatory body in the world, which will define different levels of risk in AI applications. The Global Partnership on Artificial Intelligence, launched in 2020 at the initiative of the G7, currently has 29 member countries, and bodies such as the United Nations and the OECD are working on their own regulatory frameworks.

Regulation is now the main battleground for limiting the dangers of AI, hence initiatives such as the European law on AI and the Global Partnership on Artificial Intelligence. Credit: John Lund/Getty Images

But given that companies have shown no intention of putting the brakes on their developments, will regulation be enough? In the film WarGames (1983), an AI-controlled computer threatened to launch nuclear missiles. The screenwriters explained that the machine could not simply be unplugged, because the missile silos would interpret the shutdown as the destruction of NORAD and respond by launching. In more recent fiction, such as the latest instalments of the Terminator saga, the problem has been updated: it’s not one machine, it’s all of them—a network. We may have laws, but will the systems obey them? Will it be possible to instil in their programming Isaac Asimov’s famous laws of robotics, according to which a robot could never harm a human being? Science fiction has arrived in the real world, and this script has not yet been written.

Javier Yanes

@yanes68
