26 June 2018

The (5) Commandments of Artificial Intelligence

Estimated reading time: 4 minutes

The Cambridge Analytica scandal—the political data analysis company that obtained unlicensed access to the personal data of 87 million Facebook users to help Donald Trump’s 2016 presidential campaign—not only submerged Mark Zuckerberg’s company in a global political storm, it also reopened the debate on the need to regulate the use of Artificial Intelligence (AI). Technology gurus such as Elon Musk—CEO of Tesla and SpaceX, who has claimed that the development of AI “is much more dangerous than nuclear warheads”—have already declared themselves in favour of creating some kind of regulation. Now, politicians and academic researchers are insisting on the same idea.

Cambridge Analytica obtained unlicensed access to the personal data of 87 million Facebook users. Credit: TheDigitalArtist

In the United Kingdom, where Cambridge Analytica was born, the House of Lords mobilized to lead the way, with the aim of preventing other companies from setting precedents for the dangerous and unethical use of technology. Last May the institution published the report AI in the UK: ready, willing and able?, which set out five ethical principles to be applied in all sectors at the national and international level:

- AI should be developed for the common good and benefit of humanity.
- It should operate on principles of intelligibility (technical transparency and explanation of its operation) and fairness.
- It should not be used to diminish the data rights or privacy of individuals, families or communities.
- All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside AI.
- AI should never be given the autonomous power to hurt, destroy or deceive human beings.

A large-scale regulation for robots

In February 2017, the European Parliament became the first institution to propose large-scale regulation of AI, resting on six basic principles aimed especially at robotics:

- Every AI should have an emergency off switch so that it cannot become a danger.
- The technology should not be able to harm a human.
- No emotional links should be created with it.
- Robots will have rights and obligations as “electronic persons”.
- Larger robots must carry compulsory insurance.
- All AIs will pay taxes.

For Timothy Francis Clement-Jones, one of those responsible for the British report, initiatives like this one, together with the fact that the US Congress pressed Zuckerberg over the massive theft of personal data, show that “the political climate in the West is more conducive to seeking a public response” to the security problems posed by technology. “Our goal is not to write the principles directly into legislation, but rather to have them as a guiding beacon for AI regulation,” he explains. “For instance, in the financial services area it would be the Financial Conduct Authority that would be looking at how insurance companies use algorithms to assess your premiums or how banks assess people for mortgages etcetera.”

The European Parliament has proposed a regulation with six basic principles, aimed especially at robotics. Credit: Cheryl Ng

Another concern of the British is the creation of data monopolies, i.e. large multinational companies (the report names Facebook and Google) with such control over the collection of data that they can build better AIs than any other entity, increasing their control over the sources of data and creating a vicious circle in which smaller companies and nations cannot compete. “Basically, we want there to be an open market in AI,” says Clement-Jones. “We do not want to have just five or six major AI systems that you have to belong to in order to survive in the modern world.”

Implications for the development of AI

The case of Cambridge Analytica has shown how the three great players in the world of technology—the US, China and Europe—balance governments’ demands for consumer privacy and security against the push to maximise access to Big Data in order to master AI. “All of these governments are trying to figure out what data governance should be like and that will have implications for research and development in AI,” says Samm Sacks, a researcher in the technology program at the Center for Strategic and International Studies in Washington (USA).

Data analysis that removes people’s identities is an area in which progress is being made. Credit: TheDigitalArtist

“What happened with Facebook sounded the alarm for Chinese regulators and data controllers,” adds Lu Chuanying, a cybersecurity researcher at the Shanghai Institutes for International Studies in China. “If there was a problem for the largest social networking platform in the US, it could also be a problem for the Chinese companies.” Chuanying helped draft the Asian country’s new data policy, which came into force in May and which, he says, falls between those of the US and the European Union in how restrictive it is, owing to competitive concerns over Big Data.

While Europe aims to lead in the ethical use of data, China is striving to wrest from the United States the global leadership in the development of AI: a recent study by the Future of Humanity Institute finds that China currently outperforms the US in AI capacity, although not in access to Big Data.

In the complicated equation of regulating technology without slowing its development, Paula Parpart, founder of the company Brainpool AI, sees a light at the end of the tunnel. The key, according to her, is so-called differential privacy: a way of analysing data in aggregate while mathematically limiting what can be learned about any single person’s identity, an area in which progress is being made. “Another thing is that what people generally call AI is actually machine learning, which uses the brute force of big data to perform tasks,” adds Parpart. “The real AI requires algorithms that, like humans, can learn from one or two examples, instead of thousands,” she clarifies. Greater regulation of privacy could push research in this direction.
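To make the idea of differential privacy concrete, here is a minimal sketch (in Python) of the Laplace mechanism, one standard technique for achieving it; the dataset, function name and epsilon value are hypothetical illustrations, not details from the article or from Brainpool AI:

```python
import numpy as np

def dp_count(values, predicate, epsilon):
    """Differentially private count via the Laplace mechanism.

    A count query has sensitivity 1 (adding or removing one person
    changes the true answer by at most 1), so Laplace noise with
    scale 1/epsilon masks any individual's contribution.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: how many people in a toy dataset are over 40?
ages = [23, 45, 31, 67, 52, 29, 41, 38]
print(dp_count(ages, lambda age: age > 40, epsilon=0.5))
```

Smaller values of epsilon add more noise and give stronger privacy: the analyst trades a little accuracy for a formal guarantee that the released number reveals almost nothing about any one person in the data.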

Greater regulation of privacy would mean reducing the use of big data in research and in the development of new products, which could in turn increase the research effort and the resources allocated to what Parpart calls “real AI”: systems capable of achieving more with less data, so that they do not violate the rights of citizens.

Joana Oliveira

@joanaoliv
