13 June 2019

From AI to Finance and Genetics: The Data Revolution

Estimated reading time: 4 minutes

Data is, unquestionably, the foundation of the world we will know in the future. Big Data, understood as a set of data from which knowledge can be extracted, will allow humankind to know much more about itself and the surrounding world. It will give humans a different, limitless perspective from the standpoint of data engineering. In this context, it isn't difficult to understand what banking, genetic editing, and artificial intelligence (AI) have in common: data is their raw material.

Processing massive quantities of data and converting them into knowledge will facilitate the development of personalized financial services, more precise medicine, and machines that learn automatically. The three fields share a common challenge: the need to adapt the regulatory frameworks, policies, and ethics that underpin how we organize our society to this new ecosystem, specifically in terms of "data governance."

New dichotomies: security, privacy, and personalization

In this context, and with the aim of discussing the direction of scientific and technological advances of the last decade, Carlos Torres Vila, BBVA executive chairman; Ramón López de Mántaras, research professor at the Spanish National Research Council (CSIC) and director of the Artificial Intelligence Research Institute (IIIA); and Samuel Sternberg, associate professor in the Department of Molecular Biophysics and Biochemistry at Columbia University in New York, participated in a forum addressing the question at the heart of the most recent book in the OpenMind collection: do we need a new Enlightenment to reframe our value system and legal structure to fit the reality of our future?

Carlos Torres Vila emphasized the role of artificial intelligence in the transformation of international markets and the monopolistic risk that the development of these technologies could entail, given that "artificial intelligence has economies of scale; better predictions translate into more clients (and more data), which in turn implies better predictions." Data quality and the refinement of the end products we develop are also determining factors, however. For the BBVA executive chairman, "it is clear that the best predictions will be those that have as complete a picture of us as possible, which implicitly represents a threat to our privacy." Doctor López de Mántaras shares this view, and Torres added that we should not lose sight of the fact that "more protectionist societies may benefit their citizens from a privacy perspective, but they could create entry barriers blocking innovation compared to other more permissive communities." The first discussion point, therefore, is the dichotomy between privacy and personalization.

Are we willing to renounce our privacy in favor of made-to-measure products and services?

In the fields of genetics and biotechnology, this question has direct consequences for our health and the very survival of the human species. The gene-editing technology CRISPR, in which Sternberg is one of the world's leading experts, promises to revolutionize medicine and biology by way of "scissors" that enable genetic editing to alleviate diseases and pathologies at their roots. The personalization of treatment as a service also sparks a new debate: should personalized treatment be available to everyone in order to avoid a society further divided by inequality as a result of potential genetic improvements?

In conclusion, to exploit the full potential of data engineering in these three fields, it is essential to develop "a holistic cybersecurity framework that guarantees trust as well as a properly functioning data economy and its application in different scientific fields," according to Carlos Torres.

The myth and reality of present-day artificial intelligence

The end goal of AI, equipping a machine with a type of general intelligence similar to that of humans, is one of science's most ambitious objectives, but it remains outside our grasp. In the words of Ramón López de Mántaras, "AI is still in diapers." To understand this statement, it is necessary to distinguish between strong and weak AI. Strong AI would imply that a properly designed computer would not merely simulate how a mind works but rather "would be a mind" and, therefore, would be capable of an intelligence equal, or even superior, to that of humans.

On this point, López de Mántaras makes it clear that "absolutely every advancement made in the field of AI to date has been an example of weak and specific AI" (insofar as it is applied in a specific field and is not a general mechanism). The progress made in specialized or specific artificial intelligence is indeed impressive, especially thanks to big data and high-precision computing. However, general AI remains a pending challenge because its technological "common sense" is still very far from approaching a human's in generality or depth.

However intelligent future artificial intelligence may become, it will never equal human intelligence.

The mental development that characterizes all complex intelligence depends on interactions with the environment, and these interactions in turn depend on the entity that carries them out. That, together with the fact that machines do not undergo socialization and enculturation processes the way humans do, means that machines' intelligence will remain notably different from ours, removed from human values and needs, no matter how sophisticated they become. This should give us pause, and cause us to think about the potential ethical limitations associated with AI development.

CRISPR-Cas: a technology to save the world?

Since the successful sequencing of the human genome in 2001, a great deal of effort has gone into developing technology that can edit specific genes by making small but very precise modifications in human cells. Despite the spectacular advances, Sternberg maintains that we must be patient: although today we can edit cells and change the genome, translating this capability into complete therapies in living patients, changing our immune system, could still be some years away.

For the time being, this 2012 technology has "gone viral" because of its relatively low cost and simple deployment. This widespread uptake has come with dangerous consequences. At the end of 2018, a Chinese researcher claimed to have used CRISPR to genetically modify two embryos and to have brought them to term, producing the first two genetically modified infants in history. For cases like this, Sternberg advocates the creation of a common regulatory framework, the product of "an international debate in which governments and regulators reach consensus, instead of ending up with a scenario where each country has its own legislation."

The researcher also stresses the moral obligation of ensuring that the potential of CRISPR reaches every corner of the planet. "We must ensure that it is not only within the reach of those who have access to the technology, but that it is used to improve health in society at large." Curing cancer, controlling the mosquito populations that transmit diseases like malaria or Zika, eradicating diseases like HIV, improving the management of crops and agricultural products: these are only some of the applications that CRISPR is hoped to deliver, but there is still a long way to go, both in laboratories around the world and in the regulatory hubs where policy is defined and legislation enacted.

OpenMind Editorial Team

