Our society has changed dramatically thanks to, among other factors, the emergence of consumer electronics and the omnipresence of computers, made possible by the constant improvement in their performance and the steady drop in their price. For the last 40 years, the capacity of integrated circuits has doubled every two years, following an empirical rule known as Moore’s Law. However, several factors have now conspired to bring this rule to an end. What economic and social consequences will this have?
Moore’s Law has guided technological development since its formulation, but especially since 1991, when the industries involved in Europe, the USA, Japan, Taiwan and Korea organized themselves to coordinate the complex process of producing integrated circuits, or chips. Their objectives included avoiding supply disruptions and ensuring that processor development followed Moore’s Law, so that chips could contain an ever greater number of transistors. In other words, the rule has been self-imposed by the industry. As a result, chips have gone from little more than two thousand transistors in the Intel 4004 of 1971 to more than 1.7 billion in the Skylake, in use since 2015. These changes have caused the size and price of computers to fall while computing power has skyrocketed, promoting the advent of consumer electronics and a change in many social customs.
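A quick back-of-the-envelope check, using only the two figures quoted above (roughly 2,300 transistors in the Intel 4004 in 1971 and 1.7 billion in the Skylake in 2015), shows how closely this growth tracked Moore’s two-year doubling rule:

```python
import math

# Figures from the text: Intel 4004 (1971) ~2,300 transistors,
# Skylake (2015) ~1.7 billion transistors.
doublings = math.log2(1.7e9 / 2300)            # how many times the count doubled
years_per_doubling = (2015 - 1971) / doublings  # average doubling period

print(f"Doublings since 1971: {doublings:.1f}")        # ~19.5
print(f"Years per doubling:   {years_per_doubling:.1f}")  # ~2.3
```

The count doubled about once every 2.3 years over 44 years, remarkably close to the two-year cadence the industry imposed on itself.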
Is this the end of Moore’s Law?
For some years this technological progress has been running into its first barriers. First there are physical factors: the amount of heat generated by the circuits; the size of the connections, which could shrink to just 10 atoms; and quantum effects, since the latest-generation transistors measure about 100 atoms across, and as they shrink further, reliably switching between the zeros and ones that are the basic units of computation becomes harder. There is also an economic factor, the cost of development: a production line for a new generation of chips costs about $7 billion, a figure estimated to reach $16 billion over the next decade. That is an investment few manufacturers can afford, especially without guaranteed sales to sustain profits.
Several solutions have been proposed to counter these problems, some of them already implemented. First, the speed of the internal clocks was capped in order to produce less heat, as shown in the attached diagram. At the same time, chips with several cores were produced. In principle it is possible to create integrated circuits with up to 1,000 cores, but distributing tasks so that the cores compute simultaneously becomes increasingly complicated, and some tasks cannot adapt to this strategy at all. More innovative solutions may appear in the future: from quantum computing, which has great potential but its own limitations (the Spanish researcher Juan Ignacio Cirac received the 2006 Prince of Asturias Award for his work in this field), to neuromorphic engineering, inspired by the brain and its neuronal structure. Replacing silicon with materials such as graphene, or moving from two-dimensional sheets to 3D structures, is also being considered. Or something even more innovative: instead of moving currents of electrons, exploiting one of their basic properties, their spin, which could be described as the way they turn. In any case, these strategies involve a radical change in the way memories and processors are designed and produced.
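The diminishing return from piling on cores is usually captured by Amdahl’s law, a standard formula not named in the text: if a fraction p of a task can run in parallel, the speedup on n cores is 1 / ((1 − p) + p/n). A minimal sketch, with an illustrative parallel fraction of 95%:

```python
def speedup(p, n):
    """Amdahl's law: overall speedup of a task whose parallel fraction is p,
    run on n cores. The serial fraction (1 - p) never gets faster."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelizable, 1,000 cores help far less
# than one might hope: the serial 5% ends up dominating the runtime.
for n in (4, 64, 1000):
    print(f"{n:5d} cores -> speedup {speedup(0.95, n):5.1f}x")
```

With p = 0.95 the speedup can never exceed 20×, no matter how many cores are added, which is why tasks with a large serial component "cannot adapt to this type of strategy."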
The change is mainly driven by large buyers, companies like Amazon and Google, which gives them great influence over the evolution of the entire process, and by the development of new applications for users: smartphones, tablets and games based on consumer electronics. It also affects the development of so-called cloud computing, where resources and data do not reside physically in the consumer’s device. Another essential aspect is product specialization, such as integrated circuits dedicated to graphics, as opposed to today’s generic chips, which are very versatile and manufactured by the hundreds of millions. Development costs alone, excluding manufacturing, can far exceed tens of millions of dollars, which poses an additional problem: a large investment to create low-demand processors. Other key aspects are interoperability between different elements and the need to reduce energy consumption, even to the extreme of “drinking” energy from the environment. As a result, in the future there will be billions of elements communicating with each other without human intervention. The first examples are already here: a washing machine that starts by itself, or food heated in the microwave oven because our cell phone or the car’s GPS reveals that we are on our way home.
What are the implications for us?
We can no longer conceive of life without electronics around us and the constant improvement of the capabilities our various gadgets offer. The social impact of new technologies has been immense, and the change has occurred in just a few years. Nevertheless, this technological “fluidity” could slow down, as one of its main drivers, Moore’s Law, will soon cease to be followed. Only new solutions that are viable from both a technological and an economic standpoint can sustain the development we have become used to in recent decades.
If this is the case, new habits based on a highly technological society and an electronic meta-world, operating without direct human intervention, will be the new guides for technological development. There will, however, be a price to pay: in the process we will lose control of our computing capacity and, worse still, of our personal data.
David Barrado Navascués
European Space Astronomy Center (ESAC, Madrid)