Article from the book There’s a Future: Visions for a Better World

The Past is Prologue: The Future and the History of Science

Given the importance science has in our lives and societies, it is no small question to consider whether it is possible to predict the future of this discipline. Using the history of science as his main tool, Sánchez Ron analyses predictions made by a number of scientists about the nature of science and its future paths. Providing a wealth of examples from those who have practised the soothsayer's art, the author chooses cases that missed the mark entirely, including predictions in mathematics (Hilbert), the theory of evolution (Erasmus and Charles Darwin) and artificial intelligence (Wiener, von Neumann and Turing). The author also explores the relationship between science and technology, as well as addressing issues such as how social needs or science fiction affect predictions of the future of science.

“Whereof what’s past is prologue, what to come

In yours and my discharge”

William Shakespeare, The Tempest 1

We live on the borderline between the past and the future, with the present constantly eluding us like a shadow that fades away. The past gives us memories and knowledge – confirmed or awaiting confirmation: a priceless treasure that shows us the way forward. However, we really do not know where that road will lead us in the future, what new features will appear in it, or whether or not it will be easily passable. Of course, these comments can be obviously and immediately applied to life, to individual and collective biographies: think, for example, of how some civilisations replaced others over the course of history, to the surprise — in most cases — of those who found themselves cornered by the passage of time. Yet they also apply to science, the human activity with the greatest capacity for making the future very different from the past.

Precisely because of the importance of the future to our lives and societies, an issue that has repeatedly emerged is whether it is possible to predict the future, doing so based on a solid knowledge of what the past and the present offer us. If it is important to understand where historical and individual events are headed, it is even more important to do so for the scientific future. Some may wonder, “Why is it more important? Are the life, lives and stories — present and future — of individuals and societies not truly essential? Should we not be interested in these beyond all other considerations?” Admittedly so, yet it still must be stressed that scientific knowledge is a central — irrevocably central — element in the future of these individuals and societies, and, in fact, in the future of humanity.

The (deterministic) Newtonian dream

From this perspective, it is clearly important to be able to predict the future of science. Actually, prediction is the ultimate purpose of science, which seeks to determine the future evolution of phenomena that occur in nature. In my opinion, science is but the development of logical systems with predictive capabilities. The primitive and erroneous science of astrology sought to explain what happens on Earth – including the future lives of individuals – based on the location and movements of the planets. Later, with the publication of the powerful physics of motion developed by Isaac Newton (1642–1727) in his great 1687 work Philosophiae Naturalis Principia Mathematica, it was thought that the determinism underlying the basic equations of Newtonian dynamics would make it possible to determine the evolution of any movement if the baseline data were known. The most categorical and famous statement on this point is the one made by Pierre-Simon Laplace (1749–1827) in one of his books, Essai philosophique sur les probabilités (1814):

“An intelligence that could at a given moment comprehend all the forces by which nature is animated and the respective situation of the beings who compose it, if it were sufficiently vast to submit these data to analysis, would encompass in one single formula the movements of the greatest bodies of the universe and those of the lightest atom; for this intelligence, nothing would be uncertain and both the future, as well as the past, would be present before its eyes.”

Of course, Laplace knew very well that it would not be possible to have all the necessary information, or the capacity of calculation, that would permit fulfilling the deterministic Newtonian programme (“All these efforts to seek the truth tend to always lead it back to the intelligence that we have just imagined, but from which it will always remain infinitely distant”), and it was precisely because of this that he took up the theory of probability.
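In modern notation — mine, not Laplace’s — the determinism he had in mind amounts to the statement that Newton’s second law is a differential equation whose solution is fixed by initial data:

```latex
m\,\ddot{\mathbf{x}}(t) \;=\; \mathbf{F}\bigl(\mathbf{x}(t),\dot{\mathbf{x}}(t),t\bigr),
\qquad \mathbf{x}(t_0)\ \text{and}\ \dot{\mathbf{x}}(t_0)\ \text{given}.
```

For well-behaved forces this problem has one and only one solution, valid towards both the future and the past; that uniqueness is the mathematical content of Laplace’s all-seeing intelligence.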

This Newtonian dream was crushed by quantum mechanics (developed by Werner Heisenberg in 1925 and Erwin Schrödinger in 1926), with the intrinsic probabilism of its basic variable, the wave function (whose probabilistic interpretation was proposed by Max Born in 1926), and with the uncertainty principle (revealed by Heisenberg in 1927); as well as by chaos (Edward Lorenz 1993), with the sensitive dependence of the solutions of chaotic systems on small changes in initial conditions.
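Schematically, and again in modern textbook notation rather than that of the original papers, the two obstacles can be written as

```latex
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}
\qquad\text{and}\qquad
|\delta(t)| \;\sim\; |\delta(0)|\,e^{\lambda t}\quad (\lambda > 0).
```

The first inequality (Heisenberg’s) limits how precisely position and momentum can be known simultaneously, so the baseline data can never be complete; the second says that in a chaotic system with positive Lyapunov exponent λ, an initial error δ(0), however small, grows exponentially with time.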

Interesting and fundamental as these considerations are, they are not the subject of the topics addressed in this book, nor are they what I wish to analyse here. The future of science I plan to address is the future in the forecasts that scientists have made about the future contents and directions of science.

Imagining the future

Let us start by observing that it is not hard to find people who, in the past, made predictions about the scientific future. The English clergyman Francis Godwin (1562–1633) wrote prophetically in a book published posthumously (Godwin 1638), “You shal then see men to flie from place to place in the ayre; you shall be able, (without moving or travailing of any creature,) to send messages in an instant many Miles off, and receive answer againe immediately; you shal bee able to declare your minde presently unto your friend, being in some private and remote place of a populous Citie.” Then there was Christopher Wren (1632–1723), who, though famous as an architect, was first an eminent astronomer. In his 1657 keynote address as the new Professor of Astronomy at Gresham College in London, he predicted that the “Time would come, when Men […] should be able to discover Two thousand Times as many Stars as we can; and find the Galaxy to be Myriads of them; and every nebulous Star appearing as if it were the Firmament of some other World, at an incomprehensible Distance, bury’d in the vast Abyss of inter-mundious Vacuum: That they should see the Planets like our Earth, unequally spotted with Hills and Vales: that they should see Saturn […] changing more admirably than our Moon” (Wren 1750, 200–206).

Both Godwin and Wren were correct (although Godwin was not correct about thought transmission, or at least, not so far), though it would be some time before their predictions came true. However, these kinds of predictions are not what interest me here, since they are easy to imagine, having been made many times in the past. Predictions about space — often better described as precursory musings or the stuff of science fiction — have been common throughout history, especially in terms of flying to the Moon. Lucian of Samosata (c. AD 125–195) imagined a trip to the Moon and the Sun on a flying boat propelled by nothing but “whirlwinds” (García Gual 2005). Even one of the protagonists of the Scientific Revolution, Johannes Kepler (1571–1630), who devised the three laws of planetary motion that bear his name, dreamt up a trip to the Moon, with the traveller transported by lunar demons, although his true purpose was to describe what an observer on the Moon would see of our planet. In this respect, Kepler’s dream (Somnium, published posthumously in 1634) was more in line with the best scientific standards than the possibilities imagined by Francis Godwin or Lucian of Samosata, or even those of the Perpetual Secretary of the Académie des Sciences in Paris, Bernard le Bovier de Fontenelle (1657–1757), in his book Entretiens sur la pluralité des mondes, published in 1686, in which he considered the possibility of extraterrestrial life on other planetary worlds.

More interesting than these kinds of predictions are others made in the nineteenth century, when the belief was still held — as we shall see — that all physical phenomena would one day be explained through the pillars of Newtonian physics, even though Newtonian physics had yet to be applied to the other great natural forces known at the time: magnetism and electricity. Initial success was seen in laws such as that proposed in 1785 by the French physicist Charles-Augustin de Coulomb (1736–1806), which extrapolated the law of universal gravitation to the domain of electricity (or better, electrostatics). His law asserted that the force between two charges is proportional to the product of their values divided by the square of the distance between them. However, electromagnetic phenomena could only be explained by going beyond the Newtonian model and using a different theoretical framework: electrodynamics, developed in the 1860s by the Scottish physicist James Clerk Maxwell (1831–1879), centred on fields as opposed to Newtonian action at a distance.
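In modern notation, Coulomb’s law reads

```latex
F \;=\; k\,\frac{q_{1}q_{2}}{r^{2}},
```

formally identical to Newton’s law of gravitation, F = G m₁m₂/r², with charges in place of masses — which is precisely the extrapolation described above.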

Prediction is the ultimate purpose of science, which seeks to determine the future evolution of phenomena that occur in nature. In my opinion, science is but the development of logical systems with predictive capabilities.

Unexpected futures

With the Maxwellian synthesis completed in the late nineteenth century, physicists increasingly believed that with Newtonian dynamics and Maxwell’s electrodynamics, the theoretical basis for describing nature was indeed complete. Thus, the outstanding American physicist Albert Abraham Michelson (1852–1931; he became the first American to receive the Nobel Prize for Physics in 1907) apparently made the following remarks during a speech on 2 July 1894 at the inauguration of the Ryerson Physical Laboratory at the University of Chicago:1 “It seems probable that most of the grand underlying principles have now been firmly established and that further advances are to be sought chiefly in the rigorous application of these principles […]. The future truths of physical science are to be looked for in the sixth place of decimals.”

In 1895 — one year after Michelson’s categorical and, ultimately, erroneous words — Wilhelm Röntgen discovered X-rays and one year later Henri Becquerel discovered radioactivity, which no one knew how to make fit into the seemingly strong, solid and closed construction of known physics, which we now call “classical physics.” Ultimately, prediction is risky. In the same vein, let us take a look at some scientific discoveries in physics that were genuine surprises.

The first is the discovery of the expanding universe. In November 1915, Albert Einstein (1879–1955) completed the relativistic theory of gravity — the General Theory of Relativity — that he had been pursuing for years. He then decided to apply it to the universe as a whole, to build a relativistic cosmology. Faced with the issue of finding a solution for the gravitational field equations that represented what he imagined the universe to be, he assumed that it was static and that matter was uniformly distributed in it. It is well known that his assumption that the universe was static forced him to modify these field equations for general relativity by introducing a cosmological constant, but what I would like to emphasise now is that he did not consider the possibility that the universe might not be static. Nor was such a possibility seriously considered by the Russian mathematician and physicist Aleksandr Friedmann, the American mathematician Howard Robertson or the English mathematician Arthur Geoffrey Walker — who found solutions for the relativistic field equations that implied expanding universes. They all believed that these mathematical solutions were not in line with physical reality. The only one who did take it seriously was the Belgian priest and physicist Georges Lemaître (1894–1966), in an article entitled “A homogeneous universe of constant mass and increasing radius, which explains the radial velocity of extra-galactic nebulae,” published in 1927 in the journal Annales de la Société Scientifique de Bruxelles. But he received neither the support nor the attention of his colleagues.
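In their modern form, the modified field equations are the following, the cosmological constant being the term in Λ:

```latex
R_{\mu\nu} \;-\; \tfrac{1}{2}\,R\,g_{\mu\nu} \;+\; \Lambda\,g_{\mu\nu} \;=\; \frac{8\pi G}{c^{4}}\,T_{\mu\nu}.
```

Chosen appropriately, the Λ term supplies the repulsion needed to hold a static universe against gravitational collapse — the very assumption that subsequent discoveries would undo.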

It was the American astronomer Edwin Hubble (1889–1953) who discovered the now-famous expansion of the universe, a finding he published in 1929. Thus, this phenomenon was not predicted, even though there was a theoretical basis that would have permitted it. Instead, it was discovered through observation. The same can be said about other astronomical discoveries, such as those of pulsars (Jocelyn S. Bell 1967) and quasars (from “quasi-stellar source”), radio sources revealing a large redshift whose existence was confirmed in the early 1960s. By contrast, black holes were predicted, by analysing a solution for the equations of general relativity (the Schwarzschild solution), even if many scientists doubted their existence (Newtonian equivalents had been proposed — and quickly forgotten — long before, first by the British astronomer John Michell in 1783, and then by Laplace in 1795). Some time passed before their existence was confirmed through observation, but this has happened in the twenty-first century. Binary systems have been found, one of whose members appears to be a black hole: V404 Cygni, consisting of a star with two thirds the mass of the Sun and a black hole of some 12 solar masses. Another major surprise has been the observational discovery that only about 5 percent of the universe consists of ordinary matter, while some 27 percent is an unknown type of matter (called “dark matter”) and 68 percent is a form of energy that is also unknown (“dark energy”).
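In its standard modern formulation, Hubble’s finding mentioned above is the simple proportionality

```latex
v \;=\; H_{0}\,d,
```

where v is the recession velocity of a galaxy, d its distance and H₀ the Hubble constant.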

We cannot discuss the possibility of predicting the scientific future without observing that no one predicted quantum mechanics, the branch of physics behind much of the current technologised world. And this should not be surprising, since it is a theory whose foundations include elements as surprising as: (a) the description of physical objects by means of wave functions (defined over the field of complex numbers) whose squared modulus represents not the subsequent history of the object in question, but the probability that it follows one particular history; (b) the collapse of the wave function (through the act of measuring/observing, one particular part of the wave function — that is, of the reality that unfolds — is selected), and (c) Heisenberg’s uncertainty principle.
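Point (a), the Born rule, can be stated as follows for the textbook case of a single particle in one dimension:

```latex
P(x,t) \;=\; |\psi(x,t)|^{2},
\qquad
\int_{-\infty}^{\infty} |\psi(x,t)|^{2}\,dx \;=\; 1,
```

where P(x,t) is not the particle’s trajectory but the probability density of finding it at position x at time t.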

In recent times — especially in certain areas of theoretical physics — it is not unusual to find articles discussing the future, speculating about it. This tendency was reinforced by the inaugural lecture given by Stephen Hawking (b. 1942) on 29 April 1980 as the new Lucasian professor. The shrewd (at least in terms of publicity) physicist gave the talk the attractive title Is the End in Sight for Theoretical Physics? Similarly, in a chapter entitled “Predicting the Future” in his book The Universe in a Nutshell, Hawking (2001) wrote: “The human race has always wanted to control the future, or at least to predict what will happen. That is why astrology is so popular.”2 It is therefore not surprising that when a group of physicists decided to celebrate Hawking’s sixtieth birthday, they chose the theme “The Future of Theoretical Physics and Cosmology” (Gibbons, Shellard and Rankin 2003). Indeed, the book that resulted from this gathering contained chapters such as: “Our Complex Cosmos and its Future” (Martin Rees), “The Past and Future of String Theory” (Edward Witten) and “The Future of Cosmology: Observational and Computation Prospects” (Paul Shellard).3

One modern-day scientist who has devoted part of his time to predicting the scientific future is the high-energy physicist Michio Kaku (b. 1947), author of the somewhat successful book Visions, significantly subtitled How Science Will Revolutionize the 21st Century (Kaku 1998). It would be interesting to analyse this book, like others of its kind, such as the different papers included in the proceedings of a conference held at the beginning of the twenty-first century to consider what we can expect from the science and technology of the new millennium (Sánchez Ron 2002), but the aim of this article is not so much to address what the future will be like — not, of course, using texts as general and as recent as Kaku’s — but rather, by drawing on the history of science, to study predictions of the future of science made by scientists in the past. In a sense, such an aim brings to mind the quote from William Shakespeare’s The Tempest at the beginning of this paper: “Whereof what’s past is prologue, what to come / In yours and my discharge,” which perhaps we can understand as history being a tool to predict the future, since its purpose is to analyse the prologue of what is to come.4

The evolution of the species: Erasmus or Charles Darwin?

The previous examples come from physics, the most mathematicised of the sciences and also the one that most quickly (apart from mathematics itself) had predictive theoretical systems. Here is an example taken from another scientific world: the world of the natural sciences. Specifically, the question I want to raise is whether the concept of the evolution of the species was one of those predictions of future events I am considering. To do this, I will draw on Erasmus Darwin (1731–1802), who was a successful doctor, in addition to being a poet, philosopher and botanist, and his famous grandson, Charles Darwin (1809–1882). It is well known that Charles produced the theory of the evolution of the species, which he presented in On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life (1859), one of the greatest books in human history. Now, as we are often reminded, his grandfather Erasmus was one of the precursors of evolutionary theory. The basis for this claim may be found in a passage from his book Zoonomia; or the Laws of Organic Life (1794–1796), a curious combination of facts and insights that contains paragraphs like this:

“Would it be too bold to imagine, that in the great length of time, since the earth began to exist, perhaps millions of ages before the commencement of the history of mankind, would it be too bold to imagine, that all warm-blooded animals have arisen from one living filament, which THE GREAT FIRST CAUSE endued with animality, with the power of acquiring new parts, attended with new propensities, directed by irritations, sensations, volitions, and associations; and thus possessing the faculty of continuing to improve by its own inherent activity, and of delivering down those improvements by generation to its posterity, world without end!” (Section XXXIX, “On Generation,” 4.8).

The question is to what extent we should consider that Erasmus predicted the existence of the evolution of the species. In my opinion, his prediction is not too different from that of the Greek atomists such as Leucippus or Democritus (fifth and fourth centuries BC), who held that matter is composed of atoms, i.e., indivisible particles, a thesis that Lucretius (99–55 BC) presented in his long poem, De Rerum Natura (On the Nature of Things). The atom that ultimately produced twentieth-century physics bears little resemblance to that imagined by the Greek atomists, and in the same way — though perhaps on a lesser scale — the idea of evolution advocated by Erasmus Darwin does not resemble the one that his grandson laboured to produce. One of the points in support of the Darwinian theory of the evolution of the species was the struggle for existence that Charles took from the economist Thomas Robert Malthus, as the latter had set out in his 1826 work, An Essay on the Principle of Population. None of this appears in the writings of Erasmus Darwin, nor does the extensive and detailed collection of data that supported the ideas of his grandson.

And, since I am dealing with Charles Darwin, I will mention one of his predictions (i.e., visions of the future) that the subsequent development of biology seems to confirm: the prediction that all living beings present on Earth, just as their predecessors, come from a single, common, primitive life form. Charles was cautious on this point, but also clear. Thus, he wrote in The Origin of Species (Darwin 1859, 488–490):

“When I view all beings not as special creations, but as the lineal descendants of some few beings which lived long before the first bed of the Silurian system was deposited, they seem to me to become ennobled […] Thus, from the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows. There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being evolved.”5

Hilbert on the future problems of mathematics

A different kind of prediction, and one much more successful than those I have mentioned so far, is that made by the great German mathematician David Hilbert (1862–1943) in the famous lecture he gave on Wednesday, 8 August 1900 at the Second International Congress of Mathematicians in Paris. The talk was entitled “On the Future Problems of Mathematics” and it began as follows:

“Who would not volunteer to lift the veil that hides the future in order to take a look at the progress of our science and the secrets of its further development in future centuries? In a field as fruitful and vast as that of mathematical science, what are the specific goals that the guides of the mathematical thinking of the future generations will try to meet? What will be the new truths and the new methods discovered in this field in the new century?

History teaches the continuity of the evolution of science. We know that every age has its problems, which the next age will resolve or set aside as sterile, replacing them with others. If we want to predict the probable development of mathematical science in the near future, we need to review the unresolved issues and bring our attention to the problems identified at present that we expect the future to resolve” (Hilbert 1902, 58).

Hilbert was actually addressing the issues in mathematics that were unresolved at that time. He took his extraordinary mathematical knowledge both of the state his discipline was in and of its previous history and applied it to a selection of 23 problems he considered central to the future of his discipline. As noted by mathematics historian Jeremy Gray (2000), Hilbert’s intentions were not simply “to lift the veil that separates us from the future, but help shape and direct that future. With his prestige and that of his University behind them — and Hilbert worked at the most powerful centre for mathematics in the world [the Institute of Mathematics of the University of Göttingen] — the problems he posed were always likely to be at the forefront of mathematical research, and so they became.”6 The history of twentieth-century mathematics cannot, in fact, be explained without considering the problems selected by Hilbert in 1900. Of course, not all mathematical research was directed along the lines he chose, but it is undeniable that to some extent his criteria formed the future, because in science a good part of the future is the effort used in solving problems within — to quote Thomas Kuhn (1962) — the “dominant paradigm.” Returning to Gray (2000): “Of course some mathematicians have had no interest in these Problems; there is a great deal else to be done. Some mathematicians’ contributions have been forgotten […] Some problems have looked more exciting than others — that is only natural. But the list of those who have tackled the Problems contains many major mathematicians in the twentieth century. It is enough to cite some of the number theorists: Gelfond, Siegel, Artin, Takagi and Hasse. The names of Dehn, Bernstein, Koebe and Birkhoff also are illustrious, and the Problems that survived to the 1950s and 60s drew the attention of Paul Cohen, Kolmogorov, Arnold and Zariski among others.”

What Hilbert’s lecture teaches, and the place it occupies in the history of mathematics, is that at least a portion of the future — often most of it — is marked, and in this respect occupied, by the forecasts some people make about that future. Most of the time we emphasise how hitherto unpredictable developments — the scientific revolutions — determine the future. There are obviously good reasons to highlight this dimension of the future, but as the case of Hilbert shows us, this is not the only influence on it. In fact, this type of bond that so heavily influences the future does not arise solely from mechanisms such as the one exemplified by Hilbert. It also springs from others that are, shall we say, “institutional.” A notable example in this area is provided by the post-World War II support given by the US government (more specifically, its Department of Defense and, at the top, the Pentagon) to a number of physics disciplines, with priority given to nuclear physics and electronics. This support had a decisive influence on the direction of research in physics, favouring the emergence of theoretical and experimental constructions as prominent as the Standard Model of high-energy physics. Without large particle accelerators — originally built in the US with the financial support of the military — would it have been possible to make discoveries as novel as that of quarks, or of the maser and the laser? In my opinion, it would not have been, or at least, not so soon.7

Although the parallel is not complete, this dependence on the past — a stronger dependence than in other cases (we are always children of the past) — reminds me of what the Italian historian, sociologist and political scientist Benedetto Croce wrote in 1938: “Historical culture aims to keep alive the awareness that human society has of its own past, i.e., of its present, i.e., of itself; to supply whatever it needs for the path that it must choose, to make available whatever, on its part, may be of service to it in the future” ([1938] 1992, 183).

Technological predictions

Getting a glimpse of the future is more feasible in the field of technology than in science, as aptly pointed out by one of the great gurus of nanotechnology, K. Eric Drexler (b. 1955), in his well-known book Engines of Creation: The Coming Era of Nanotechnology, published in 1986. Drexler wrote:

“Predicting the contents of new scientific knowledge is logically impossible, because it makes no sense to claim to already know the facts that we will learn in the future. Predicting the details of future technology, by contrast, is only difficult. Science is aimed at knowing, but engineering aims at doing; this allows the engineer to discuss future achievements without it being paradoxical. Engineers can develop their devices in the world of the mind and the computer, before cutting the metal or even having defined all the details of a design. Scientists commonly recognise this difference between scientific prediction and technological prediction: they easily make technological predictions about science. Scientists can (and do) predict the quality of the images of the rings of Saturn from Voyager, for example, though not their surprising contents” (1993, 72).

Of course, even though technological predictions are more feasible, we can also find long lists of errors, as the distinguished aerodynamicist Theodore von Kármán (1881–1963) warned in an article published in 1955, entitled precisely “The Next Fifty Years,” in which he cited the following prediction that had first appeared in a 1908 article in the journal Engineering News: “It is impossible to imagine that the air transport of cargo and passengers could enter into competition with surface transport. The field of navigation is therefore limited to military and sporting applications; while the latter are almost certain, those of the military are still questionable.” (von Kármán [1955] 1975, 325).

In any case, there are many examples that show that it is indeed safer to make technological, rather than scientific, predictions. One source of significant predictions that refer to the digital world in which we live, the medium of the so-called information and globalisation society, is Nicholas Negroponte (b. 1943), founder and director of the Media Lab at the Massachusetts Institute of Technology, where he has been a professor since 1966. In 1995, he published a book — Being Digital — which, in retrospect, was spot on regarding much of what was to come. He wrote that, “As we interconnect with each other, many of the values of a nation-state will give way to those of both larger and smaller electronic communities. We will socialize in digital neighbourhoods in which physical space will be irrelevant and time will play a different role.” (Negroponte 1995, 5). He also asserted that, “In the next millennium, we will find that we are talking as much or more with machines than we are with humans […] Miniaturization will make this omnipresence of speech progress faster than in the past. Computers are getting smaller and it is very likely that tomorrow we will be wearing on our wrists what today we have on our desks and what yesterday took up an entire room.” (Ibid., 176–177). Furthermore that:

“The next decade will see cases of intellectual-property abuse and invasion of our privacy. We will experience digital vandalism, software piracy and data thievery. Worst of all, we will witness the loss of many jobs to wholly automated systems, which will soon change the white-collar workplace to the same degree that it has already transformed the factory floor. The notion of lifetime employment at one job has already started to disappear […] As we move toward such a digital world, an entire sector of the population will be or feel disenfranchised. When a fifty-year-old steelworker loses his job, unlike his twenty-five-year-old son, he may have no digital resilience at all. When a modern-day secretary loses his job, at least he may be conversant with the digital world and have transferable skills.” (Ibid., 269–271).

Predicting the contents of new scientific knowledge is logically impossible, because it makes no sense to claim to already know the facts that we will learn in the future. Predicting the details of future technology, by contrast, is only difficult.

Naturally, not all of his predictions have proved correct — or at least, not yet — but, for many others, our only objection can be that the future he imagined arrived sooner than he assumed it would.

Another example is provided by Eric Drexler, whom I have already cited above. In Engines of Creation he predicted what most consider today to be the new, right-around-the-corner scientific-technological revolution.8 His book asserts that, “Advancing technology may end or extend life, but it can also change its quality. Products based on nanotechnology will permeate the daily lives of people who choose to use them. Some consequences will be trivial; others may be profound.” (Drexler 1993, 304–305). He continues:

“Some products have effects as ordinary as simplifying housekeeping (and as substantial as reducing the causes of domestic quarrels). It should be no great trick, for example, to make everything from dishes to carpets self-cleaning, and household air permanently fresh. For properly designed nanomachines, dirt would be food.

Other systems based on nanotechnology could produce fresh food – genuine meat, grains, vegetables, and so forth – in the home, year-round. These foods result from cells growing in certain patterns in plants and animals; cells can be coaxed to grow in these same patterns elsewhere. Home food growers will let people eat ordinary diets without killing anything. The animal rights movement (the forerunner of a movement to protect all conscious, feeling entities?) will be strengthened accordingly.

Nanotechnology will make possible high-resolution screens that project different images to each eye; the result will be three-dimensional television so real that the screen seems like a window into another world […] Nanotechnology will make possible vivid art forms and fantasy worlds far more absorbing than any book, game or movie.

Advanced nanotechnology will make possible a whole world of products that will make modern conveniences seem inconvenient and dangerous. Why should objects not be lightweight, flexible, durable and cooperative? Why can the walls not look like we want and transmit only the sounds we want to hear? Why should buildings and cars crush or roast their occupants? For those who so desire, the environment of daily life can resemble some of the most extravagant descriptions found in science fiction.”

Something similar could be — and is — said about the role of nanotechnology in the medicine of the future.9

Although Drexler became one of the major prophets of nanotechnology, the true pioneer of the thought that led to it was one of the greatest physicists of the twentieth century — one particularly loved and admired by his colleagues — Richard Feynman (1918–1988). In a lecture entitled “There’s Plenty of Room at the Bottom,” delivered at the annual meeting of the American Physical Society on 29 December 1959 (twenty-seven years before Drexler published Engines of Creation), Feynman alerted scientists to the possibility of, and the interest in, working on dimensions much smaller than were common at the time.10 His lecture began as follows:

“I would like to describe a field in which little has been done, but in which an enormous amount can be done in principle. This field is not quite the same as the others in that it will not tell us much of fundamental physics (in the sense of, “What are the strange particles?”) but it is more like solid-state physics in the sense that it might tell us much of great interest about the strange phenomena that occur in complex situations. Furthermore, a point that is most important is that it would have an enormous number of technical applications. What I want to talk about is the problem of manipulating and controlling things on a small scale.” (Feynman 1960, 22).

The scales considered by Feynman reached the atomic level: “I am not afraid to consider the final question as to whether, ultimately — in the great future — we can arrange the atoms the way we want; the very atoms, all the way down! What would happen if we could arrange the atoms one by one the way we want them (within reason, of course; you can’t put them so that they are chemically unstable, for example).” (Ibid., 34).

Arranging atoms one at a time is just what nanotechnology has done; indeed, it is its very foundation. Of course, in order to achieve this, something was needed that Feynman also asked for in his lecture: microscopes better than the electron microscopes then available. And they did arrive: in 1981, two physicists working at the IBM laboratory in Zurich — Gerd Binnig and Heinrich Rohrer — developed the scanning tunnelling microscope, an instrument that can image surfaces at the atomic level. Without it, nanotechnology would still be a vague, barely defined dream, as it was when Feynman gave his famous lecture. And without Feynman, Drexler could not have written his book.
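The reason such an instrument resolves single atoms can be sketched with the standard textbook approximation for the tunnelling current between tip and surface:

```latex
I \;\propto\; e^{-2\kappa d},
\qquad
\kappa \;=\; \frac{\sqrt{2m\phi}}{\hbar},
```

where d is the tip–surface distance and φ the work function; for typical metals the current falls by roughly an order of magnitude for every additional ångström, so minute corrugations of the surface — individual atoms — produce measurable changes in I.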

These facts lead us to the following observation: even though technological predictions may be more certain, they need a scientific basis. Drexler’s predictions about nanotechnology needed quantum mechanics and the scanning tunnelling microscope, while Negroponte’s required quantum mechanics and the transistor. This helps us understand predictions that amaze us in our modern era of mobile phones, such as the one ventured in 1897 by William Edward Ayrton (1847–1908), Professor of Electrical Engineering and Applied Physics at the City and Guilds Central Technical College in London from 1884 until his death. Speaking before the British Imperial Institute, he said:

“There is no doubt that the day will come, maybe when you and I are forgotten, when copper wires, gutta-percha coverings, and iron sheathings will be relegated to the Museum of Antiquities. Then, when a person wants to telegraph to a friend, he knows not where, he will call an electromagnetic voice, which will be heard loud by him who has the electromagnetic ear, but will be silent to everyone else. He will call “Where are you?” and the reply will come, “I am at the bottom of the coal-mine” or “Crossing the Andes” or “In the middle of the Pacific.” (Ayrton 1884: 548, quoted in Marvin 1988, 157).

In fact, Ayrton’s speculations were grounded in the new electromagnetic world that had emerged from the work of Faraday, Maxwell and Marconi, among others.

Science versus technology

Before we continue — and since I have been talking about technological predictions when my initial intention was to address scientific predictions — the intimate relationship between science and technology should be highlighted. It is possible to provide much evidence in favour of such a connection, which is often underestimated by arguing that science is the basic discipline that, when applied, becomes technology (applied science) — a relationship which, if true, would make technology subordinate to science. But this is not the case, at least not always. An authoritative example is that of thermodynamics, the branch of physics that deals with heat exchanges: it was born largely as a reflection on the functioning and possible improvement of the steam engines that drove the Industrial Revolution (see the classic work published by Sadi Carnot in 1824: Réflexions sur la puissance motrice du feu et sur les machines propres à développer cette puissance).

Among those who understood the dual and dynamic relationship between science and technology was William Thomson (1824–1907), better known as Lord Kelvin, who moved through both domains with success and pleasure, improving each. In a lecture at the Institution of Civil Engineers on 3 May 1883, Thomson noted: “There cannot be a greater mistake than that of looking superciliously upon the practical application of science. The life and soul of science is its practical application, and just as the great advances in mathematics have been made through the desire of discovering the solution of problems which were of a highly practical kind in mathematical science, so in physical science many of the greatest advances that have been made from the beginning of the world to the present time have been made in the earnest desire to turn the knowledge of the properties of matter to some purpose useful to mankind.” (1891, 86–87).

As for the question of whether technological predictions may have an effect on the future of science, I would have to say yes, they can have positive effects. The development of visionary technology programmes may entail having to solve unforeseen scientific problems, which benefits science. In the case of nanotechnology, for example, its development helps drive the study of macroscopic quantum effects, a subject that for decades had barely been addressed.

A long-cherished dream: artificial intelligence

One of the oldest dreams long cherished by humanity is that of creating intelligent machines (robots or otherwise). In his Ars Magna (1315), Ramon Llull (1232–1315) expressed the idea that reasoning could be artificially implemented in a machine; and how can one forget the efforts of Charles Babbage (1791–1871), who designed the first programmable machine, although, despite his efforts, he was never able to build one that worked satisfactorily? However, I will not hark back that far; instead, I will limit myself to recalling some of the ideas and predictions of three of the most outstanding scientists of the twentieth century: Norbert Wiener (1894–1964), John von Neumann (1903–1957) and Alan Turing (1912–1954).

In an article published in 1936, Turing introduced the so-called “Turing machine,” a theoretical contraption from which is derived the “universal Turing machine,” a Turing machine that can emulate any other Turing machine. If there is any hope of getting machines to be “intelligent,” in the sense that their reasoning and the results they provide are indistinguishable from those of humans, such machines will be some type of computer and — since the operation of computers is ultimately based on Turing’s machine model — we can see that Turing certainly had something to do with the field of artificial intelligence.
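To make the idea concrete, here is a minimal sketch of a one-tape Turing machine in Python. The transition-table format and the example machine are my own illustration, not Turing’s 1936 notation:

```python
# A Turing machine: a finite control reading and writing one tape cell
# at a time. The table maps (state, symbol) -> (new state, write, move).

def run_turing_machine(transitions, tape, state="start", halt="halt", max_steps=10_000):
    """Run the machine and return the visited portion of the tape."""
    cells = dict(enumerate(tape))  # sparse tape; unvisited cells read as "_"
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, "_")
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example machine: flip every bit of a binary word, halting at the first blank.
flipper = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine(flipper, "1011"))  # prints 0100_
```

The universal machine is then the observation that a table such as flipper can itself be written onto the tape of another, fixed machine that reads and obeys it — which is, in essence, what every stored-program computer does.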

Meanwhile, one of John von Neumann’s many achievements was in the field of computers, to which he contributed with fundamental ideas on storage devices for instructions and data (“von Neumann architecture”) that are used by almost all computers. He put these ideas into practice, contributing to the efforts that led to the construction (1944–1945) of ENIAC (Electronic Numerical Integrator and Computer), and then subsequently directing the design and manufacture of another computer — JOHNNIAC — which became operational in 1952. Another of von Neumann’s contributions, which he presented in a lecture at Princeton in 1948, was an axiomatic theory of self-reproduction (“The General and Logical Theory of Automata”), general enough to encompass both organisms and machines (von Neumann [1948] 1966).
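The heart of this architecture — instructions and data held in the same memory and processed by a fetch-decode-execute cycle — can be caricatured in a few lines of Python. The three-instruction machine below is invented for illustration and models no real computer:

```python
# Stored-program sketch: the program (addresses 0-3) and its data
# (addresses 7-9) live side by side in one memory.
memory = [
    ("LOAD", 7),   # copy memory[7] into the accumulator
    ("ADD", 8),    # add memory[8] to the accumulator
    ("STORE", 9),  # write the accumulator to memory[9]
    ("HALT", 0),
    None, None, None,
    5,             # address 7: first operand
    37,            # address 8: second operand
    0,             # address 9: result goes here
]

pc, acc = 0, 0  # program counter and accumulator
while True:
    op, addr = memory[pc]  # fetch and decode
    pc += 1
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break
print(memory[9])  # prints 42
```

Because the program is itself data in memory, it can be loaded, modified or even generated by another program — the feature that separates this design from a machine wired for a single task.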

As for Norbert Wiener, surely it would suffice to say that he is known as the “father of cybernetics” (Wiener 1948), a discipline that can be defined as “the science of communications and automatic control systems in both machines and living things”.

With these shallow and incomplete introductions out of the way, let us look at some of the predictions made by these three scientists in chronological order, starting with von Neumann. To do this, we will make use of the valuable testimony of the physicist and mathematician permanently installed at the Institute for Advanced Study in Princeton, Freeman Dyson (b. 1923). In an article devoted precisely to the future of science, Dyson (2011) recalled some of the Hungarian mathematical genius’s ideas on the future of computers, taking advantage of the fact that he was at the Princeton Institute in the 1940s and 1950s when von Neumann was working on computers. Dyson noted that one of the aspects of computers of most interest to von Neumann was their application to meteorology, and that he thought that as soon as atmospheric fluid dynamics could be simulated on a computer with sufficient accuracy, it would be possible to determine if the weather situation at a given time was stable or unstable. If it were stable, its future evolution could be predicted and, if unstable, it would be possible to introduce small perturbations to control its subsequent behaviour, e.g. via aircraft carrying smoke generators that could warm or cool the atmosphere. However, this prediction by von Neumann turned out to be completely erroneous for the simple reason — unknown at the time — that weather systems are chaotic in the sense discovered by Edward Lorenz (1917–2008) in 1963: small perturbations like those von Neumann sought to introduce into the atmosphere would only make their future behaviour even more unpredictable (let us recall that famous line by Lorenz, “The flap of a butterfly’s wings in Brazil can set off a tornado in Texas.”).11 In other words, the future progress of science can ruin our predictions, including those of such outstanding scientists as von Neumann.
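The phenomenon that ruined the prediction is easy to exhibit. The sketch below uses the logistic map — a standard toy example of a chaotic system, not a weather model — to follow two trajectories whose starting points differ by one part in a million:

```python
# Sensitive dependence on initial conditions with the logistic map
# x -> r*x*(1-x) in its chaotic regime (r = 4).
r = 4.0
x, y = 0.400000, 0.400001  # initial conditions a millionth apart
for step in range(1, 51):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}  y = {y:.6f}  gap = {abs(x - y):.6f}")
```

Within a few dozen iterations the two histories are completely uncorrelated, which is why nudging a chaotic atmosphere, far from controlling it, merely replaces one unpredictable future with another.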

Another failed prediction by the Hungarian mathematician from Princeton referred to the size and number of future computers. He thought they would become increasingly larger and more expensive. He stated: “It is possible that in later years the machine sizes will increase again, but it is not likely that 10 000 (or perhaps a few times 10 000) switching organs will be exceeded as long as the present techniques and philosophy are employed. To sum up, about 10⁴ switching organs seem to be the proper order of magnitude for a computing machine.” (von Neumann 1948, 13, cited in Dyson 2012, 303). According to one story — probably apocryphal — he was once asked how many computers would be needed in the US in the future, to which he replied, “Eighteen.” I need not dwell on how wrong he was. The transistor dramatically changed the size, price and potential of the old vacuum tube computers: in 2010 one could buy a computer with a billion transistors (i.e., 10 000·10⁵). And it should not go unnoticed that the transistor was invented by John Bardeen, Walter Brattain and William Shockley in 1947 — during von Neumann’s lifetime, when he was working on computers. Consequently, it is not only future advances in science that can ruin our predictions; it is also quite possible that we do not know how to appreciate the consequences of developments that take place right next to us and during our own lifetime.

More aware of the possibilities opened up by the new electronics was Norbert Wiener who, in an informative book that was published in 1950 — The Human Use of Human Beings — predicted that monitoring equipment and, in particular, electronics that worked by feedback processes would lead to a second industrial revolution within just a few years. In a later article, he explained: “This second revolution would differ from the great industrial revolution at the beginning of the 19th century which replaced power as generated by men and by draft animals by the power of the machine; the second industrial revolution would replace human discrimination in its low levels by a discrimination initiated by mechanical sense organs and carried out by the mechanical equivalent of brains – that is, by machines made up of consecutive switching devices mostly of electronic character.” (Wiener 1953; Masani 1985, 666). As the machines that he was thinking of were digital (“Electronic computers are particularly adapted to the scale of two” he wrote in the same article), there is no doubt that, although he could not imagine the specifics, Wiener foresaw the digital revolution we have been living for some time. Nevertheless, he was very wary of imagining that the growing skills of these electronic machines could be confused with the skills of humans: “[There is a] great obstacle to the extension of the mechanical age of communication and the automatic age of control to fields involving what used to be known as the ‘higher human faculties’. It does not mean that there is anything absolutely different in nature between the human and the non-human, but merely that the performance of a non-human link in human relations can only be evaluated in human terms.” (Wiener 1953; Masani 1985, 670–671).

Less cautious was Turing, who ventured to make a statement about when it could be argued that machines that actually thought had been built. The appropriate reference in this regard is an article he published in 1950 in the philosophy journal Mind, entitled “Computing Machinery and Intelligence” (Turing 1950), in which he wrote:

“I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” (Copeland 2004, 449).12

More than sixty years after Turing wrote his article, machines that think like humans have not yet arrived, but he was cautious enough for us to accept that there are now machines that approach having intelligence to the extent he had suggested. In my opinion, there is no doubt that, with their work and their predictions, von Neumann and particularly Turing and Wiener favoured not only the arrival of modern computers, but also the establishment of “artificial intelligence” (a term coined in 1955 by John McCarthy) as a field of great interest. In this sense, they influenced the future.

Social needs and predicting the future

There is probably no safer way to make a prediction about the future than by identifying current acute needs that must be resolved in the years or decades to come with the help of science and technology. An outstanding example of this type of prediction is provided by the aforementioned Freeman Dyson in his book The Sun, the Genome, and the Internet (1999), which was the outcome of a series of lectures he gave at the New York Public Library in the spring of 1997. Its central theme was “a model of the future whose driving forces are the Sun, the Genome and the Internet” (Dyson 2000, 17). Actually, these were not difficult predictions: as the twentieth century came to an end, it was obvious that the on-going molecular-biological revolution — the revolution of the double helix of DNA, of recombinant DNA, the genome, cloning and stem cells — would radically alter our capacity to influence living organisms (it was already doing so). In fact, even before this scientific knowledge existed, thought had been given to what advances in the biomedical sciences might mean for human nature and the mechanisms of reproduction. A well-known example is the novel by Aldous Huxley (1894–1963), Brave New World (1932), which describes a future world made up of immutable castes resulting from progress in the fields of biology, psychology and physiology. When one reads Huxley’s subsequent book, Brave New World Revisited, in which he reviewed his predictions of 26 years earlier, one can see how — thanks to our knowledge of the genome — we are getting much closer to being able to do what, in 1931, Huxley assumed could be done:

“In the Brave New World of my fantasy eugenics and dysgenics were practiced systematically. In one set of bottles biologically superior ova, fertilized by biologically superior sperm, were given the best possible prenatal treatment and were finally decanted as Betas, Alphas and even Alpha Pluses. In another, much more numerous set of bottles, biologically inferior ova, fertilized by biologically inferior sperm, were subjected to the Bokanovsky Process (ninety-six identical twins out of a single egg) and treated prenatally with alcohol and other protein poisons. The creatures finally decanted were almost subhuman; but they were capable of performing unskilled work and, when properly conditioned, detensioned by free and frequent access to the opposite sex, constantly distracted by gratuitous entertainment and reinforced in their good behaviour patterns by daily doses of soma, could be counted on to give no trouble to their superiors.” (1958, Chapter II)

Fortunately, though, instead of Betas, Alphas, Alpha Pluses or nearly subhuman beings, the science of molecular biology speaks of genetic engineering or gene therapies with very different goals.

By the late twentieth century, it was clear that the Internet was an unstoppable wave that would radically alter our ways and possibilities. Equally obvious was the need for future energy resources to replace coal and oil, with the radiation emitted by the Sun as the obvious and safest replacement. In fact, this is something that had been understood much earlier. In a paper published in 1876 in the Revue des Deux Mondes — translated into English soon after for Popular Science Monthly — the Professor of Geology Louis Laurent Simonin (1830–1886) stated: “Future generations, after the coal-mines have been exhausted, will have recourse to the sun for the heat and energy needed in manufacture and in domestic economy.” (Simonin 1876, 557–558). Simonin described a steam engine model that could apparently produce these effects, bar one that is now essential for us: the production of electricity from sunlight by means of cells and solar panels based on the photoelectric effect explained by Einstein in 1905.
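Einstein’s 1905 relation for the photoelectric effect is simply

```latex
E_{\text{kin}} \;=\; h\nu \;-\; \phi,
```

where hν is the energy of the incident light quantum and φ the minimum energy needed to extract an electron from the material.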

Similarly, but now speaking explicitly of electricity, the British geneticist and evolutionary biologist John B. S. Haldane (1892–1964) wrote in his 1923 work, Daedalus, or Science and the Future:

“As for providing mechanical strength, it is obvious that the exhaustion of our coal and our oil is only a matter of centuries […]. Water power is not a probable substitute, on account of its small quantity, seasonal and sporadic distribution. It may perhaps, however, shift the centre of industrial gravity to well-watered mountainous tracts such as the Himalayan foothills, British Columbia, and Armenia. Ultimately we shall have to tap those intermittent but inexhaustible sources of power, the wind and the sunlight. The problem is simply one of storing their energy in a form as convenient as coal or petrol […]. Even to-morrow a cheap, fool-proof, and durable storage battery may be invented, which will enable us to transform the intermittent energy of the wind into continuous electric power.

Personally, I think that four hundred years hence the power question in England may be solved somewhat as follows: The country will be covered with rows of metallic windmills working electric motors which in their turn supply current at a very high voltage to great electric mains. At suitable distances, there will be great power stations where during windy weather the surplus power will be used for the electrolytic decomposition of water into oxygen and hydrogen. These gases will be liquefied, and stored in vast vacuum jacketed reservoirs, probably sunk in the ground. […] In times of calm, the gases will be recombined in explosion motors working dynamos which produce electrical energy once more, or more probably in oxidation cells. Liquid hydrogen is weight for weight the most efficient known method of storing energy, as it gives about three times as much heat per pound as petrol.” (Haldane 2005, 41).

Wind turbines — the electricity-producing windmills Haldane thought of — are increasingly common across the world, and perhaps (or so some say) in the not-very-distant future, hydrogen will also be a widely used source of energy.

We might also recall a book that was very successful when it was published in 1964 — Engineers’ Dreams — by the frustrated rocket engineer (and subsequent science writer) Willy Ley (1906–1969). One of the dreams Ley addressed was “power from the sun.” He wrote that: “Making gasoline out of sunshine is a procedure requiring three major steps along with the three basic raw materials. Step no. 1 would be the familiar one of converting sunshine into electric current by means of collectors, boilers, and generators. Step no. 2 would be the use of the electric current for decomposing water into its two constituent elements, hydrogen and oxygen. Step no. 3 would be the conversion of the hydrogen into the substances known to chemists as hydrocarbons (gasoline is one of them), taking the carbon from the carbon dioxide of the air.” (Ley 1964, 184–185). Most pertinently, he went on to say that: “Most of the difficulty lies in the third step, and the main reason for the difficulties is that there is so little carbon dioxide in the atmosphere — only 0.03 percent of the total. Since the known industrial chemical processes resulting in hydrocarbons require reasonably pure carbon dioxide to work well, it is first necessary to concentrate it out of the air. This is not difficult, merely tedious and expensive […] because the carbon-dioxide content of the air near the ground is only 0.03 percent (carbon dioxide is virtually absent higher up), a million cubic feet of air must be processed for every gallon of gasoline produced.”
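Ley’s figure survives a rough order-of-magnitude check, done here with round modern numbers rather than his own:

```latex
\underbrace{2.8\ \mathrm{kg}}_{\approx\,1\ \text{gallon gasoline}} \times\, 0.85
\;\approx\; 2.4\ \mathrm{kg\ C}
\;\Rightarrow\;
\tfrac{44}{12} \times 2.4 \;\approx\; 9\ \mathrm{kg\ CO_{2}};
\qquad
\underbrace{10^{6}\ \mathrm{ft^{3}\ air}}_{\approx\,3.4\times10^{4}\ \mathrm{kg}} \times\, 0.05\%
\;\approx\; 17\ \mathrm{kg\ CO_{2}}.
```

(The 0.05 percent is CO₂ by mass, corresponding to roughly 0.03 percent by volume.) A million cubic feet of air thus holds carbon for between one and two gallons of gasoline — consistent with Ley’s claim.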

Today, we see the difficulty Ley mentioned very differently, taking into account the increase in the levels of atmospheric carbon dioxide caused by industrial processes and the massive use of automobiles: any procedure that removes carbon dioxide from the atmosphere is now welcome and encouraged. Our predictions about the future are thus, at times, overtaken by events.

Science fiction and science of the future

The term “science fiction” has appeared only a couple of times in this article. In principle this is reasonable, because the predictions that appear in works of science fiction need not be scientific, and they may go beyond what is easily imaginable, regardless of whether their implementation is near, far away or impossible. For example, the highly celebrated Jules Verne (1828–1905) and H. G. Wells (1866–1946) imagined spaceflight and submarines, alien invasions and atomic weapons, but not an innovation such as the car, which would end up dominating virtually all societies. Despite all this, we must not underestimate this genre when analysing what was ventured in the past about the scientific future.13 Consider, for example, that the physicist Leo Szilard (1898–1964), one of the pioneers and promoters of a nuclear project in the late 1930s and the first half of the 1940s, read Wells’ 1914 novel The World Set Free, the first work to predict atomic bombs. He read it in 1932, the year the neutron was discovered and one year before he himself would conceive the idea of a chain reaction that would produce an atomic explosion. Indeed, the utility of at least some works of science fiction can be defended by drawing on what many consider the first modern novel of the genre: Frankenstein, or, the Modern Prometheus, by Mary Wollstonecraft Godwin, better known as Mary Shelley (1797–1851). I will quote and comment on a passage in which Victor Frankenstein — the novel’s main protagonist — reflects in the following terms:

“When I found so astonishing a power placed within my hands, I hesitated a long time concerning the manner in which I should employ it. Although I possessed the capacity of bestowing animation, yet to prepare a frame for the reception of it, with all its intricacies of fibres, muscles, and veins, still remained a work of inconceivable difficulty and labour. I doubted at first whether I should attempt the creation of a being like myself, or one of simpler organization; but my imagination was too much exalted by my first success to permit me to doubt of my ability to give life to an animal as complex and wonderful as man. The materials at present within my command hardly appeared adequate to so arduous an undertaking, but I doubted not that I should ultimately succeed. I prepared myself for a multitude of reverses; my operations might be incessantly baffled, and at last my work be imperfect, yet when I considered the improvement which every day takes place in science and mechanics, I was encouraged to hope my present attempts would at least lay the foundations of future success.” (Wollstonecraft-Shelley 1831).

The point I want to make is that scientists should read passages like this one (just like others in the aforementioned Brave New World, which has become topical again owing to the development of molecular biology and the possibilities it has opened up: genetic engineering, cloning), because they raise social issues that, while addressed by researchers, take on different, deeper dimensions when addressed by great writers. And it is not just a question of social or ethical issues, but also — as was perhaps the case with Wells — of presenting possibilities to scientists so that they ask themselves questions about their scientific basis: whether they are possible, or merely speculations with no justification other than the literary.

The last example I will offer is Isaac Asimov’s (1920–1992) book I, Robot (1950). The famous Three Laws of Robotics that he included in this work may be a good guide if predictions about robots with artificial intelligence come true (a schematic reading of the Laws follows the list):

  1. “A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”
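
Read purely as an algorithm, the Laws impose a strict priority ordering: a lower-numbered law always overrides the ones below it. The following minimal sketch makes that ordering explicit; the type and field names are inventions for this illustration, not anything Asimov specified.

from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action, reduced to the flags the Three Laws care about.
    Every field name here is hypothetical, invented for this sketch."""
    harms_human: bool               # injures a human, or lets one come to harm through inaction
    disobeys_order: bool            # ignores an order given by a human being
    obeying_would_harm: bool        # the order itself conflicts with the First Law
    endangers_self: bool            # puts the robot's own existence at risk
    protection_breaks_1_or_2: bool  # self-protection would violate Law 1 or 2

def permitted(a: Action) -> bool:
    """Check an action against the Three Laws in strict priority order."""
    if a.harms_human:                                        # First Law: absolute
        return False
    if a.disobeys_order and not a.obeying_would_harm:        # Second Law yields to the First
        return False
    if a.endangers_self and not a.protection_breaks_1_or_2:  # Third Law yields to both
        return False
    return True

# Refusing an order whose execution would injure a person is permitted:
# the Second Law gives way to the First.
refusal = Action(harms_human=False, disobeys_order=True, obeying_would_harm=True,
                 endangers_self=False, protection_breaks_1_or_2=False)
print(permitted(refusal))  # True

Much of Asimov’s fiction, of course, explores precisely the situations in which so tidy an ordering breaks down.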

Interdisciplinarity and small science

To finish, I will go somewhat beyond the role that I have tried to adopt throughout this article: the role of the notary who reflects on what has happened in the past with respect to scientific and technological predictions, a notary who makes — of course — comments here and there and tries to guide his clients. Now I am going to offer a personal reading of what the history of science says about what scientific research will be like in the future: not the developments it will produce, but rather how scientific research itself will be carried out.

There are two guidelines that I believe will — with increasing intensity — lead science throughout this century and those to come. The first is interdisciplinarity, the meeting of groups of experts — not necessarily very numerous — in different scientific and technological disciplines who, provided with enough general knowledge to be able to understand each other, will collaborate to resolve new problems, problems that, by their nature, require this kind of collaboration. Remember that nature is one and recognises no borders. We are the ones who have established borders, for practical reasons, constituting the disciplines we call physics, chemistry, biology, mathematics, geology, and so on. But as we advance in our knowledge of nature, it becomes increasingly necessary to go beyond these boundaries and to become citizens of interdisciplinarity.14

The second guideline is what I call “Small Science” (as opposed to “Big Science”): research based on smaller groups, in contrast to the colossal Big Science projects of the past 75 years, such as the high-energy experimental physics projects with their giant accelerators, NASA’s planetary research, or the Human Genome Project in its initial design. Big Science is too expensive and too slow, even though interesting results can be obtained. Consider, for example, high-energy physics. Clearly, with large particle accelerators we have made fundamental progress in our knowledge of the structure of matter, but it is no less obvious that fewer and fewer countries are able to afford their costs. The most powerful nation in terms of science and technology, the US, which had pioneered the construction of these accelerators, was also the first to realise the difficulties of this kind of research, cancelling the project for a Superconducting Super Collider that US high-energy physicists believed essential to continuing the development of the standard model. It was to consist of an 84-kilometre tunnel, inside which thousands of superconducting magnet coils would guide two proton beams so that, after millions of revolutions, they would reach an energy twenty times higher than that achieved in existing accelerators. At various points along the ring the protons of the two beams would collide, and huge detectors would monitor what happened. The cost of the project — which would have lasted ten years — was initially estimated at 6 billion dollars. After an eventful life, and with part of the infrastructure work (the excavation of the tunnel) already done, Congress cancelled the project on 19 October 1993, following a long, difficult and ever-changing debate in both the House and the Senate. Europe is one of the enclaves in which this type of Big Science survives, as shown by the Large Hadron Collider (LHC) at CERN, the pan-European institution dedicated to high-energy physics, which in 2012 detected the long-sought Higgs boson. But how much longer can Europe sustain an expense that stretches over decades before yielding results? Another manifestation of these difficulties is the delay, if not the cancellation, of some of NASA’s most cherished projects, such as sending astronauts to Mars. Consider also how projects much smaller than the Human Genome Project are obtaining better and faster results (thanks, it is true, to the tools now available, although it can be argued that the greater shortage of resources has encouraged small groups to devise faster and cheaper procedures). As Freeman Dyson has written (2011), “The future of science will be a mixture of large and small projects, with the large projects getting most of the attention and the small projects getting most of the results […] As we move into the future, there is a tendency for the big projects to grow bigger and fewer. This tendency is particularly clear in particle physics, but it is also visible in other fields of science, such as plasma physics, crystallography, astronomy, and genetics, where large machines and large databases dominate the scene. But the size of small projects does not change much as time goes on, because the size of small projects is measured in human beings […] Because the big projects are likely to become fewer and slower while the small projects stay roughly constant, it is reasonable to expect that the relative importance of small projects will increase with time.”

Is this how it will be, or is this one more prediction that will not pass the test of time? Only the future, of course, will tell.

Bibliography

Ayrton, William E. 1897. “Sixty Years of Submarine Telegraphy.” The Electrician 38: 545–548.

Chiao, R. Y., et al. (eds.). 2011. Visions of Discovery. Cambridge: Cambridge University Press.

Copeland, Jack B. (ed.). 2004. The Essential Turing. Oxford: Oxford University Press.

Croce, Benedetto. (1938) 1992. La historia como hazaña de la libertad. México: Fondo de Cultura Económica.

Darwin, Charles. 1859. On the Origin of Species. London: John Murray.

Darwin, Erasmus. 1796. Zoonomia, or the Laws of Organic Life I. London: J. Johnson.

Drexler, K. Eric. 1993. La nanotecnología. Barcelona: Gedisa. Originally published in English in 1986 as Engines of Creation. Garden City, NY: Anchor Press.

Dyson, Freeman. 1999. The Sun, the Genome & the Internet. New York: Oxford University Press.

Dyson, Freeman. 2011. “The Future of Science.” In R. Y. Chiao et al. (eds.), Visions of Discovery. Cambridge: Cambridge University Press, 39–54.

Dyson, George. 2012. Turing’s Cathedral. New York: Pantheon Books.

Feynman, Richard. 1960. “There’s Plenty of Room at the Bottom: An Invitation to Enter a New Field of Physics.” Engineering and Science 23 (5): 22–36.

Forman, Paul. 1987. “Behind Quantum Electronics: National Security as Basis for Physical Research in the United States.” Historical Studies in the Physical and Biological Sciences 18: 149–229.

Forman, Paul. 2002. “What the Past Tells Us about the Future of Science.” In J. M. Sánchez Ron (ed.), La ciencia y la tecnología ante el Tercer Milenio I. Madrid: España Nuevo Milenio, 27–37.

García Gual, Carlos (ed.). 2005. Viajes a la Luna. De la fantasía a la ciencia-ficción. Madrid: Biblioteca ELR Ediciones.

Gibbons, G. W., E. P. S. Shellard, and S. J. Rankin (eds.). 2003. The Future of Theoretical Physics and Cosmology. Cambridge: Cambridge University Press.

Godwin, Francis. 1638. The Man in the Moone, or a Discourse of a Voyage thither, by Domingo Gonsales. London: John Norton.

Gray, Jeremy J. 2000. The Hilbert Challenge. New York: Oxford University Press.

Grossman, Jennifer. 2012. “Nanotechnology in Cancer Medicine.” Physics Today (August): 38–42.

Haldane, John B. S. 2005. Dédalo o la ciencia y el futuro. Oviedo: KRK.

Hawking, Stephen. 2001. The Universe in a Nutshell. New York: Bantam Books.

Heilbron, J. L. (ed.). 2003. The Oxford Companion to the History of Modern Science. Oxford: Oxford University Press.

Hilbert, David. 1902. “Sur les problèmes futurs des mathématiques.” In E. Duporcq (ed.), Compte rendu du deuxième congrès international des mathématiciens, tenu à Paris du 6 au 12 août 1900. Paris: Gauthier-Villars.

Kaku, Michio. 1998. Visions: How Science will Revolutionize the 21st Century. New York: Anchor Books.

von Kármán, Theodore. 1955. “The Next Fifty Years.” Interavia 10 (1): 20–21.

von Kármán, Theodore. 1975. Collected Works of Theodore von Kármán 1952–1963. Rhode-St-Genèse, Belgium: Von Kármán Institute for Fluid Dynamics.

Kepler, Johannes. 2001. El sueño o la astronomía de la Luna. Edited by Francisco Socas. Huelva: Servicio de Publicaciones de la Universidad de Huelva.

Kuhn, Thomas S. 1962. The Structure of Scientific Revolutions. Chicago: University of Chicago Press.

Lagemann, Robert. 1959. “Michelson on measurement.” American Journal of Physics 27: 182–184.

de Laplace, Pierre Simon. 1902. A Philosophical Essay on Probabilities. Translated by F. W. Truscott. New York: John Wiley & Sons.

Ley, Willy. 1964. Engineers’ Dreams. New York: The Viking Press.

Lorenz, Edward. 1993. The Essence of Chaos. Seattle: University of Washington Press.

Marvin, Carolyn. 1988. When Old Technologies Were New. Oxford: Oxford University Press.

Masani, P. (ed.). 1985. Norbert Wiener: Collected Works IV. Cambridge, Mass.: MIT Press.

Michelson, Albert Abraham. 1894. “Some of the Objects and Methods of Physical Science.” University of Chicago Quarterly Calendar III (2, August): 12–15.

Millikan, Robert. 1951. The Autobiography of Robert A. Millikan. London: Macdonald.

Negroponte, Nicholas. 1995. El mundo digital. Barcelona: Ediciones B.

von Neumann, John. 1948. “The General and Logical Theory of Automata.” Reproduced in 1963 in John von Neumann Collected Works, edited by A. H. Taub, 288–318.

von Neumann, John. 1966. Theory of Self-Reproducing Automata. Edited by A. W. Burks. Urbana: University of Illinois Press.

Rees, Martin. 2003. Our Final Hour: A Scientist’s Warning. New York: Basic Books.

Sánchez Ron, José M. (ed.). 2002. La ciencia y la tecnología ante el Tercer Milenio, 2 vols. Madrid: España Nuevo Milenio.

Sánchez Ron, José M. 2007. El poder de la ciencia. Barcelona: Crítica.

Sánchez Ron, José M. 2011. La Nueva Ilustración. Ciencia, Tecnología y Humanidades en un mundo interdisciplinar. Oviedo: Ediciones Nobel.

Simonin, Louis Laurent. 1876. “Industrial Applications of Solar Heat.” Popular Science Monthly 9 (September): 550–560.

Stevenson, Mark. 2011. Un viaje optimista por el futuro. Barcelona: Galaxia Gutenberg/Círculo de Lectores.

Taub, A. H. 1963. John von Neumann Collected Works V (Design of Computers, Theory of Automata and Numerical analysis). Oxford: Pergamon Press.

Thomson, William. 1891. “Electrical units of measurement.” In Popular Lectures and Addresses I (“Constitution of matter”). London: Macmillan, 80–143.

Turing, Alan. 1936. “On Computable Numbers, with an Application to the Entscheidungsproblem.” Proceedings of the London Mathematical Society 42: 230–265.

Turing, Alan. 1950. “Computing machinery and intelligence.” Mind 59: 433–460. Reproduced in 2004 in The Essential Turing, edited by Jack B. Copeland. Oxford: Oxford University Press, 441–464.

Westfahl, Gary. 2003. “Science Fiction.” In J. L. Heilbron (ed.), The Oxford Companion to the History of Modern Science. Oxford: Oxford University Press, 735–737.

Wiener, Norbert. 1948. Cybernetics: Or Control and Communication in the Animal and the Machine. New York: John Wiley.

Wiener, Norbert. 1950. The Human Use of Human Beings. Boston: Houghton Mifflin.

Wiener, Norbert. 1953. “The Electronic Brain and the Next Industrial Revolution.” Cleveland Athletic Club Journal. Reproduced in 1985 in Norbert Wiener: Collected Works IV, edited by P. Masani. Cambridge, MA: MIT Press, 666–672.

Wollstonecraft-Shelley, Mary. 1831. Frankenstein, or, The Modern Prometheus. London: Henry Colburn & Richard Bentley.

Wren, Christopher. 1750. Parentalia or, Memoirs of the family of the Wrens […]. London: T. Osborn and R. Dodsley.

Notes

  1. The remarks are cited in this way in the corresponding article bearing his signature (Michelson 1894). Also see Lagemann (1959). Robert Millikan, another American physicist and Nobel laureate, offered a different vision of Michelson’s role. He wrote: “He gave the address on the place of very refined measurement in the progress of physics — an address in which he quoted someone else, I think it was Kelvin, as saying that it was probable that the great discoveries in physics had all been made, and that future progress was likely to be found in the sixth place of decimals.” (Millikan 1951, 39–40). In any event, for my purposes here it does not matter whether it was Michelson or Kelvin — an even more remarkable scientist than the American — who pronounced these words.
  2. In the book, Hawking explains how information lost in black holes may reduce our capacity to predict the future. He states: “The radiation from a black hole will carry away energy, which must mean that the black hole will lose mass and get smaller. In turn, this will mean that its temperature will rise and the rate of radiation will increase. Eventually the black hole will get down to zero mass. We don’t know how to calculate what happens at this point, but the only natural, reasonable outcome would seem to be that the black hole disappears completely. So what happens then to the part of the wave function inside the black hole and the information it contains about what had fallen into the black hole? […] Such loss of information would have important implications for determinism” (Hawking 2001, 121–122).
  3. Martin Rees, an outstanding member of the British scientific community (Astronomer Royal, Master of Trinity College, president of the Royal Society from 2005 to 2010, and ennobled as Baron Rees of Ludlow), is one of the scientists who enjoy speculating about the future, as shown by one of his books: Our Final Hour, subtitled Is this our Final Century? (Rees 2003).
  4. I was fortunate enough to first encounter this quotation in an article by Paul Forman (2002). However, the interpretation of it is my own.
  5. Interestingly enough, Erasmus Darwin (1796: Section XXXIX, “On Generation,” 4.8) also speculated along similar lines: “Shall we then say that the vegetable living filament was originally different from that of each tribe of animals above described? And that the productive living filament of each of those tribes was different originally from the other? Or, as the earth and ocean were probably peopled with vegetable productions long before the existence of animals; and many families of these animals long before other families of them, shall we conjecture that one and the same kind of living filaments is and has been the cause of all organic life?”
  6. Gray’s book includes the text of Hilbert’s 1900 lecture in an appendix.
  7. I have addressed some of these questions in Sánchez Ron (2007, chapter 11). Also see Forman (1987).
  8. In this respect, see a recent book by Mark Stevenson, An Optimist’s Tour of the Future, where one can read: “It may sometimes sound like science fiction. But it could radically reshape our future.” (2011, 112). Nanotechnology and nanoscience deal with phenomena that are usually on the scale of 1 to 100 nanometres, with one nanometre being equal to a billionth of a metre (10⁻⁹ metres).
  9. For a recent example, see Grossman (2012).
  10. The meeting took place at the California Institute of Technology, where Feynman was working at the time, and the talk was published (Feynman 1960) in Engineering and Science, a quarterly journal founded in 1937 by Caltech’s Public Relations Office to promote science.
  11. Actually, the original phrase is somewhat different: “Predictability: Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas?” It appeared as the title of a lecture given on 29 December 1972 in a session on the Global Atmospheric Research Program at the 139th Meeting of the American Association for the Advancement of Science. It was distributed as a press release and was only published a number of years later, as an appendix to Lorenz’s book The Essence of Chaos (Lorenz 1993).
  12. The “imitation game” to which he was referring consisted of having someone confront the problem of determining whether what was responding to his questions was a machine or a person. Naturally, he could not observe either of them directly, only their responses.
  13. On science fiction and the history of science, see Westfahl (2003).
  14. I have written a book addressing these issues (Sánchez Ron 2011).