Article from the book Frontiers of Knowledge

Radical Innovations: an Economist’s View


Why is there no economics of science?

I would like to begin with this question: Why is there still no recognized discipline called “The Economics of Science”? After all, economics as a discipline has shown strong imperialistic tendencies in recent decades. It has successfully colonized many fields, but it has yet to colonize science. We now have an economics of education, an economics of health, an economics of voting behavior, an economics of marriage, an economics of divorce, and an economics of crime. With respect to the latter, the economics of crime, it turns out that crime pays, especially when, as is often the case, the likelihood of apprehension and punishment is low! As some indication of the elevated status of this kind of research, one of its most eminent practitioners (Gary Becker) was awarded a Nobel Prize in Economics.

Why then do we not now have an economics of science—or, rather, since it is now just beginning to happen, why did it take so long? This question is particularly pertinent in view of what we think we have long known about science. That is to say, it has long been an article of faith that scientific research yields huge economic benefits.

There is at least a partial answer that suggests itself to the question of why an economics of science has taken so long to emerge: economics is a discipline that studies the principles involved in achieving an efficient use of scarce resources. But to talk about efficiency of resource use requires an ability to make some explicit comparison of costs and benefits. Now, we do in fact know a great deal about the costs of linear accelerators, synchrotron radiation machines, Hubble telescopes, the mapping of the human genome, etc. Indeed, some years ago the US Congress decided to cancel the construction of the Superconducting Super Collider when the costs threatened to escalate to $11 or $12 billion. (In fact, it cost well over $1 billion just to close down the project!)

But, while it is relatively straightforward to calculate the costs of conducting science, it is extraordinarily difficult to calculate the benefits. And if one insists on considering only the narrow economic benefits, it would be difficult to make any sort of case at all for some of the projects of so-called “Big Science.” (What purely economic case can be made for the Hubble Telescope?)

Now, it is of course true that the history of scientific research in the twentieth century was full of unexpected benefits. But the general acknowledgment of the likelihood of unanticipated benefits hardly constitutes a guide to determining the size of the annual public subsidy to science, or the allocation of a budget of given size among the many competing possible uses in different fields of science. In a nutshell, the uncertainties concerning the possible benefits of basic scientific research are simply immense, and it is difficult to make a rigorous application of economic principles in a realm where the benefits of resource use are essentially unmeasurable.

What also follows from what has been said is that, in order to think in a useful way about science and technology in modern society—including the society of the twenty-first century—it is necessary to acknowledge that one must unavoidably learn to live with a high level of uncertainty. Nevertheless, I would like to insist that this need not diminish the usefulness of economic analysis, at least so long as we do not harbor unreasonable expectations about what can be achieved by abstract reasoning alone. For economic analysis alone can never provide a neatly packaged solution to policy-making with respect to the extremely complex issues with which we are concerned. Nor should we expect it to do that. But it can be an invaluable guide in looking for significant cause-effect relationships, in trying to understand how institutions and incentives shape human behavior, and in attempting to make sense of an immense body of historical and empirical data that are available to serious scholars, and from which important lessons for policy-making and institution-building might be derived.

Institutional changes in the twentieth century

If one looks back upon the last one hundred years and asks what were the distinctive features that dominated the realm of economic activity, my first reply would be that it was the application of scientific knowledge and scientific methodology to a progressively widening circle of productive activities. But this statement, by itself, is not very informative. In fact, it can serve only as a platform from which to raise other, deeper questions: In precisely what ways has science played this role? Which aspects of the scientific enterprise have played the role, and under what circumstances? And what were the changes in the manner in which science was institutionalized in the course of this century that made the twentieth century so different from the nineteenth?

A dominant factor, of course, was that, in the years after World War II, national governments in industrial countries became, in varying degrees, the patrons of scientific research, especially of basic research. In considerable measure this reflected the critical role that science, and scientists, had played in shaping the conduct and the outcome of the war, culminating with the horrific weapon forged by the Manhattan Project that brought the war in the Pacific to an abrupt conclusion. The Cold War served as a further, powerful rationale for massive government contributions to the support of science, which dwarfed all pre-war precedents. But there were also quieter, powerful forces at work.

It may be useful here to recall Alfred North Whitehead’s oft-quoted observation that: “The greatest invention of the nineteenth century was the invention of the method of invention.” (Whitehead 1925, 98.) The twentieth century, of course, was not only to inherit, but also to institutionalize, that method of invention. Whitehead understood that this invention involved the linking of scientific knowledge to the world of artifacts. But he also understood that this linkage was not easily achieved, because a huge gap typically exists between a scientific breakthrough and a new product or process. Although the sentence just quoted from Whitehead’s book is well known, his subsequent observation is not, but deserves to be: “It is a great mistake to think that the bare scientific idea is the required invention, so that it has only to be picked up and used. An intense period of imaginative design lies between. One element in the new method is just the discovery of how to set about bridging the gap between the scientific ideas, and the ultimate product. It is a process of disciplined attack upon one difficulty after another.” (Whitehead 1925.)

What appears to matter more than the quality of a country’s basic science, as judged by the usual academic or Nobel Prize Committee criteria, is the extent to which the activities of the scientific community can be made to be responsive to the needs of the larger society. It is regrettable that this is a question that is not very much discussed, and is poorly understood. It is often obscured by much of the rhetoric of academic science, with its overwhelming emphasis on the importance of the independence and the autonomy of the individual scientist. The fact of the matter is that, in the course of the twentieth century, and with varying degrees of success, industrial societies have created increasingly dense networks of institutional connections between the conduct of scientific research and the needs of the larger social system.

Within the university world, this includes a number of engineering disciplines that emerged in the late nineteenth and the twentieth centuries, such as electrical engineering, chemical engineering, aeronautical engineering, metallurgy, and computer science. Indeed, although it is not widely realized, in recent years government R&D expenditures at American universities devoted to the engineering disciplines have exceeded expenditures devoted to the physical sciences. Far and away the largest recipients in the most recent years have been the life sciences, receiving more than 50% of federal financial support.

In addition to new academic disciplines, the other key institutional innovation of the twentieth century was, of course, the industrial research laboratory. These laboratories monitored frontier research within the university community and elsewhere, although for many years it was the application of relatively elementary scientific concepts and methodologies that dominated their contributions to industry. In the course of the century, however, and especially after World War II, research at many of these laboratories became increasingly sophisticated. By 1992 the Directory of American Research and Technology counted about 12,000 non-government facilities that were active in some form of “commercially-applicable” scientific research. And, according to the National Science Foundation’s figures, more than 30% of all basic research in the US was financed by private industry.

The industrial research laboratory is essentially an institutional innovation in which the scientific research agenda is largely shaped by the needs of industrial technologies. The role of industrial scientists is to improve the performance and reliability of those technologies, as well as, of course, to invent entirely new ones. Thus, the industrial research laboratory has rendered science more and more an institution whose directions are increasingly shaped by economic forces and concentrated on the achievement of economic goals. Science has gradually become incorporated, in the course of the twentieth century, into a crucial part of the growth system that has propelled industrial societies along their long-term growth trajectories.

That growth system, in which technological change played a central role for two centuries, is now reinforced by a powerful scientific research capability that has strong public and private components, varying among countries according to their histories, cultures, political systems, and current social priorities. For further details, see United Nations Industrial Development Organization, Industrial Development Report 2005, Capability Building for Catching-up: Historical, Empirical, and Policy Dimensions (Vienna, 2005).

In addition to the institutional requisites, the successful exploitation of scientific knowledge has flourished best in industrial countries that have offered potential innovators ready access to capital as well as strong financial incentives, and have nourished and educated effective managerial and engineering cadres. Thus, nineteenth-century Czarist Russia produced many brilliant scientists and inventors, but their presence exercised a negligible impact in a society that lacked an adequate managerial, engineering, and financial infrastructure. On the other hand, America’s emergence to a position of technological leadership in a number of industrial sectors, before World War I, occurred in a period when its achievements in basic science were limited and, with few exceptions, of no great international consequence. In this respect America in the late-nineteenth and early-twentieth centuries bears some interesting resemblances to Japan in the second half of the twentieth century. Both countries managed to achieve rapid industrial growth with no more than a modest scientific base because of their great aptitude for borrowing and exploiting foreign technologies.

On the other hand, the relative stagnation of the British economy in the twentieth century has occurred in spite of continued brilliant performances at the scientific frontier. Until not very long ago the British scientific community continued to receive more Nobel Prizes per capita than the United States. But, at the same time, the British failed to maintain competitiveness even in many inventions that had originated in Britain—radar, the jet engine, penicillin, and the CT scanner. Moreover, the revolution in molecular biology that began with the discovery of the double helical structure of the DNA molecule in the 1950s was, to a remarkable degree, a British achievement—indeed, a Cambridge University achievement. Nevertheless, British firms played only a minor role in the emerging biotechnology industry, while there were several hundred biotechnology firms in the US, including the very small number of such firms that quickly enjoyed some degree of commercial success.

I wish to draw two conclusions. First, looking over the entire course of the twentieth century, scientific achievement alone, however brilliant, was not readily translated into superior economic performance. Strong complementary institutions and incentives have been necessary, not the least of which have been venture capital firms. Second, when such institutions and incentives have been present, even a comparatively modest scientific capability has been sufficient to generate high levels of economic performance.

The endogeneity of science

I have argued that the institutional changes of the twentieth century have rendered science a more endogenous activity. I mean this in the specific sense that, where such institutional innovations have occurred, science has come to be more directly responsive to economic forces. But I must now expand upon a particular aspect of that observation. That is, I want to suggest that the research agenda of science has been more and more determined by the need to improve the performance of technologies that were already in existence. In the twentieth century world, science and technology have become intimately intertwined. Science has indeed come to play an expanding role in influencing the world of technology, but causality has worked in both directions: the scientific enterprise of the twentieth century also needs to be explained in terms of its responses to the needs, and the exigencies, of technology.

In fact, a major, neglected theme in twentieth century science is that prior progress in the technological realm has come to play a critical role in formulating the subsequent research agenda for science. The natural trajectory of certain technological improvements has served to identify and to define the limits to further improvement, which, in turn, has served as a focusing device for subsequent scientific research.

Consider the aircraft industry. In this industry, improved performance continually brought the technology—the aircraft—to performance ceilings that could be pierced only by understanding some aspects of the physical world better. As a result, the introduction of the turbojet had a profound impact upon science as well as upon the aircraft industry, by progressively pushing against the limits of scientific frontiers and by identifying the specific directions in which this new knowledge had to be further enlarged before additional technological improvements could occur.

Thus, the turbojet first led to the creation of a new specialty, supersonic aerodynamics, “…only to give way,” according to one authority, “to aerothermodynamics as increasingly powerful turbojets pushed aircraft to speeds at which the generation of heat on the surface of the aircraft became a major factor in airflow behavior. Eventually, turbojet powered aircraft would reach speeds at which magnetothermodynamic considerations would become paramount: [that is to say] temperatures would become so great that air would dissociate into charged submolecular ions.” (Constant 1990, 240.) In this way, the greater speeds made possible by jet engines also required advancing the frontiers of scientific knowledge in order to accommodate the design requirements of high speed jet aircraft.

I suggest that a central feature of high technology industries is that this kind of sequence has become prominent. That is, technological progress serves to identify, in reasonably unambiguous ways, the directions in which scientific research needs to be conducted, and at the same time it holds out the prospect of a large financial return should the research prove to be successful.

The mechanisms at work may take a variety of forms. In the case of the jet engine, functioning at increasingly high speeds, the technology pointed to specific natural phenomena in a specific environment. In the telephone industry, on the other hand, transmission over longer distances, or the introduction of new modes of transmission, have been particularly fruitful mechanisms in the generation of basic research. For example, in order to improve overseas transmission by radiotelephone it was essential to develop an expanded appreciation for the ways in which electromagnetic radiation interacts with various atmospheric conditions. Indeed, some of the most fundamental of all scientific research projects of the twentieth century have been direct outgrowths of the attempt to improve the quality of transmission of sound by telephone. Dealing with various kinds of interference, distortion or attenuation of electromagnetic signals that transmit sound has profoundly enlarged our understanding of the universe.

Two fundamental scientific breakthroughs, one by Karl Jansky in the late 1920s and another more recently by Penzias and Wilson, both occurred as a result of attempts to improve the quality of telephone transmission. This involved, specifically, dealing with sources of noise. Jansky had been put to work to deal with the problems of radio static after the opening up of the overseas radiotelephone service. He was provided with a rotatable radio antenna with which to work. In 1932 he published a paper identifying three sources of noise: from local thunderstorms, from more distant thunderstorms, and a third source, which Jansky identified as “a steady hiss static, the origin of which is not known.” It was this “star noise,” as it was first called, that marked the birth of an entirely new science: radio astronomy, a discipline that was to prove one of the greatest sources of scientific advance of the twentieth century.

Jansky’s experience underlines one of the reasons why the attempt to distinguish between basic research and applied research is extremely difficult to carry out consistently. Fundamental breakthroughs often occur while dealing with very mundane or practical concerns. Attempting to draw that line on the basis of the motives of the person performing the research—whether there is a concern with acquiring useful information (applied) as opposed to a purely disinterested search for new knowledge (basic)—is, in my opinion, a hopeless quest. Whatever the ex ante intentions in undertaking research, the kind of knowledge actually acquired is highly unpredictable. This is in the nature of serious scientific research. Historically, some of the most fundamental scientific breakthroughs have come from people, like Jansky, who certainly thought that they were doing very applied research.

Bell Labs’ fundamental breakthrough in astrophysics was also directly connected to improving telephone transmission, especially through the use of communication satellites for such purposes. At very high frequencies, rain and other atmospheric conditions became major sources of interference in transmission. This source of signal loss was a continuing concern in the development of satellite communication. It led to a good deal of research at both the technological and basic science levels—e.g., the study of polarization phenomena (Dinn 1977, 236–242).

Arno Penzias and Robert Wilson first observed the cosmic background radiation, which is now taken as confirmation of the “big bang” theory of the formation of the universe, in 1964 while they were attempting to identify and measure the various sources of noise in their receiving system and in the atmosphere. They found that: “The radiation is distributed isotropically in space and its spectrum is that of a black body at a temperature of 3 degrees Kelvin.” (Fagen 1972, 87.) Although Penzias and Wilson did not know it at the time, the character of this background radiation was precisely what had been predicted earlier by cosmologists who had formulated the “big bang” theory. They subsequently received a Nobel Prize for this momentous finding.

There is, I am suggesting, a compelling internal logic to certain industries, e.g., the telephone system, that forcefully points the research enterprise in specific directions. Consider further some of the material needs of that system. The invention of the transistor and the discovery of the transistor effect were the results of a deliberate attempt to find a substitute for the vacuum tube in the telephone industry. The vacuum tube was unreliable and generated a great deal of heat. After the transistor had been invented, its actual production required standards of material purity that were quite without precedent for industrial purposes. Since transistor action was dependent on introducing a few foreign atoms into the semiconducting crystal, remarkably high standards of semiconductor purity had to be attained. A tolerance on the order of a single foreign atom for each 100,000,000 germanium atoms meant that the telephone system simply had to attain levels of purity that presupposed a good deal of fundamental research into the structure and behavior of materials, especially crystallography.
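
To get a rough sense of what that tolerance implies, the back-of-the-envelope sketch below converts the one-in-100,000,000 figure quoted above into impurity atoms per cubic centimeter of germanium. The density and molar mass used here are standard reference values for germanium, not figures taken from the article; the point is simply the order of magnitude.

```python
# Rough sketch: what "one foreign atom per 100,000,000 germanium atoms" means
# in absolute terms. Density and molar mass are standard reference values
# for germanium, not numbers from the article.

AVOGADRO = 6.022e23        # atoms per mole
GE_DENSITY = 5.32          # grams per cubic centimeter (approximate)
GE_MOLAR_MASS = 72.63      # grams per mole

ge_atoms_per_cm3 = GE_DENSITY / GE_MOLAR_MASS * AVOGADRO   # ~4.4e22
impurity_fraction = 1e-8                                   # 1 in 100,000,000
impurity_atoms_per_cm3 = ge_atoms_per_cm3 * impurity_fraction

print(f"Germanium atoms per cm^3:          {ge_atoms_per_cm3:.1e}")
print(f"Tolerated impurity atoms per cm^3: {impurity_atoms_per_cm3:.1e}")
```

Even at that purity, a cubic centimeter of germanium still contains on the order of 10^14 foreign atoms, a useful reminder of how demanding the underlying materials research had to be.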

The invention of the transistor in 1947 had an enormous impact on the direction of scientific research. Solid-state physics had attracted only a very small number of physicists before the arrival of the transistor. In fact, before World War II, it was a subject that was not even taught in most American universities. However, there was a huge redirection of scientific resources within a few years after the announcement of the transistor effect. In fact, within a matter of years, rather than decades, solid-state physics had become the largest subdiscipline of physics. The huge mobilization of scientific resources in this field, in universities as well as private industry, was clearly a response to the potentially high payoffs to such research that were signaled by the arrival of the transistor.

The growth of the telephone system also meant that equipment and components had to perform under extreme environmental conditions, from geosynchronous satellites to transatlantic cables. These extreme environmental conditions have one particularly important consequence: there are likely to be severe economic penalties for failing to establish very high standards of reliability. There are compelling reasons for the attainment and maintenance of high standards that are absent in, say, consumer electronics, not to mention a brick factory. The failure of a submarine cable, once placed on the ocean floor, involves extremely high repair and replacement costs in addition to a protracted loss of revenue. Similarly, communication satellites had to be remarkably reliable and strong simply to survive the act of being launched and placed into orbit. The instrumentation had to survive extremes of shock, vibration, temperature range, radiation, etc.

Thus, high standards of reliability are not a marginal consideration but the very essence of successful economic performance in this industry. This consideration had a great deal to do with the high priority that Bell Labs attached to materials research over a period of several decades. Important advances in polymer chemistry, for example, were achieved at Bell Labs in order to understand the morphology of polyethylene, after cable sheathing employing this material had failed prematurely on the floor of the Atlantic Ocean.

The importance of high standards of reliability has also been a basic underlying condition in the thrust of research in other specific directions.

The decision to undertake a basic research program in solid state physics, which culminated in the development of the transistor, was strongly influenced, as suggested earlier, by these (as well as other) sources of dissatisfaction. But the transistor suffered from reliability problems of its own in its early years. These problems emerged in the early 1950s as the transistor experienced a widening range of applications. The defects were eventually linked to certain surface phenomena. As a result, a major research thrust into the basic science of surface states was undertaken that eventually solved the reliability problems but, in doing so, also generated a great deal of new fundamental knowledge in surface physics.

The development of optical fibers is particularly apposite to our present concerns. Although their attractiveness as a new mode of transmission was increased by space and congestion constraints, their feasibility was rooted in another set of technological breakthroughs of the 1950s. It was the development of laser technology that made it possible to use optical fibers for transmission. This possibility, in turn, pointed to the field of optics, where advances in knowledge could now be expected to have high financial payoffs. As a result, optics as a field of scientific research has had a great resurgence in the last few decades. It was converted by changed expectations, based upon past and prospective technological innovations, from a relatively somnolent intellectual backwater into a burgeoning field of scientific research. The causes were not internal to the field of optics but were based upon a radically altered assessment of new technological possibilities—which, in turn, had their roots in the earlier technological breakthrough of the laser.

This discussion has implications, I believe, that are of fundamental importance to an understanding of the economic role of science in the twentieth century. Although the impact of new scientific knowledge upon industry is continually emphasized in public discussions, very little attention is devoted to causal forces flowing in the opposite direction. But modern high technology industries set in motion immensely powerful forces that stimulate and influence scientific research. They do this in several ways: by providing observations, or formulating problems, that could only have occurred in specific industrial contexts, such as the telephone or the aircraft industry; by providing new techniques of instrumentation that vastly enlarge the observational, measurement, and calculating capabilities of the scientist; and, most important of all, by raising the economic payoff to the performance of scientific research and therefore powerfully increasing the willingness of private industry to finance such research.

It should be understood that the remarkable accomplishments at Bell Labs in the twentieth century were by no means typical of other sectors of American industry—indeed, Bell Labs was unique in many respects—but many other American firms developed strong scientific capabilities of great economic value, an assertion reinforced by the earlier observation that there were somewhere around 12,000 industry laboratories in the US in 1992.

A fair generalization is that American firms learned how to exploit scientific knowledge and methodology, and to link these forces through organization and incentives, more successfully than did firms in other OECD countries.

The increasingly multidisciplinary nature of research (and innovation)

There is another feature of the scientific enterprise that demands attention because of its important implications for the future. The multidisciplinary nature of research in the realms of both science and technology, increasingly apparent in the second half of the twentieth century, will doubtless intensify in the next century.

History suggests that the crossing of disciplinary boundaries is not something that usefully emerges from some kind of deliberate plan, strategy, or committee meeting; rather, it is something that occurs, when it does occur, because of the peculiar logic of scientific progress. It has happened, historically, when certain problems emerged at the frontier of a particular discipline, such as cell biology, that required a better understanding of the role of certain processes that were the specialty of scientists in a different discipline, e.g., chemistry. The outcome, biochemistry, has thus been a natural outgrowth of the changing requirements of an expanding body of research knowledge. Similarly, geophysics emerged as an independent subdiscipline of geology when it became possible to apply methodologies that had first been developed in physics to the understanding of the structure and the dynamics of the earth, as well as, eventually, other planets. Here, as on other occasions, the introduction of new technologies of instrumentation has led to a beneficial crossing of certain disciplinary boundary lines. The established lines between physics and chemistry have been crossed on a number of occasions in the past for similar reasons.

The increasing importance of the ability to exploit the knowledge and the methodologies of more than one discipline has become apparent not only at the level of basic science but in applied sciences and engineering as well. In recent years, medical science has benefited immensely, not only from such “nearby” disciplines as biology and chemistry, but from nuclear physics (magnetic resonance imaging, radioimmunoassays), electronics, and materials science and engineering. In pharmaceuticals there have been remarkable advances deriving from such fields as biochemistry, molecular and cell biology, immunology, neurobiology, and scientific instrumentation. These advances are moving toward the possibility that new drugs, with specific properties, can be targeted and perhaps one day even designed, in contrast to the randomized, expensive, and exhaustive screening methods that have characterized pharmaceutical research in the past (Gambardella 1995). The new pattern of innovation is, by its very nature, highly multidisciplinary. Success requires close cooperation among an increasing number of specialists: chemists, biochemists, pharmacologists, computer scientists. What is most certain is that the biological sciences will play an increasingly pivotal role in drug discovery and development. This is also apparent in the emerging biotech industry, which is still in its infancy. This industry draws on many scientific disciplines, including cell biology, molecular biology, protein chemistry, and biochemistry.

This sort of close cooperation among specialists from different disciplines has already accounted for some of the most important breakthroughs of the last fifty years. The transistor was the product of cooperation among physicists, chemists, and metallurgists. The scientific breakthrough leading to the discovery of the structure of DNA was the work of chemists, biologists, biochemists, and crystallographers. More productive rice varieties, which have transformed the population-carrying capabilities of the Asian continent, were originally developed at the International Rice Research Institute in the Philippines, through the combined efforts of geneticists, botanists, biochemists, entomologists, and soil agronomists.

The increasing value of interdisciplinary research creates serious organizational problems for the future. Such research often runs counter to the traditional arrangements, training, intellectual priorities, and incentive structures of the scientific professions, particularly in the academic world, where tremendous emphasis is placed upon working within well-recognized disciplinary boundary lines. Department-based disciplines have played a crucial role in teaching and research, and are certainly not to be discarded casually. Historically, disciplines emerged because, within their boundaries, there was a set of problems that could be solved by some common conceptualization, analytical framework or methodology. Workers within a discipline spoke a common language; and, not least important, the discipline provided a basis for forming judgments about the quality of research. In this respect, commitment to a particular discipline provided some standards for quality control.

Although great (and justifiable) concern is currently being expressed over the future financial support of universities, organizational issues may also become increasingly worrisome as a rigid departmentalism comes to confront a research frontier requiring more and more frequent crossing of traditional disciplinary boundaries. Such problems, it is worth observing, are not likely to be nearly so serious in private industry, where disciplinary boundaries do not loom nearly so large, and where the highest priorities are problem-solving, improving the performance of existing technology, and, ultimately, generating higher profits, regardless of the disciplinary sources through which these goals can be attained.

The persistence of uncertainty

There is a final issue that needs to be addressed, and that is the persistence of uncertainty, not only in the realm of science, where it is universally acknowledged, but in the realm of technology as well. We are accustomed to expect a high degree of uncertainty and unanticipated developments in the world of scientific research. It is widely assumed, however, that uncertainties decline as one moves across the spectrum of activities from basic research to applied research to product design and development and, finally, to the commercialization of the new product in the market place.

It is, of course, true that some uncertainties are resolved once a new technological capability has been established. But even after its first acceptance in the market place, the questions change, and it is far from obvious that the new questions are any less complex than the old ones. The most fundamental of all questions is: to what social purposes will the new capability be put?

It appears that no one anticipated the invention of the Internet; rather, it simply “appeared” after a sufficient number of computers were in existence. As David Mowery observed in a fascinating article: “The Internet is the world’s largest computer network—a steadily growing collection of more than 100 million computers that communicate with one another using a shared set of standards and protocols. Together with the World Wide Web, a complementary software innovation that has increased the accessibility and utility of the network, the Internet stimulated a communications revolution that has changed the way individuals and institutions use computers in a wide variety of activities.” (Mowery and Simcoe 2002.)

Consider the laser, an innovation that is certainly one of the most powerful and versatile advances in technology in the twentieth century, and one that may still be moving along a trajectory of new applications. Its range of uses in the fifty years since it was invented is truly breathtaking. These include precision measurement, navigational instruments, and chemical research, where it has become a prime instrument. It is essential for the high quality reproduction of music in compact discs (CDs). It has become the instrument of choice in a range of surgical procedures, including extraordinarily delicate surgery upon the eye, where it has been used to repair detached retinas, and gynecological surgery, where it now provides a simpler and less painful method for the removal of certain tumors. It is extensively employed in gall bladder surgery. The pages of my manuscript were printed by an HP LaserJet printer. It is widely used throughout industry, including textiles, where it is employed to cut cloth to desired shapes, and metallurgy and composite materials, where it performs similar functions. But perhaps no single application of the laser has been more profound than its impact on telecommunications where, together with optical fibers, it is revolutionizing transmission. The best transatlantic telephone cable in 1966 could carry only 138 simultaneous conversations between Europe and North America. The first fiber optic cable, installed in 1988, could carry 40,000. The fiber optic cables installed in the early 1990s can carry nearly 1.5 million conversations. And yet it is reported that the patent lawyers at Bell Labs were initially unwilling even to apply for a patent on the laser, on the grounds that such an invention, dealing with the realm of optics, had no possible relevance to the telephone industry. In the words of Charles Townes, who subsequently won a Nobel Prize for his research on the laser, “Bell’s patent department at first refused to patent our amplifier or oscillator for optical frequencies because, it was explained, optical waves had never been of any importance to communications and hence the invention had little bearing on Bell System interests.” (Townes 1968, 701.)
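
A quick arithmetic sketch, using only the capacity figures quoted above, shows how steep that trajectory was. The 1993 date assigned to the early-1990s cables is an assumption for illustration; the growth factors are simply derived from the quoted figures.

```python
# Transatlantic telephone capacity, using the figures quoted in the text.
# The 1993 entry stands in for "cables installed in the early 1990s".
capacities = {
    1966: 138,          # best coaxial telephone cable
    1988: 40_000,       # first transatlantic fiber optic cable
    1993: 1_500_000,    # fiber optic cables of the early 1990s (approximate)
}

years = sorted(capacities)
for earlier, later in zip(years, years[1:]):
    factor = capacities[later] / capacities[earlier]
    print(f"{earlier} -> {later}: roughly {factor:,.0f}x more simultaneous conversations")

overall = capacities[years[-1]] / capacities[years[0]]
print(f"{years[0]} -> {years[-1]}: roughly {overall:,.0f}x overall")
```

The overall factor, roughly ten thousand in under three decades, helps explain why the Bell patent lawyers' initial dismissal of the laser looks so striking in retrospect.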

The transistor was, without doubt, one of the greatest achievements of the twentieth century—or, for that matter, any century. Consequently, one might expect to find the announcement of its invention, in December 1947, displayed prominently on the front page of the New York Times. Nothing of the sort. When it was finally mentioned in the Times, it appeared only as a small item buried deep in that newspaper’s inside pages, in a regular weekly column titled “News of Radio.” Hardly any future uses were mentioned beyond improved hearing aids.

This enumeration of failures to anticipate future uses and large markets for some of the most important inventions of the twentieth century—laser, computer, transistor—could be extended almost without limit. We could, if we liked, amuse ourselves indefinitely at the failure of earlier generations to see the obvious, as we see it today. But that would be, I believe, a mistaken conceit. I am not particularly optimistic that our ability to overcome the uncertainties connected with the uses of new technologies is likely to improve.

Similarly, a main reason for the modest future prospects that were being predicted for the computer in the late 1940s was that transistors had not yet been incorporated into the computers of the day. Introducing the transistor, and later integrated circuits, into computers was, of course, a momentous development that transformed the computer industry. Indeed, in one of the most extraordinary technological achievements of the twentieth century, the integrated circuit eventually became a computer, with the advent of the microprocessor in the early 1970s. The world would be a far different place today if computers were still operating with vacuum tubes.

Bibliography

Constant, Edward W. The Origins of the Turbojet Revolution. Baltimore: Johns Hopkins University Press, 1990, 240.

Dinn, Neil F. “Preparing for Future Satellite Systems,” Bell Laboratories Record, October 1977, 236–242.

Fagen, M. D., ed. Impact. Bell Telephone Laboratories, 1972, 87.

Gambardella, Alfonso. Science and Innovation in the US Pharmaceutical Industry. Cambridge University Press, 1995.

Mowery, David, and Timothy Simcoe. “Is the Internet a US Invention?” Research Policy 31 (December 2002): 1369–1387.

Townes, Charles. “Quantum Mechanics and Surprise in the Development of Technology.” Science, February 16, 1968, 701.

Whitehead, A. N. Science and the Modern World. Macmillan, 1925, 98.
