Article from the book Frontiers of Knowledge

The Trajectory of Digital Computing


No word has been overused when discussing computers as much as the word “revolution.” If one is to believe the daily press and television accounts, each new model of a chip, each new piece of software, each new advance in social networking, each new model of a portable phone or other portable device, will bring about revolutionary changes in our lives. A few weeks later the subject of those reports is strangely forgotten, having been replaced by some new development which, we are assured, this time is the real turning point.

Yet there is no question that the effect of computing technology on the daily lives of ordinary people has been revolutionary. A simple measure of the abilities of these machines, such as the amount of data they can store and retrieve from internal memory, reveals a rate of advance not matched by any other technology, ancient or modern. One need not resort to the specialized vocabulary of the computer engineer or programmer: the sheer number of computers and digital devices installed in homes and offices or carried by consumers worldwide shows a similar rate of growth, and it is not slowing down. An even more significant metric looks at what these machines do. Modern commercial air travel, tax collection, medical administration and research, military planning and operations—these and a host of other activities bear an indelible stamp of computer support, without which they would either look quite different or not be performed at all.

An attempt to chronicle the history of computing in the past few decades faces the difficulty of writing amidst this rapid evolution. A genuine history of computing must acknowledge its historical roots at the foundations of civilization—which has been defined in part by the ability of people to manipulate and store symbolic information. But a history must also chronicle the rapid advances in computing and its rapid spread into society since 1945. That is not easy to do while maintaining a historical perspective. This essay identifies and briefly describes the essential persons, machines, institutions, and concepts that make up the computer revolution as it is known today. It begins with the abacus, first not only alphabetically but chronologically as well, being one of the earliest computing instruments to appear. It carries events up to the twenty-first century, when the networking of personal computers became commonplace, and when computing power spread to portable and embedded miniature devices.

Digital devices continue to evolve as rapidly as ever. But the personal computer in some ways has reached a plateau. The physical features of these machines have stabilized: a keyboard (descended from the venerable typewriter of the 1890s); a rectangular box containing the electronic circuits and disk storage; above that a display terminal (descended from the venerable television screen of the late 1940s). The electronic circuits inside, though more capable every year, have also stabilized: for the last 35 years they have consisted of integrated circuits made of silicon and encased in black plastic packages, mounted on plastic boards. Portable or “laptop” computers collapse this configuration but are essentially the same. Engineers and customers alike agree that this physical design has many drawbacks—consider for example the injuries to the muscles of the hand caused by over-use of a keyboard designed a century ago. But the many attempts to place the equivalent power, versatility, and ease of use onto other platforms, especially the portable phone, have not yet succeeded.

The programs that these computers run—the “software”—are still evolving rapidly. The things that these computers are connected to—the libraries of data and world-wide communications networks—are also evolving rapidly. It is not possible to anticipate where all that will lead. In the interval between the writing and publication of this essay, it is possible that the nature of computing will be transformed so much as to render parts of this study obsolete. Silicon Valley engineers talk of events happening in “Internet time”: about six times faster than they happen elsewhere. Even after stripping away some of the advertising hyperbole, that observation seems to be true.

There are at least four places where one could argue the story of computing begins. The first is the obvious choice: in antiquity, where nascent civilizations developed aids to counting and figuring such as pebbles (Latin calculi, from which comes the modern term “calculate”), counting boards, and the abacus—all of which have survived into the twentieth century (Aspray 1990).

But these devices were not computers as we normally think of that term. To the citizen of the modern age, computing machinery implies a device or assembly of devices that takes over the drudgery of calculation and its sister activity, the storage and retrieval of data. Thus the second place to start the story: the 1890s, when Herman Hollerith developed the punched card and a system of machines that summed, counted, and sorted data coded into those cards for the US Census. The Hollerith system came along at a critical time in history: when power machinery, symbolized by the steam engine and by steam or water-powered factories, had transformed production. That linking of energy to production created a demand to control it—not only physical control but also the management of the data that industrialization brought with it. Hollerith’s tabulator (and the company he founded, which formed the basis for the IBM Corporation) was but one of many such responses: others included electric accounting machines, cash registers, mechanical adding machines, automatic switching and control mechanisms for railroads, telephone and telegraph exchanges, and information systems for international commodity and stock exchanges.

But, the modern reader protests, that does not sound like the right place to start either. The real revolution in computing seems to have something to do with electronics—if not the silicon chips that are ubiquitous today, then at least their immediate ancestors, the transistor and the vacuum tube. By that measure the computer age began in February 1946, when the US Army publicly unveiled the “ENIAC”—the “Electronic Numerical Integrator and Computer”—at a ceremony at the Moore School of Electrical Engineering in Philadelphia. With its 18,000 vacuum tubes, the ENIAC was touted as being able to calculate the trajectory of a shell fired from a cannon faster than the shell itself traveled. That was a well-chosen example, as such calculations were the reason the Army spent over a half-million dollars (equivalent to several million in current dollars) on an admittedly risky and unproven technique.

Another early machine that calculated with vacuum tubes was the British “Colossus,” of which several copies were built and installed at Bletchley Park in England during World War II, and used with great success to break German codes. These machines did not perform ordinary arithmetic as the ENIAC did, but they did carry out logical operations at high speeds, and at least some of them were in operation several years before the ENIAC’s dedication. Both the ENIAC and Colossus were preceded by an experimental device built at Iowa State University by a physics professor named John V. Atanasoff, assisted by Clifford Berry. This machine, too, calculated with vacuum tubes, but although its major components were shown to work by 1942, it was never able to achieve operational status (Burks and Burks 1988).

Once again, the reader objects: is it not critical that this technology not simply exist but also be prevalent on the desks and in the homes of ordinary people? After all, not many people—perhaps a few dozen at most—ever had a chance to use the ENIAC and exploit its extraordinary powers. The same was true of the Colossus computers, which were dismantled after the War ended. By that measure the “real” beginning of the computer revolution would not be in 1946 but in 1977, when two young men, Steve Jobs and Steve Wozniak, from an area now known as Silicon Valley, unveiled a computer called the “Apple II” to the world. The Apple II (as well as its immediate predecessor the “Altair” and its successor the IBM PC) brought computing out of a specialized niche of big businesses or the military and into the rest of the world.

One may continue this argument indefinitely. Young people today place the beginning of the computer revolution even later: when the Internet first allowed computers in one location to exchange data with computers elsewhere. The most famous of these networks was built by the United States Defense Department’s Advanced Research Projects Agency (ARPA), which had a network (ARPANET) underway beginning in 1969. But there were others, too, which linked personal and mini-computers. When these merged in the 1980s, the modern Internet was born (Abbate 1999).

Actually there are many places to begin this story. As this is being written, computing is going through yet another transformation, namely the merging of the personal computer and portable communications devices. As before, it is accompanied by descriptions in the popular press of its “revolutionary” impact. Obviously the telephone has a long and interesting history, but somehow that story does not seem relevant here. Only one thing is certain: we have not seen the last of this phenomenon. There will be more such developments in the future, all unpredictable, all touted as the “ultimate” flowering of the computer revolution, all relegating the events of previous revolutions to obscurity.

This narrative begins in the 1940s. The transition from mechanical to electronic computing was indeed significant, and that transition laid a foundation for phenomena such as personal computing that followed. More than that happened in those years: it was during the 1940s that the concept of “programming” (later extended to the concept of “software”) emerged as an activity separate from the design of computing machinery, yet critically important to that machinery’s use in doing what it was built to do. Finally, it was during this time, as a result of experience with the first experimental but operational large computing machines, that a basic functional design of computing machines emerged—an “architecture,” to use a later term. That architecture has persisted through successive waves of technological advances to the present day.

Therefore, in spite of all the qualifications one must put on it to make it acceptable to academic historians, one may argue that the ENIAC was the pivot of the computer revolution (Stern 1981). That machine, conceived and built at the University of Pennsylvania during World War II, inaugurated the “computer age.” As long as one understands that any selection is somewhat arbitrary, and as long as one gives proper credit to earlier developments, including the work of Babbage and Hollerith, as well as the invention of the adding machine, cash register, and other similar devices, no harm is done.


An ability to count and to represent quantities in some kind of symbolic notation was common to nearly all cultures, however “primitive” they may have appeared to modern scholars. Physical evidence of that ability is much more difficult to obtain, unless a durable medium such as clay tablets was used. We know that the concept of representing and manipulating quantitative information symbolically by pebbles, beads, knots on a string, or the like arose independently throughout the ancient world. For example, Spanish explorers to the New World found the Inca Indians using a sophisticated system of knotted strings called quipu, while similar systems of knotted strings are mentioned in the Bible, and at least one—the rosary—survives to the present day. A highly abstracted version of representation by beads evolved into the abacus, of which at least three different forms survive in modern China, Japan, and Russia. In the hands of a skilled operator an abacus is a powerful, compact, and versatile calculating tool. Other related aids to calculating were also in use in Western countries by the Middle Ages. These included counting boards with grids or patterns laid on them to facilitate addition (from this comes the modern phrase “over the counter” trading), and tokens used on these boards (these survive as gambling chips used in casinos).

It is important to recognize that these devices were used only by those whose position in government, the Church, or business required it. With that qualification one could say these were in “common” use, but not in the sense of being ubiquitous. This qualification applies to all computing machines. The adoption of such machines depends on how costly they are, of course, but also crucially on whether they meet the needs of people. As Western society industrialized and became more complex those needs increased, but it is worth noting that even with the steep drop in prices for computers and for Internet access, they have not achieved total penetration of the consumer market and probably never will.

Before moving on to calculating machinery it is worth noting one other aid to calculation that was in wide use and that survives in a vestigial form into the modern age. That is the printed table, which listed values of a mathematical function, for example. These can be traced back as far as the ancient Greeks, and they were extensively used by astronomers for their own purposes and, more importantly, for use by sailors on the open seas. Statistical tables, such as mortality rates, were developed for the insurance industry. Pocket calculators and “spreadsheet” computer programs allow one to compute these values on the spot, but tables still have their place in a few settings, and their continued use shows their intimate connection with one of the fundamental uses of modern electronic computers (Kidwell and Ceruzzi 1994).

Most of the above devices worked in tandem with the Hindu-Arabic system of notation, in which a symbol’s value depends not just on the symbol itself (e.g., 1, 2, 3…) but also on its place (with the all-important zero used as a place holder). This notation was vastly superior to additive notations like Roman numerals, and its adoption by Europeans in the late Middle Ages was a significant milestone on the road to modern calculation. When performing addition, if the sum of digits in one column was greater than nine, one had to “carry” a digit to the next column to the left. Mechanizing this process was a significant step from the aids to calculation mentioned above to automatic calculation. A sketch and a fragmentary description contained in a letter to Johannes Kepler indicate that Professor Wilhelm Schickard of the German town of Tuebingen built such a device in the early 1600s. No pieces of it are known to survive.

In 1642 the French philosopher and mathematician Blaise Pascal invented an adding machine that has the honor of being the oldest known to have survived. Digits were entered into the calculator by turning a set of wheels, one for each column. As a wheel passed through the value of “9,” a tooth on a gear advanced the adjacent wheel by one unit. Pascal took care to ensure that the extreme case, of adding a “1” to a sequence of “9s,” would not jam the mechanism. Pascal’s machine inspired a few others to build similar devices, but none was a commercial success. The reasons for that have become familiar: on the one hand, the machine was somewhat fragile and delicate and therefore expensive; on the other, the world in which Pascal lived was not one that perceived such machines to be a necessity of life.
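The ripple carry that Schickard and Pascal mechanized in gears can be sketched in a few lines of code. This is a modern illustration of the principle, of course, not a description of either machine:

```python
# Column-wise decimal addition with ripple carry: each column adds
# its two digits plus any carry from the column to its right, keeps
# one digit, and passes the overflow leftward.

def add_decimal(a, b):
    """Add two numbers given as lists of digits, least significant
    digit first, the way a column-wise machine would."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        da = a[i] if i < len(a) else 0
        db = b[i] if i < len(b) else 0
        total = da + db + carry
        result.append(total % 10)   # digit kept in this column
        carry = total // 10         # "tooth" advances the next wheel
    if carry:
        result.append(carry)
    return result

# The extreme case Pascal guarded against: 1 + 999 ripples a carry
# through every column.
print(add_decimal([1], [9, 9, 9]))  # [0, 0, 0, 1], i.e., 1000
```

Note how a single carry can propagate through every column; absorbing exactly that cascade without jamming was the delicate part of Pascal’s mechanism.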

About thirty years later the German philosopher and mathematician Gottfried Wilhelm Leibniz, satirized in Voltaire’s Candide and famous as the co-creator of the calculus, learned of Pascal’s invention and attempted to construct a calculator independently. He succeeded in building a machine that could not only add but also multiply, using a gear that engaged a variable number of teeth depending on where the operator had set a dial. His calculator did not work well, but the “stepped drum” became the basis for nearly all multiplying calculators until the late nineteenth century. One modern descendant, the Curta, was small enough to fit in a pocket and was produced and sold into the 1970s.

The onset of a more mercantile society with a growing middle class made conditions more favorable for commercial success. Around 1820, Charles Xavier Thomas, a pioneer in establishing the insurance industry in France, built and marketed his “Arithmometer,” which used the Leibniz stepped drum to perform multiplication. Sales were poor at first, but it became quite popular after 1870, selling about one hundred a year. By then industrialization was in full swing, and Thomas’s machine was joined by a number of rivals to meet the demand (Eames and Eames 1990).

These demands were met on both sides of the Atlantic. Two “adding machines” developed in the United States were especially significant. Neither was capable of multiplication, but their ability to do rapid addition, their ease of use, modest (though not low) cost, and rugged construction more than compensated for that deficiency. In the mid-1880s Dorr E. Felt designed and patented an adding machine that was operated by pressing a set of number keys, one bank of digits for each place in a number. What was more, the force of pressing the keys also powered the mechanism, so the operator did not have to pause and turn a crank, pull a lever, or do anything else. In the hands of a skilled operator, who neither took her fingers away from nor even looked at the keyboard, the Felt “Comptometer” could add extremely quickly and accurately. Selling for around US$125, Comptometers soon became a standard feature in the American office of the new century. At around the same time, William Seward Burroughs developed an adding machine that printed results on a strip of paper, instead of displaying the sum in a window. His invention was the beginning of the Burroughs Adding Machine Company, which made a successful transition to electronic computers in the 1950s and, after a merger with Sperry in the 1980s, has been known as the Unisys Corporation.

In Europe calculating machines also became a standard office product, although they took a different tack. The Swedish engineer W. Odhner invented a compact and rugged machine that could multiply as well as add, using a different sort of gear from Leibniz’s (numbers were set by levers rather than by pressing keys). That led to a successful product marketed under the Odhner, Brunsviga, and other names.

No discussion of computing machinery is complete without mention of Charles Babbage, the Englishman who many credit as the one who first proposed building an automatic, programmable computer—the famous “Analytical Engine.” He came to these ideas after designing and partially completing a more modest “Difference Engine,” which itself represented a great advance in the state of calculating technology of the day. Details of Babbage’s work will be given later, but he did in fact propose, beginning in the 1830s, a machine that had all the basic functional components of a modern computer: an arithmetic unit he called the “Mill,” a memory device he called the “Store,” a means of programming the machine by punched cards, and a means of either printing the results or punching answers onto new sets of cards. It was to have been built of metal and powered by a steam engine. Babbage spent many years attempting to bring this concept to fruition, but at his death in 1871 only fragments had been built.
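The principle behind the Difference Engine can be shown in a short sketch: a degree-n polynomial has constant n-th differences, so a table of its values can be produced by addition alone, with no multiplication. The example polynomial x² + x + 41 is one often associated with Babbage’s own demonstrations; the code is an illustration of the method, not of the Engine’s mechanism:

```python
# Method of finite differences: seed one register per difference
# order, then advance all registers by repeated addition, the way
# the Engine's columns of figure wheels would.

def difference_table(values, degree):
    """Seed registers: f(0) and its first `degree` forward differences."""
    rows = [list(values)]
    for _ in range(degree):
        prev = rows[-1]
        rows.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return [row[0] for row in rows]

def tabulate(registers, steps):
    """Emit one table value per step, updating each register by
    adding its neighbor to the right (the next-higher difference)."""
    regs = list(registers)
    table = []
    for _ in range(steps):
        table.append(regs[0])
        for i in range(len(regs) - 1):
            regs[i] += regs[i + 1]
    return table

# Tabulate f(x) = x^2 + x + 41 from three seed values.
seed = difference_table([x * x + x + 41 for x in range(3)], 2)
print(tabulate(seed, 6))  # [41, 43, 47, 53, 61, 71]
```

Every new entry costs only two additions here, which is why a machine of gears and wheels could print reliable mathematical tables: addition was the one operation it had to get right.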

How different the world might have looked had he completed his machine makes for entertaining speculation. Would we have had an Information Age powered by steam? But once again, as with Pascal and Leibniz, one must keep in mind that the world was not necessarily waiting for a computer to be invented. To have made a real impact, Babbage would not only have had to surmount the technical obstacles that dogged his Analytical Engine, he would also have had to exert considerable powers of salesmanship to convince people that his invention was of much use. Evidence for that view comes from the fact that the Swedes Georg Scheutz and his son Edvard completed a working Difference Engine in 1853, regarded as the world’s first successful printing calculator to be sold (Merzbach 1977). One of the machines was sold to the Dudley Observatory in Albany, New York, but the Scheutz Engine had little impact on science or commerce. The Information Age had to wait.

By the end of the nineteenth century the state of the art of calculating had stabilized. In the commercial world the simple Comptometer or Odhner had taken its place alongside other office equipment of similar scope, like the typewriter or telegraph ticker. In the world of science—still a small world in those years—there was some interest, but not enough to support the construction of more than the occasional special-purpose machine. Those sciences that required reckoning, such as astronomy, made do with printed tables and with human “computers” (that was their job title) who worked with pencil, paper, books of tables, and perhaps an adding machine. A similar situation prevailed in the engineering professions: books of tables, supplemented by an occasional machine designed to solve a special problem (e.g., the Tide Predictor, the Bush Differential Analyzer). After about 1900, the individual engineer might also rely on simple analog devices like the planimeter and above all the slide rule: an instrument of limited accuracy but versatile and sufficient for most of an engineer’s needs.

Herman Hollerith’s system of punched cards began as such a special-purpose system. In 1889 he responded to a call from the Superintendent of the US Census, who was finding it increasingly difficult to produce census reports in a timely fashion. The punched card and its accompanying method of coding data by patterns of holes on that card, and of sorting and counting totals and subtotals, fit the Bureau’s needs well. What happened next was due as much to Hollerith’s initiative as anything else. Having invented this system he was impatient with having a sole customer that used it only once a decade, and so embarked on a campaign to convince others of its utility. He founded a company, which in 1911 merged with two others to form the Computing-Tabulating-Recording Corporation. In 1924, upon the accession of Thomas Watson to the leadership position of C-T-R, the name was changed to International Business Machines. Watson was a salesman who understood that these devices had to meet customers’ needs in order to thrive. Meanwhile the Census Bureau, not wishing to rely excessively on one supplier, fostered the growth of a rival, Remington Rand, which became IBM’s chief rival in such equipment for the next half-century.

The ascendancy of punched card equipment looks in hindsight to have been foreordained by fate: its ability to sort, collate, and tabulate large amounts of data dovetailed perfectly with the growing demands for sales, marketing, and manufacturing data coming from a booming industrial economy. Fate of course was there, but one must credit Hollerith for his vision and Watson for his tireless promotion of the technology. When the US economy faltered in the 1930s, IBM machines remained as popular as ever: satisfying American and foreign government agencies’ appetites for statistical data. Watson, the quintessential salesman, furthermore promoted and generously funded ways of applying his company’s products to education and science. In return, some scientists found that IBM equipment, with minor modifications, could be put to use solving scientific problems. For astronomers like L. J. Comrie, punched card equipment became in effect a practical realization of Babbage’s failed dream. Other scientists, including the above-mentioned Atanasoff, were beginning to propose special-purpose calculators that could execute a sequence of operations, as the never-completed Babbage Analytical Engine was to do. These scientists did so against a background of IBM tabulators and mechanical calculators that came close to meeting the scientists’ needs without the trouble of developing a new type of machine (Eckert 1940).

Looking back on that era one sees a remarkable congruence between the designs for these programmable calculators and that of the never-completed Analytical Engine. But only Howard Aiken, a professor at Harvard University, knew of Charles Babbage beforehand, and even Aiken did not adopt Babbage’s design for his own computer at Harvard. Babbage was not entirely unknown in the 1930s, but most historical accounts of him described his work as a failure, his Engines as follies. That was hardly a story to inspire a younger generation of inventors. Those who succeeded where Babbage had failed, however, all shared his passion and single-minded dedication to realize in gears and wire the concept of automatic computing. They also had a good measure of Thomas Watson’s salesmanship in them.

First among these equals was Konrad Zuse, who while still an engineering student in Berlin in the mid-1930s sketched out an automatic machine because, he said, he was “too lazy” to do the calculations necessary for his studies. Laziness as well as necessity is a parent of invention. As the Nazis plunged the world into war, Zuse worked by day at an aircraft plant in Berlin; at night he built experimental machines in his parents’ apartment. His “Z3” was running in December 1941; it used surplus telephone relays for calculation and storage, and discarded movie film punched with holes for programming (Ceruzzi 1983).

In 1937 Howard Aiken, while working on a thesis in physics at Harvard, proposed building what eventually became known as the “Automatic Sequence Controlled Calculator.” His choice of words was deliberate and reflected his understanding that the punched card machine’s inability to perform sequences of operations limited its use for science. Aiken enlisted the help of IBM, which built the machine and moved it to Harvard. There, in the midst of World War II, in 1944, it was publicly dedicated. The ASCC thus has the distinction of being the first to bring the notion of automatic calculation to the public’s consciousness. (German spies also brought this news to Zuse, but by 1944 Zuse was well along with the construction of a machine the equal of Aiken’s.) The ASCC, or Harvard Mark I as it is usually called, used modified IBM equipment in its registers, but it could be programmed by a paper tape.

In 1937 George Stibitz, a research mathematician at Bell Telephone Laboratories in New York, built a primitive circuit that added numbers together using binary arithmetic—a number system highly unfriendly to human beings but well-suited to electrical devices. Two years later he was able to persuade his employer to build a sophisticated calculator out of relays that worked with so-called “complex” numbers, which arose frequently in the analysis of telephone circuits. The Complex Number Computer was not programmable, but during World War II it led to other models built at Bell Labs that were. These culminated in several large, general-purpose relay computers. They had the ability not only to execute any sequence of arithmetic operations but also to modify their course of action based on the results of a previous calculation. This latter feature, along with electronic speeds (discussed next), is usually considered to be a crucial distinction between what we know today as “computers” and their less-capable ancestors the “calculators.” (In 1943 Stibitz was the first to use the word “digital” to describe machines that calculate with discrete numbers.)
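Why is binary “well-suited to electrical devices”? Because a one-bit adder reduces to on/off logic that a switching element such as a relay can realize directly. The sketch below illustrates the principle only; it is not a description of Stibitz’s actual circuit:

```python
# A one-bit "full adder" built from logical operations, chained
# bit by bit to add numbers of any width.

def full_adder(a, b, carry_in):
    sum_bit = a ^ b ^ carry_in                  # XOR of the three inputs
    carry_out = (a & b) | (carry_in & (a ^ b))  # carry if two or more are 1
    return sum_bit, carry_out

def add_binary(x, y, width=8):
    """Add two non-negative integers bit by bit, least significant
    bit first, rippling the carry leftward as in decimal addition."""
    result, carry = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(add_binary(0b1011, 0b0110))  # 17, i.e., 0b10001
```

The same carry idea as in a decimal machine survives, but each column now needs only a handful of two-state switches rather than a ten-position wheel.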

Rounding out this survey of machines was the Differential Analyzer, built by MIT Professor Vannevar Bush in the mid-1930s. This machine did not calculate “digitally,” to use the modern phrase, but worked on a principle similar to the “analog” watt-hour meter found at a typical home. In other respects the Bush Analyzer was similar to the other machines discussed above. Like the other pioneers, Bush had a specific problem to solve: analyzing networks of alternating-current power generators and transmission lines. The Differential Analyzer was a complex assembly of calculating units that could be reconfigured to solve a range of problems. The demands of World War II led to a number of these machines being built and applied to other, more urgent problems. One, installed at the Moore School of Electrical Engineering in Philadelphia, was an inspiration for the ENIAC.

All of these machines used mechanical gears, wheels, levers, or relays for their computing elements. Relays are electrical devices, but they switch currents mechanically, and so their speed of operation is fundamentally of the same order as that of purely mechanical devices. It was recognized as early as 1919 that one could design a circuit out of vacuum tubes that could switch much faster, the switching being done inside the tube by a stream of electrons with negligible mass. But tubes were prone to burning out, and operating them required a lot of power, which in turn had to be removed as excess heat. There was little incentive to build calculating machines out of tubes unless their advantage in speed overcame those drawbacks.

In the mid-1930s John V. Atanasoff, a physics professor at Iowa State University, recognized the advantages of tube circuits for the solution of systems of linear equations. This type of problem is found in nearly every branch of physics, and its solution requires carrying out large numbers of ordinary arithmetic operations plus the storage of intermediate results. With a modest university grant Atanasoff began building circuits in 1939, and by 1942 he had a prototype that worked except for intermittent failures in its intermediate storage unit. At that point Atanasoff moved to Washington, D.C. to work on other wartime projects. He never finished his computer. At the same time in Germany, a colleague of Zuse’s named Helmut Schreyer developed tube circuits that he proposed as a substitute for the relays Zuse was then using. His proposal formed the basis of his doctoral dissertation, but aside from a few breadboard models little progress was made.
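The kind of computation Atanasoff’s machine targeted—eliminating unknowns one at a time while storing intermediate rows—is, in modern terms, Gaussian elimination. A minimal floating-point sketch follows; the ABC itself worked quite differently in detail (fixed-point arithmetic, rows stored on rotating capacitor drums):

```python
# Gaussian elimination with back-substitution. No pivoting, so this
# sketch assumes the diagonal entries stay nonzero.

def solve_linear(a, b):
    """Solve A x = b, where `a` is a list of rows and `b` the
    right-hand side, by eliminating one unknown at a time."""
    n = len(b)
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]  # augmented matrix
    for col in range(n):                             # forward elimination
        for row in range(col + 1, n):
            factor = m[row][col] / m[col][col]
            for k in range(col, n + 1):
                m[row][k] -= factor * m[col][k]      # intermediate row, stored
    x = [0.0] * n                                    # back-substitution
    for row in range(n - 1, -1, -1):
        s = sum(m[row][k] * x[k] for k in range(row + 1, n))
        x[row] = (m[row][n] - s) / m[row][row]
    return x

# 2x + y = 5,  x + 3y = 10  ->  x = 1, y = 3
print(solve_linear([[2, 1], [1, 3]], [5, 10]))  # [1.0, 3.0]
```

Even this tiny example shows why Atanasoff needed both fast arithmetic and a place to hold partially reduced rows between passes: the operation count grows rapidly with the number of unknowns.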

The first major, successful application of vacuum tubes to computing came in England, where a team of codebreakers, working in utmost secrecy, developed a machine to assist with the decoding of intercepted German military radio traffic. Here was a clear case where electronic speeds were needed: not only were there many combinations of “keys” to consider, but the value of an intercepted message diminishes rapidly with time, often becoming utterly worthless in a few days. The first so-called “Colossus” was completed by 1943 (about the time the ENIAC was begun), and by war’s end there were ten in operation. Details of the Colossus remain secret, even after 65 years. But it has been revealed that although these machines did not perform arithmetic as a calculator did, they could and did perform logical operations on symbolic information, which is the heart of any electronic processing circuit today.

The ENIAC, built at the University of Pennsylvania and unveiled to the public in February 1946, belongs more to the tradition of the machines just described than to the general purpose electronic computers that followed. It was conceived, proposed, and built to solve a specific problem—the calculation of firing tables for the Army. Its architecture reflected what was required for that problem, and it was an architecture that no subsequent computers imitated. Only one was built. And though the end of the war reduced the urgency to compute firing tables, military work dominated the ENIAC’s schedule throughout its long lifetime (it was shut down in 1955). In the 1940s computing was advancing on a number of fronts. The examples mentioned above were the most prominent, but behind them were a host of other smaller yet also significant projects.

The metaphor of linear progress (i.e., using the term “milestone”) is inappropriate. Advances in computing in the 1940s were more like an army advancing across broken terrain. The ENIAC, by virtue of its dramatic increase in arithmetic speeds, pushed the “calculating” function of computing machines way ahead of the other functions of computers, such as the storage of data or the output of results. These now had to scurry to catch up. Of those other functions, none appeared as a greater hindrance than the one of supplying the processor with instructions. John Mauchly said it succinctly: “Calculations can be performed at high speed only if instructions are supplied at high speed.” So while it was being built, the ENIAC revealed to its creators the need for internal, electronic storage of instructions. Every machine has “software”: a set of procedures by which it is properly used. Before electronics, the speeds of machinery were commensurate with those of the human beings who operated it. Only with the electronic computer does this bifurcation between human and machine speeds appear, and that is the truly “revolutionary” nature of the digital age. The ENIAC, by virtue of its high arithmetic speeds, brought programming to the fore. (It is no coincidence that the term “to program” a computer came from the ENIAC team.)

The ENIAC is thus in the ironic position of being a pivot of history because of its shortcomings as well as its capabilities. It was not programmed but laboriously “set up” by plugging wires, in effect rewiring the machine for each new job. That meant that a problem that took minutes to solve might require several days to set up. By contrast, the ENIAC’s electromechanical cousins, like the Harvard Mark I, might be programmed in a few hours but take days to run through the equations.

Even as the ENIAC was taking shape in the early 1940s its designers were thinking about what the machine’s successor would look like. The ENIAC team was in hindsight perfectly suited to the task: it included people with skills in electrical engineering, mathematics, and logic. Out of their discussions came a notion of designing a computer with a dedicated memory unit, one that stored data but did not necessarily perform arithmetic or other operations on its contents. Instructions as well as data would be stored in this device, each capable of being retrieved or stored at high speeds. That requirement followed from the practical need for speed, as Mauchly stated above, as well as the engineering desire to have the memory unit kept simple without the extra complication of partitioning it and allocating space for one or the other type of data.

From that simple notion came much of the power of computing that followed. It has since become associated with John von Neumann, who joined the ENIAC team and who in 1945 wrote a report about the ENIAC’s successor, the EDVAC, in which the notion is explained. But clearly it was a collaborative effort, with the ENIAC then under construction as a backdrop.

All the advantages of this design would be for naught if one could not find a reliable, cheap, and fast memory device of sufficient capacity. Eckert favored using tubes of mercury that circulated acoustic pulses; von Neumann hoped for a special vacuum tube. The first true stored-program computers to operate used either the mercury tubes or a modified television tube that stored data as spots of electrical charge (Randell 1975). These methods offered high speed but were limited in capacity and were expensive. Many other designers opted to use a much slower, but more reliable, revolving magnetic drum. Project Whirlwind, at MIT, broke through this barrier when in the early 1950s its team developed a way of storing data on tiny magnetized “cores”—doughnut shaped pieces of magnetic material (Redmond and Smith 1980).

Generations: 1950-1970

Eckert and Mauchly are remembered for more than their contributions to computer design. It was they, almost alone in the early years, who sought commercial applications of their invention, rather than confining it to scientific, military, or very large industrial uses. The British were the first to develop a computer for commercial use: the LEO, a commercial version of the EDSAC computer built for the catering company J. Lyons & Company, Ltd., and in use by 1951. But as with Babbage’s inventions of the previous century, the British were unable to follow through on their remarkable innovation (Bird 1994). In the United States, Eckert and Mauchly faced similar skepticism when they proposed building computers for commercial use, but they eventually succeeded, though they lost their independence in the process. Given the engineering difficulties of getting this equipment to operate reliably, the skepticism was justified. Nevertheless, by the mid-1950s Eckert and Mauchly were able to offer a large commercial computer called the UNIVAC, and it was well received by the approximately twenty customers who acquired one.

Other companies, large and small, entered the computer business in the 1950s, but by the end of the decade IBM had taken a commanding lead. That was due mainly to its superior sales force, which ensured that customers were getting useful results out of their expensive investment in electronic equipment. IBM offered a separate line of electronic computers for business and scientific customers, as well as a successful line of smaller, inexpensive computers, like the 1401. By 1960 the transistor, invented in the 1940s, was reliable enough to replace the fragile vacuum tubes of an earlier day. Computer memory now consisted of a hierarchy of magnetic cores, then slower drums or disks, and finally high-capacity magnetic tape. Entering data and programs into these “mainframes” was still a matter of punching cards, thus ensuring continuity with the Hollerith equipment that was IBM’s foundation.

In 1964 IBM unified its product line with its “System/360,” which not only covered the full circle of science and business applications (hence the name), but which was also offered as a family of ever-larger computers, each promised to run the software developed for those below it. This was a dramatic step that transformed the industry again, as the UNIVAC had a decade earlier. It was a recognition that “software,” which began as almost an afterthought in the crush of hardware design, was increasingly the driving engine of advances in computing.

Following IBM in the commercial market were the “Seven Dwarfs”: Burroughs, UNIVAC, National Cash Register, Honeywell, General Electric, Control Data Corporation, and RCA. England, where the first practical stored-program computers operated in the late 1940s, also developed commercial products, as did France. Konrad Zuse, whose “Z3” operated in 1941, also founded a company—perhaps the world’s first devoted to making and selling computers. But with only minor exceptions, European sales never approached those of US firms. The Soviets, although competitive with the US in space exploration, could not do the same in computers. They had to content themselves with making copies of the IBM System/360, which at least gave them the advantage of all the software developed by others. Why the USSR lagged so far behind is a mystery, given its technical and especially mathematical excellence. Perhaps Soviet planners saw the computer as a double-edged sword, one that could facilitate State planning but also made possible decentralized sharing of information. Certainly the absence of a vigorous free-market economy, which drove the technical advances at UNIVAC and IBM, was a factor. In any event, free-market forces in the US were augmented by large amounts of money supplied by the Defense Department, which supported computing for so-called “command-and-control” operations as well as for logistics and on-board missile guidance and navigation.

The minicomputer and the chip

If computing technology had stood still in the mid-1960s, one would still speak of a “computer revolution,” so great would its impact on society have been. But technology did not stand still; it progressed at ever-greater rates. It took ten years for the transistor to come out of the laboratory and into practical commercial use in computers. That had an effect on the large mainframe systems already mentioned, but the transistor had an even bigger effect on smaller systems. Beginning around 1965, several new products appeared that offered high processing speeds, ruggedness, small size, and a low price that opened entirely new markets. The “PDP-8,” announced that year by a new company called Digital Equipment Corporation, inaugurated this class of “minicomputers.” A concentration of minicomputer firms emerged in the Boston suburbs. Both in people and in technology, the minicomputer industry was a direct descendant of the Defense Department-funded Project Whirlwind at MIT (Ceruzzi 1998).

As computer designers began using transistors, they had to confront another technical problem, which in earlier years had been masked by the fragility of vacuum tubes. That was the difficulty of assembling, wiring, and testing circuits with thousands of discrete components: transistors, resistors, and capacitors. Among the many proposed solutions to this interconnection problem were those from Jack Kilby of Texas Instruments and Robert Noyce of Fairchild Semiconductor, who each filed for patents in 1959. Their invention came to be known as the “integrated circuit.” Drawing on the base of knowledge built up on silicon transistors, these two companies were able to bring this invention into commercial use quickly: by the end of the 1960s the silicon chip had become the principal device in computer processors and was beginning to replace memory cores as well.

Besides co-inventing the integrated circuit, Noyce did something else that would shape the direction of computing. In 1968 he left Fairchild and co-founded a new company, called Intel, devoted to making memory chips as a replacement for magnetic cores. The Santa Clara Valley, on the peninsula south of San Francisco, was already a center for microelectronics. But Noyce’s founding of Intel raised that activity to a feverish pitch. In 1971 a journalist dubbed the region “Silicon Valley”: a name that implies not just the computer engineering that goes on there but also the free-wheeling, entrepreneurial culture that drives it (Ceruzzi 1998).

By the mid-1970s IBM’s dominance of computing worldwide was under assault from three directions. From Silicon Valley and the Boston suburbs came waves of small but increasingly capable systems. From the US Justice Department came an antitrust suit, filed in 1969, charging IBM with unfairly dominating the industry. From computer scientists doing software research came the notion of interactive use of computers by a procedure known as “time sharing,” which gave a number of users the illusion that the big, expensive computer was their own personal machine. Time sharing offered another avenue to get computing power into the hands of new groups of users, but the promise of a cheap “computer utility,” analogous to the electric power grid that supplied power to one’s home, did not materialize at that time.

An important component of this movement toward interactive computing was the development in 1964 of the BASIC programming language at Dartmouth College in New Hampshire. There, students from liberal arts as well as science and engineering backgrounds found the computer far more accessible than did their counterparts at other colleges, who had to submit their programs as decks of punched cards, coded in less friendly languages, and wait for the computer to come around to their place in the queue.

The personal computer

These assaults on the mainframe method of computing converged in 1975, when an obscure company from New Mexico offered the “Altair”—billed as the world’s first computer kit and selling for less than $400. This kit was just barely a “computer,” and one had to add a lot more equipment to get a practical system (Kidwell and Ceruzzi 1994). But the Altair’s announcement touched off an explosion of creative energy that by 1977 had produced systems that could do useful work. These systems used advanced silicon chips both for processing and memory; a floppy disk (invented at IBM) for mass storage; and the BASIC programming language to allow users to write their own applications software. This version of BASIC was written by a small group led by Bill Gates, who dropped out of Harvard and moved to New Mexico to develop software for the Altair. The net result was to topple IBM’s dominance of the computer industry. None of the giants doing battle with IBM did very well in the following decade either. Even Digital Equipment Corporation, in many ways the parent of the personal computer, faced near bankruptcy in the early 1990s.

The personal computer brought the cost of computing way down, but machines like the Altair were not suitable for anyone not well versed in digital electronics and binary arithmetic. By 1977 several products appeared on the market that claimed to be as easy to install and use as any household appliance. The most influential of them was the Apple II. Apple’s founders, Steve Jobs and Steve Wozniak, were the Silicon Valley counterpart to Eckert and Mauchly: one a first-rate engineer, the other a visionary who saw the potential of the computer if made accessible to a mass market (Rose 1989). In 1979 a program called “VisiCalc” appeared for the Apple II: it manipulated rows and columns of figures known to accountants as a “spreadsheet,” only much faster and more easily than anyone had imagined possible. A person owning VisiCalc and an Apple II could now do things that even a large mainframe could not do easily. Finally, after decades of promise, software—the programs that get a computer to do what one wants it to do—came to the fore where it really belonged. A decade later it would be software companies, like Bill Gates’ Microsoft, that would dominate the news about computing’s advances.

Although it had a reputation as a slow-moving, bloated bureaucracy, IBM was quick to respond to Apple’s challenge, and brought out its “PC” in 1981. In a radical departure for IBM, but typical of minicomputers and other personal computers, the PC had an open architecture that encouraged other companies to supply software, peripheral equipment, and plug-in circuit cards. The IBM PC was more successful in the marketplace than anyone had imagined. The IBM name gave the machine respectability. It used an advanced processor from Intel that allowed it to access far more memory than its competitors. The operating system was supplied by Microsoft. A very capable spreadsheet program, Lotus 1-2-3, was offered for the PC and its compatible machines.

Apple answered IBM in 1984 with its “Macintosh,” which brought advanced concepts of the so-called “user interface” out of the laboratories and into the popular consciousness. The metaphor of treating files on a screen as a series of overlapping windows, with the user accessing them by a pointer called a “mouse,” had been pioneered in military-sponsored labs in the 1960s and further developed in the early 1970s by a brilliant team of researchers at the Silicon Valley laboratory of the Xerox Corporation. But it remained for Apple to make that metaphor a commercial success; Microsoft followed with its own “Windows” operating system, introduced around the same time as the Macintosh but not a market success until 1990. For the next decade the personal computer field continued this battle between the Apple architecture and the one pioneered by IBM, which used Intel processors and Microsoft system software.

The beginnings of networking

During the 1980s personal computers brought the topic of computing into the popular consciousness. Many individuals used them at work, and a few had them at home as well. The technology, though still somewhat baffling, was no longer mysterious. While personal computers dominated the popular press, the venerable mainframe computers continued to dominate the industry in terms of the dollar value of installed equipment and software. Mainframes could not compete with PC programs like spreadsheets and word processors, but any application that required handling large amounts of data still required a mainframe. Beginning in the 1970s, these computers began to move away from punched cards and into interactive operation, using keyboards and terminals that superficially resembled a personal computer. Large, on-line database systems became common and gradually began to transform business and government activities in the industrialized world. Some of the more visible of these applications included airline reservations systems, customer information and billing systems for utilities and insurance companies, and computerized inventory and stocking programs for large retail chains. The combination of on-line database and billing systems, toll-free telephone numbers, and credit card verification and billing over the telephone transformed the once-humble mail-order branch of retailing into a giant force in the American economy.

All of these activities required large and expensive mainframe computers, with software custom written at great expense for each customer. One might be tempted to hook up an array of cheap personal computers running inexpensive software packages instead, but this was not feasible. Hitching another team of horses to a wagon might allow one to pull more weight, but the wagon will not go faster. And even that has its limits, as it becomes increasingly difficult for the teamster to get the horses all to pull in the same direction. The problem with computing was similar, and was expressed informally as “Grosch’s Law”: for a given amount of money, one gets more work out of one big computer than out of two smaller ones (Grosch 1991).
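Grosch’s Law is usually given a formal statement as well: computing power grows roughly as the square of a machine’s cost. (The essay quotes only the informal version; the quadratic form, the constant, and the cost units in the sketch below are illustrative assumptions.) A few lines of Python make the economics concrete:

```python
# Grosch's Law in its usual quadratic form: power = k * cost**2.
# The constant k and the cost units are arbitrary, for illustration only.

def grosch_power(cost, k=1.0):
    """Computing power delivered by a single machine of a given cost."""
    return k * cost ** 2

# One big machine costing 2 units versus two machines costing 1 unit each:
one_big = grosch_power(2.0)        # 4 units of work
two_small = 2 * grosch_power(1.0)  # only 2 units of work
assert one_big > two_small
```

Under this relation, splitting a fixed budget across several small machines always buys less total work than spending it on one big machine, which is why the mainframe dominated until networking changed the terms of the comparison.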

But that would change. At the Xerox Palo Alto Research Center in 1973, where so many advances in the user interface were made, a method of networking was invented that finally overturned this law. Its inventors called it “Ethernet,” after the “ether” that nineteenth-century physicists believed carried light. Ethernet made it practical to link smaller computers in an office or building to one another, thereby sharing mass storage and laser printers (another Xerox invention) and allowing computer users to send electronic mail to one another. At the same time as Ethernet was making local networking practical, an effort funded by the Defense Department’s Advanced Research Projects Agency (ARPA) was doing the same for linking computers that were geographically dispersed. ARPA was concerned with maintaining secure military communications in the event of war, when sections of a network might be destroyed. Early military networks descended from Project Whirlwind had central command centers, and as such were vulnerable to an attack on the network’s central control. These centers were housed in windowless, reinforced concrete structures, but if they were damaged the network was inoperable (Abbate 1999).

With funding from ARPA, a group of researchers developed an alternative, in which data was broken up into “packets,” each given the address of the computer to receive it, and sent out over a network. If one or more computers on the network were inoperable, the system would find an alternate route. The computer at the receiving end would re-assemble the packets into a faithful copy of the original transmission. By 1971 “ARPANET” consisted of 15 nodes across the country. It grew rapidly for the rest of that decade. Its original intent was to send large data sets or programs from one node to another, but soon after the network came into existence people began using it to send brief notes to one another. At first this was an awkward process, but in 1971 it was transformed by Ray Tomlinson, an engineer at the Cambridge, Massachusetts firm Bolt Beranek and Newman. Tomlinson hit on the simple notion of separating the name of a message’s recipient from the name of that person’s computer with an “@” sign—one of the few non-alphabetic symbols available on the Teletype console that ARPANET used at the time. Thus was modern e-mail conceived, and with it, the symbol of the networked age.
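The core of the packet idea is simple enough to sketch in a few lines of modern Python: a message is split into addressed, numbered packets, the packets may arrive in any order (having taken different routes), and the receiver reassembles them by sequence number. This is a toy illustration only, not the actual ARPANET software; the function names and packet format are invented.

```python
import random

def to_packets(message, dest, size=4):
    """Split a message into numbered packets carrying a destination address."""
    return [{"dest": dest, "seq": i, "data": message[i:i + size]}
            for i in range(0, len(message), size)]

def reassemble(packets):
    """Rebuild the original message, whatever order the packets arrived in."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = to_packets("LOGIN REQUEST", dest="remote-host")
random.shuffle(packets)   # simulate packets taking different routes
assert reassemble(packets) == "LOGIN REQUEST"
```

Because each packet carries its own address and sequence number, no central switching office is needed: any surviving path through the network suffices, which was precisely the resilience ARPA was after.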

The pressure to use ARPANET for general-purpose e-mail and other non-military uses was so great that it was split up. One part remained under military control. The other part was turned over to the US-funded, civilian National Science Foundation, which sponsored research not only to expand this network but also to allow interconnection among different types of networks (for example, networks that used radio instead of wires). Researchers began calling the result an “internet,” to reflect its heterogeneous nature. In 1983 the networks adopted a set of standards for data transmission, called “Transmission Control Protocol/Internet Protocol” (TCP/IP), which made such interconnection possible. These protocols are still in use today and are the basis for the modern Internet (Aspray and Ceruzzi 2008).

These local and remote networking schemes fit well with other developments going on in computer hardware and software. A new type of computer emerged, called a “workstation,” which unlike the personal computer was better suited for networking. Another critical distinction was that they used an operating system called “UNIX,” which though difficult for consumers was well-suited to networking and other advanced programming. UNIX was developed at Bell Laboratories, the research arm of the US government-regulated telephone monopoly AT&T. Groups of workstations, linked locally by Ethernet to one another, and by the Internet to similar clusters world-wide, finally offered a real alternative to the large mainframe installation for many applications.

The Internet Age

The National Science Foundation, an agency of the US government, could not allow commercial use of the Internet that it controlled. It could, however, offer the use of the Internet protocols to anyone who wished to use them at little or no cost, in contrast to the proprietary networking protocols offered by computer companies like IBM. As Internet use grew, the NSF came under pressure to turn the network over to commercial firms to manage. A law passed by the US Congress in 1992 effectively ended the prohibition against commercial use, and one could say that with the passage of that law, the modern Internet Age began. That is not the whole story, however, as the US government retained control over the addressing scheme of the Internet—e.g. the suffixes “.com,” “.edu,” and so on, which tell computers where an electronic message is to be sent. By the turn of the twenty-first century, a number of countries had asked that this control be turned over to the United Nations, but so far the US has resisted. The Internet is truly a resource offered freely to all countries of the world, but its master registry of domain names is managed by an American private company whose authority derives from the US Department of Commerce.

This political activity was complemented by dramatic advances in computer technology, which further led to the rapid spread of the Internet. By 1990 the expensive UNIX workstations had given way to personal computers that used advanced processors, especially a processor called the “Pentium,” supplied by Intel. On the software side, new versions of the Microsoft Windows operating system came with the Internet protocols and other networking software installed. This combination gave PCs the equivalent power of the workstation. UNIX is rarely found on the PC, although the more powerful servers and so-called “routers” that perform the basic switching for the Internet continue to use it. A variant of UNIX called “Linux,” developed in 1991 by Linus Torvalds in Finland, was offered as a free or low-cost alternative to the Microsoft Windows system. It and related software gained a small but significant market share. These came to be called “open source” software, defined as “free” but not without restrictions (Williams 2002).

While this activity was going on at government and university laboratories, personal computer users were independently discovering the benefits of networking. The first personal computers like the Apple II did not have much ability to be networked, but resourceful hobbyists developed ingenious ways to communicate anyway. They used a device called a “modem” (modulator-demodulator) to transmit computer data slowly as audio tones over ordinary telephone lines. In this they were helped by a regulatory ruling that data sent over a telephone line was to be treated no differently than a voice call. Local calls were effectively free in the United States, but long-distance calls were expensive. Personal computer enthusiasts worked out ways of gathering messages locally, and then sending them across the country to one another at night, when rates were lower (the result was called “FidoNet,” named after a dog that “fetched” data). Commercial companies arose to serve this market as well; they rented local telephone numbers in most metropolitan areas and charged users a fee for connecting to them. One of the most influential of these was called “The Source,” founded in 1979; after some financial difficulties it was reorganized and became the basis for America Online, the most popular personal networking service from the late 1980s through the 1990s.

These personal and commercial systems are significant because they introduced a social dimension to networking. ARPANET was a military network. Its descendants frowned on frivolous or commercial use. But the personal networks, like the home telephone over which their messages ran, were used for chats, freewheeling discussions, news, and commercial services right from the start. One of the commercial networks, Prodigy, also incorporated color graphics—another staple of today’s Internet. The histories of the Internet that concentrate on ARPANET are correct: ARPANET was the technical ancestor of the Internet, and the Internet protocols emerged from ARPA research. But a full history of the Internet must include the social and cultural dimension as well, and that emerged from Prodigy, AOL, and the community of hobbyists.

By the late 1980s it was clear that computer networks were desirable for both the home and the office. But the “Internet,” the network that was being built with National Science Foundation support, was only one of many possible contenders. Business reports from those years were championing a completely different sort of network, namely the expansion of cable television into a host of new channels—up to 500, according to one popular prediction. The reconfigured television would also allow some degree of interactivity, but it would not be through a general-purpose personal computer. This concept was a natural outgrowth of the marketing aims of the television and entertainment industry. Among scientists and computer professionals, networking was expected to come in the form of a well-structured set of protocols called “Open Systems Interconnection” (OSI), which would replace the more freewheeling Internet. None of this happened, largely because the Internet, unlike the competing schemes, was designed to give disparate networks access, and it was not tied to a particular government-regulated monopoly, private corporation, or industry. By the mid-1990s private networks like AOL had established connections to the Internet, and the OSI protocols fell into disuse. Ironically, it was precisely because the Internet was available for free, and designed with no specific commercial uses in mind, that it was able to become the basis for so much commercial activity once it was released from US government control after 1993 (Aspray and Ceruzzi 2008).

In the summer of 1991, researchers at the European particle physics laboratory CERN released a program called the World Wide Web. It was a set of protocols that ran on top of the Internet protocols and allowed very flexible, general-purpose access to material stored on the Internet in a variety of formats. As with the Internet itself, it was this feature of access across formats, machines, operating systems, and standards that allowed the Web to become popular so rapidly. Today most consumers consider the Web and the Internet to be synonymous; it is more accurate to say that the latter was the foundation for the former. The primary author of the Web software was Tim Berners-Lee, who was working at CERN at the time. He recalled that his inspiration for developing the software came from observing physicists from all over the world meeting together for scientific discussions in common areas at the CERN buildings. In addition to developing the Web, Berners-Lee also developed a program that allowed easy access to the software from a personal computer. This program, called a “browser,” was a further key ingredient in making the Internet available to the masses (Berners-Lee 1999). Berners-Lee’s browser saw only limited use; it was soon replaced by a more sophisticated browser called “Mosaic,” developed in 1993 at the University of Illinois in the United States. Two years later the principal developers of Mosaic left Illinois and moved to Silicon Valley in California, where they founded a company called Netscape. Their browser, called “Navigator,” was offered free to individuals to download; commercial users had to pay. Netscape’s almost instant success set off the Internet “bubble,” in which any stock remotely connected to the Web traded at absurdly high prices.
Mosaic itself faded away, but Microsoft purchased rights to it, and it became the basis for Microsoft’s own browser, Internet Explorer, which today is the most popular means of access to the Web and to the Internet in general (Clark 1999).


The history of computing began in a slow, orderly fashion, and then careened out of control with the advent of networking, browsers, and now portable devices. Any narrative that attempts to chart its recent trajectory is doomed to failure. The driving force for this is Moore’s Law: an observation made by Gordon Moore, one of the founders of Intel, that silicon chip memory doubles in capacity about every 18 months (Moore 1965). It has been doing this since the 1960s, and despite regular predictions that it will soon come to an end, it seems to be still in force. The capacities of mass storage, especially magnetic disks, and the bandwidth of telecommunications cables and other channels have been increasing at exponential rates as well. This puts engineers on a treadmill from which there is no escape: when asked to design a consumer or commercial product, they design it not with the capabilities of existing chips in mind, but with what they anticipate will be the chip power at the time the product is brought to market. That in turn forces the chip makers to come up with a chip that meets this expectation. One can always find predictions in the popular and trade press that this treadmill must stop some day, if only because the limits of quantum physics will eventually make it impossible to design chips of ever-greater density. But in spite of these regular predictions, Moore’s Law has not come to an end. And as long as it holds, it is impossible to predict a “trajectory” for computing for even the next year. But it does make this era one of the most exciting to be living in, as long as one can cope with the rapidity of technological change.
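The arithmetic behind that unpredictability is worth making explicit. Taking the essay’s statement of Moore’s Law at face value (capacity doubling about every 18 months), a few lines of Python show how quickly the numbers run away; the starting capacity below is a hypothetical figure chosen for illustration, not a historical one.

```python
# Moore's Law as stated in the text: capacity doubles about every 18 months.
# start_bits is a hypothetical starting capacity, for illustration only.

def capacity(years_elapsed, start_bits=1024, doubling_months=18):
    """Chip capacity after a given number of years of steady doubling."""
    return start_bits * 2 ** (years_elapsed * 12 / doubling_months)

# Thirty years at one doubling per 18 months is 20 doublings:
# capacity grows about a million-fold (2**20 = 1,048,576).
assert capacity(30) / capacity(0) == 2 ** 20
```

A million-fold change within a single working lifetime is why any fixed “trajectory” drawn from today’s hardware is obsolete almost as soon as it is written down.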


Abbate, J. Inventing the Internet. Cambridge, Massachusetts: MIT Press, 1999.

Aspray, W., ed. Computing Before Computers. Ames, Iowa: Iowa State University Press, 1990.

—, and P. E. Ceruzzi, eds. The Internet and American Business. Cambridge, Massachusetts: MIT Press, 2008.

Berners-Lee, T. and M. Fischetti. Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web by its Inventor. San Francisco: Harper, 1999.

Bird, P. LEO: The First Business Computer. Berkshire, United Kingdom: Hasler Publishing, 1994.

Burks, A. R. and A. W. Burks. The First Electronic Computer: The Atanasoff Story. Ann Arbor, Michigan: University of Michigan Press, 1988.

Ceruzzi, P. E. Reckoners: The Prehistory of the Digital Computer, From Relays to the Stored Program Concept, 1935-1945. Westport, Connecticut: Greenwood Press, 1983.

—, A History of Modern Computing. Cambridge, Massachusetts: MIT Press, 1998.

Clark, J. and O. Edwards. Netscape Time: The Making of the Billion-Dollar Start-Up that Took on Microsoft. New York: St. Martin’s Press, 1999.

Eames, Ch. and R., Offices of. A Computer Perspective: Background to the Computer Age. Cambridge, Massachusetts: Harvard University Press, 1990.

Eckert, W. J. Punched Card Methods in Scientific Calculation. New York: IBM Corporation, 1940.

Grosch, H. R. J. Computer: Bit Slices from a Life. Novato, California: Third Millennium Books, 1991.

Kidwell, P. A., and P. E. Ceruzzi. Landmarks in Digital Computing: A Smithsonian Pictorial History. Washington, D. C.: Smithsonian Institution Press, 1994.

Merzbach, U. Georg Scheutz and the First Printing Calculator. Washington, D. C.: Smithsonian Institution Press, 1977.

Moore, G. E. “Cramming More Components onto Integrated Circuits.” Electronics, April 19, 1965, 114-117.

Randell, B., ed. The Origins of Digital Computers: Selected Papers. Berlin, Heidelberg and New York: Springer-Verlag, 1975.

Redmond, K. C. and Th. M. Smith. Project Whirlwind: The History of a Pioneer Computer. Bedford, Massachusetts: Digital Press, 1980.

Rose, F. West of Eden: The End of Innocence at Apple Computer. New York: Penguin Books, 1989.

Stern, N. From ENIAC to UNIVAC: An Appraisal of the Eckert-Mauchly Computers. Bedford, Massachusetts: Digital Press, 1981.

Williams, S. Free as in Freedom: Richard Stallman’s Crusade for Free Software. Sebastopol, California: O’Reilly, 2002.
