
The Soviet Union’s successful launch of two Sputnik satellites in the fall of 1957 came as a shock to many Americans. Although the US intelligence community was not surprised, ordinary Americans were, and the two launches demonstrated without any doubt that the Soviet Union had a lead over the US not only in satellites but in booster rockets, which could deliver weapons as well. Among the responses to Sputnik was the founding of two agencies, one an arm of the US Defense Department, the other a civilian agency. The first was the (Defense) “Advanced Research Projects Agency,” or “ARPA,” more recently known as “DARPA.” ARPA’s mission was plain: support long-term research that would make it unlikely that the US would ever again be caught off guard as it was when the Sputniks were launched. One of ARPA’s research areas was missiles and space exploration; by the end of 1958 most of that work was transferred to another agency, under civilian control: the National Aeronautics and Space Administration (NASA). Both agencies were founded in 1958 (Norberg and O’Neill 1996).

In the fifty years since their founding, one can list a remarkable number of achievements by each, but chief among them are two. Beginning in the mid-1960s, ARPA designed and built a network of computers, known as ARPANET, which was the technical inspiration for today’s Internet. And NASA, responding to a challenge by President John F. Kennedy in 1961, successfully landed a dozen astronauts on the Moon and returned them safely to Earth between 1969 and 1972.

In the mid-1990s, the Internet moved rapidly from a network known only to computer scientists or other specialists, to something that was used by ordinary citizens across the industrialized world. In the US, the non-profit Public Broadcasting Service produced a multi-part television program to document the meteoric rise of this phenomenon. It was given the whimsical title “Nerds 2.0.1: A Brief History of the Internet” (Segaller 1998). The title suggested that the Internet was a creation of “nerds”: mostly young men, few of them over thirty years old, whose obsessive tinkering with computers led to this world-changing social phenomenon. In nearly every episode of the television program, the narrator noted the contrast between the accomplishments of the two agencies founded at the same time: the Internet as a descendant of ARPA’s work, the manned landings on the Moon the result of NASA’s.

The body of the program elaborated further on this theme. The program—correctly—noted that the Internet descended from the ARPANET, a computer network designed for, and sponsored by, the US military. The show went a step further: it argued that the Moon landings were a one-time stunt, with little or no long-term impact on society, while the Internet was a world-changing technology that affected, and continues to affect, the lives of ordinary people around the world.

A half-century after the founding of those two agencies, we can revisit the relative achievements in computing and space exploration, and ask about the relationship those two technologies have had with each other. In both aerospace and computing, there has been tremendous progress, but the future did not turn out at all the way people thought it would.

In the late 1960s, many influential computer scientists predicted that computers would attain “Artificial Intelligence” (AI) and become our personal servants, perhaps even companions (McCorduck 1979). Science fiction writers embraced this theme and portrayed AI-enabled computers either as beneficial servants, like the robots in the Star Wars movie series, or as menaces, like the malevolent computer “HAL” in the movie 2001: A Space Odyssey. In spite of this recurring theme, that future did not arrive: Artificial Intelligence remains an elusive goal. However, outside the narrow confines of the AI community of computer scientists, this “failure” does not bother anyone. The reason is simple: the advent of the personal computer, the Internet, the wireless telephone, and other advances have brought computing technology to the world at levels that surpass what most had envisioned at the time of the Moon landings. We cannot converse with these machines as we would with another person, but they exhibit a surprising amount of what one may call “intelligence,” more from their brute-force application of processing power and memory than from any inherent design as artificial substitutes for the human brain.

In the realm of space exploration, the Apollo missions to the Moon generated predictions that also failed to come to pass: permanent outposts on the Moon, tourist hotels in Earth orbit, manned missions to Mars. None of these have happened yet, but advances in space technology have been remarkable. The Earth is now encircled by communications and weather satellites that are integrated into our daily lives. The Global Positioning System (GPS), and the planned European and Asian counterparts to it, provide precise timing and location services at low cost to the world. Robotic space probes have begun an exploration of Mars and the outer planets that rivals the voyages of any previous age of exploration. Space telescopes operating in the visible and other wavelengths have ushered in a new era of science that is as exciting as any in history (Dick and Launius 2007).

In the realm of computing, the advances in sheer memory capacity and processing power, plus networking, have more than made up for any frustration over the failure of computers to acquire human-like intelligence. In the realm of space exploration, the advances described above have not erased the frustration at not achieving a significant human presence off our planet. (In the related realm of aircraft that fly within the Earth’s atmosphere, recent decades have likewise seen frustrations. Aircraft broke through the sound barrier in the late 1940s, but outside of a few specialized military systems, most aircraft today fly below the speed of sound. Commercial jetliners fly at about the same speed, and at about the same altitude, as the first commercial jets introduced into service in the 1950s. The supersonic Concorde, though a technical marvel, was a commercial failure and was withdrawn from service.)

Hence the thesis of that television program: that the little-noticed computer network from ARPA overshadows the more visible aeronautics and space achievements of NASA. Many viewers apparently agreed, regardless of whatever counterarguments NASA or other space enthusiasts raised against it.

For the past sixty years, computing and aerospace have been deeply interconnected, and it is hardly possible to treat the history of each separately. The invention of the electronic digital computer, which occurred in several places between about 1940 and 1950, was often connected to the solution of problems in the sciences of astronomy and aerodynamics, or in support of the technologies of aircraft design and production, air traffic control, anti-aircraft weapons, and later guided-missile development. One of the inspirations for the development of ARPANET was the need to adapt communications networks to the crisis of control brought about by the development of ballistic missiles and jet-powered bombers. It was not simply a matter of designing a network that could survive a nuclear attack, as many popular histories assert; it was also the need for a communications system that could be flexible and robust, in keeping with the new military environment of aerospace after World War II (Abbate 1999).

After 1945, the US aerospace community had the further attribute of commanding large sums of money from the military arm of its government, as the US waged a Cold War with the Soviet Union. That money pushed the development of digital computing along much faster in the US than in England, the home of the first code-breaking computers, the first stored-program computers, and the first commercial computer. Some of that money was wasted, but US military support, mainly although not exclusively in aid of aerospace, was a powerful driver of the technology.

By its nature, a digital computer is a general-purpose device. If one can write a suitable program for it—admittedly a significant condition—then one can use a computer to serve a variety of ends. This quality, first described in theoretical terms by the English mathematician Alan Turing in the 1930s, set the computer apart from other machines, which are typically designed and optimized for one, and only one, function. Thus aerospace was but one of many places where computers found applications. The decade of the 1950s saw a steady increase in the power and memory capacity of mainframe computers, coupled with the development of general-purpose software such as the programming language FORTRAN, and of special-purpose software for computer-aided design/computer-aided manufacturing (CAD/CAM), stress analysis, or fluid dynamics.

Unlike computer applications in, say, banking or finance, aerospace applications faced an additional constraint. Until about 1960, computers were large and fragile, and they consumed large amounts of power. That restricted their aerospace applications to the ground—to airline reservations, wind-tunnel analysis, CAD/CAM, and the like. For aerospace, the computer’s potential to become a universal machine, as implied by Turing’s thesis, was thwarted by the hard reality of the need to adapt to the rigors of air and space flight. The aerospace and defense community, which in the 1950s in the US had vast financial resources available to it, was therefore in a position to shape the direction of computing in its most formative years. In turn, as computing addressed issues of reliability, size, weight, and ruggedness, it influenced aerospace as well during a decade of rapid change in flight technology (Ceruzzi 1989).

The transistor, invented in the late 1940s, was the first technological advance to address the issues of reliability, size, and weight. It took a long period of development, however, before the silicon transistor became reliable enough to allow computers to become small, rugged, and less power-hungry. Transistorized computers began to appear in missile-guidance systems around 1960. In 1959 two engineers, Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor, went a step further and developed circuits that placed several transistors and other components on a single chip of material (at first germanium, later silicon). The integrated circuit, or silicon chip, was born. Neither Noyce nor Kilby was working on an aerospace application at the time, but aerospace needs provided the context for the chip’s invention. In the dozen years between the invention of the transistor and the silicon chip, the US Air Force mounted a campaign to improve the reliability of electronic circuits in general. The Air Force was at the time developing ballistic missiles: million-dollar weapons that would sometimes explode on the launch pad because of the failure of an electronic component that may have cost less than one dollar. The electronics industry of the 1950s based its economic models on a consumer market, where low manufacturing costs, not high quality, were the way to achieve profits. Consumers at the time simply accepted the occasional failure of components, much as today they accept personal computer software that occasionally “crashes” (Ceruzzi 1998, 177-206).

For aerospace applications, this model had to be abandoned; computer crashes were not metaphorical but real. The Air Force’s “High-Reliability” program of the late 1950s accomplished that goal. Manufacturers developed statistical quality-control techniques; every step in a manufacturing process was rigorously documented. Devices were assembled in “clean rooms” (invented at a US weapons laboratory in New Mexico) more sterile than the finest hospital operating room, where workers wore suits that prevented hair or skin flakes from contaminating the assemblies, and filters screened out the tiniest particles of dust. Also during the 1950s, chemists developed ways of producing ultra-pure crystalline silicon, into which they could introduce very small and precise quantities of other elements to yield a material with the desired electronic properties (a process called “doping”). Much of this activity took place in what was once an agricultural valley south of San Francisco, soon dubbed “Silicon Valley” by a local journalist. The Fairchild Semiconductor Company, where Robert Noyce worked, was at the center of this creative activity. There, in addition to developing the silicon-handling techniques mentioned above, engineers also developed a method of manufacturing transistors by photographic etching. All these advances took place before the integrated circuit was invented, but without them, what followed could not have happened.

The integrated circuit placed more than one device on a piece of material. At first the number of circuits on a chip was small, about five or six. But that number began to double, at first every year, then about every eighteen months, and that doubling rate has remained in force ever since. It was christened “Moore’s Law,” after Gordon Moore, a colleague of Robert Noyce’s at Fairchild who was responsible for laying much of the material foundation for the chip’s advances (Moore 1965). That law—really an empirical observation—has driven the computer industry ever since, and with it the symbiotic relationship with aerospace. In this context, it is not surprising that the first contract for large quantities of chips was for the US Air Force’s Minuteman ballistic missile program, for a model of that missile that first flew in 1964. Following closely on the Minuteman contract was a contract for the computer that guided Apollo astronauts to the Moon and back, in a series of crewed missions that began in 1968 (Ceruzzi 1998, 182). By the time of the Apollo missions, Moore’s Law was beginning to have a significant impact on aerospace engineering and elsewhere. The last Apollo mission, an Earth-orbit rendezvous with a Soviet Soyuz capsule, flew in 1975. Onboard was a pocket calculator made by the Silicon Valley firm Hewlett-Packard. That hand-held calculator had more computing power than the onboard Apollo Guidance Computer, designed a decade earlier when the chip was new. One could find numerous examples of similar effects.
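
Stated as arithmetic, the observation says that the component count N after t years is roughly N = N0 * 2^(t/T), where T is the doubling period. The short Python sketch below makes the compounding visible; the 1965 baseline of 64 components and the fixed eighteen-month doubling period are illustrative assumptions, not figures taken from Moore’s paper:

```python
# A back-of-the-envelope sketch of Moore's Law (an empirical observation,
# not a law of physics). The 1965 baseline of 64 components and the fixed
# 18-month doubling period are illustrative assumptions.

def components_per_chip(year, base_year=1965, base_count=64,
                        doubling_years=1.5):
    """Estimate components per chip under a fixed doubling period."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

for year in (1965, 1975, 1985, 1995, 2005):
    print(year, f"{components_per_chip(year):,.0f}")
```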

The spectacular advances in robotic deep-space missions, and the other accomplishments mentioned above, are largely a result of the effect of Moore’s Law on spacecraft design—especially spacecraft that do not carry humans (who, for better or worse, have the same physical dimensions and the same need for food, water, and oxygen today as they had in 1959, when the silicon chip was invented). The direct comparison of the ARPANET with Project Apollo misses the nuances of this story. One of the ironies of history is that advances in space exploration have had an effect on aircraft design as well. The Apollo Lunar Module—the gangly craft that took two astronauts the final 100 kilometers from lunar orbit to the Moon’s surface—had to have computer control, as no human being could manage the delicacy of a lunar landing in the absence of an atmosphere, and ground controllers in Houston were too far away to be of help (Mindell 2008). At the end of the Apollo program, Apollo guidance computers were removed from spacecraft and installed in an experimental NASA aircraft, to see if aircraft could benefit from this technology as well. It was no coincidence that NASA chose as the manager of this program none other than Neil Armstrong, the first person to walk on the Moon in 1969, and thus one of the first whose life depended intimately on the correct operation of a digital computer (Tomayko 2000).

The NASA tests were successful, but American aircraft companies were slow to adopt the new technology. The European consortium Airbus, however, embraced it, beginning in the late 1980s with the Airbus A-320. Aircraft do not require “fly-by-wire” controls as the Lunar Module did, but by using a computer the A-320 offered better passenger comfort and better fuel economy than competing aircraft from the American suppliers Boeing and McDonnell-Douglas. Fly-by-wire, along with “glass cockpits” (instrument panels that use computer displays), is now commonplace among all new commercial, military, and general aviation aircraft. The Space Shuttle, too, uses fly-by-wire controls, as without them it would be impractical for a human pilot to fly it to an unpowered, precise landing on a runway after entering the atmosphere at over 27,000 kilometers per hour.

Another direct influence of the Air Force and NASA on computing was the development of Computer-Aided Design (CAD). Air Force funding supported an effort at the Massachusetts Institute of Technology (MIT) that led to the control of machine tools by sequences of digital commands, coded as holes punched into a strip of plastic tape. The results of this work transformed machine tooling, not just for aerospace but for metalworking in general (Noble 1986). At the same time, NASA engineers, working at the various centers, had been using computers to assist in the stress analysis of rockets and spacecraft. Launch vehicles had to be strong enough to hold the fuel and oxygen, as well as support the structure of the upper stages, while enduring the vibration and stress of launch, and they had to be lightweight. Aircraft engineers had grappled with this problem of stress analysis for decades; in a typical aircraft company, for every aerodynamicist on the payroll there might have been ten engineers involved with stress analysis. Their job was to ensure that the craft was strong enough to survive a flight, yet light enough to get off the ground. NASA funded computer research in this area, and among the results was a generalized stress analysis program called “NASTRAN”—a shortening of “NASA Structural Analysis”—based on the already-popular FORTRAN programming language. It has since become a standard throughout the aerospace industry.

One obvious issue that arose in the transfer of fly-by-wire from Project Apollo to commercial aircraft was the issue of reliability, already mentioned. The invention of the silicon chip, combined with the Air Force’s High-Reliability initiatives, went a long way toward making computers reliable for aerospace use, but reliability was still an issue. If the Apollo computers failed in flight, the astronauts could be guided home by an army of ground controllers in Houston. No Apollo computer ever failed, but during the Apollo 13 mission in 1970, the spacecraft lost most of its electrical power, and the crew was indeed saved by ground controllers (Kranz 2000). During the first Moon landing—Apollo 11 in 1969—the crew encountered alarms from an overloaded guidance computer as they descended to the surface; ground controllers judged the alarms acceptable and advised the crew to go ahead with the landing. Having a battery of ground controllers on call for every commercial flight is obviously not practical; likewise the Space Shuttle, intended to provide routine access to space, had to be designed differently. For the A-320, Airbus devised a system of three identical computers, which “vote” on every action. An in-flight failure of one computer is outvoted by the other two, and the craft can land safely. The Shuttle has five: the failure of one Shuttle computer would allow the mission to continue. The fifth computer is there in case of a software error—it is programmed by a different group of people, so there is little chance of all five computers having a common “bug” in their software (Tomayko 1987, 85–133). This type of redundancy has become the norm in aircraft design. Many spacecraft adopt it too, but in more nuanced ways, especially if the craft is not carrying a human crew.
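
The voting principle itself is simple enough to sketch in a few lines of Python. This is an illustration of majority voting only, not the actual Airbus or Shuttle logic, which votes continuously, on every control cycle:

```python
# A minimal sketch of majority voting among redundant flight computers.
# It illustrates the voting principle only; real flight systems vote on
# every control cycle, in real time.

from collections import Counter

def vote(outputs):
    """Return the strict-majority output among redundant channels."""
    winner, count = Counter(outputs).most_common(1)[0]
    if count <= len(outputs) // 2:
        # In a real system this would trigger a fallback mode.
        raise RuntimeError("no majority; switch to backup mode")
    return winner

# Three channels compute the same elevator command; one has failed.
print(vote([12.5, 12.5, 97.0]))   # 12.5 -- the faulty channel is outvoted
```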

Whereas the onboard computing capabilities of commercial aircraft have transformed the passenger jet, the situation on the ground has not progressed far beyond the vacuum-tube age. Commercial air traffic is very safe, and its safety depends on air traffic controllers directing traffic through virtual highways in the sky. Because the US was a pioneer in this activity, it accumulated a large investment in a technology that relies on relatively old-fashioned mainframe computers on the ground, with communications to and from the pilots via VHF radio operating in classical AM voice mode—likewise an old-fashioned technology. The advent of the Global Positioning System (GPS)—as good an example of the power of Moore’s Law as any—should allow air traffic controllers to dispense with much of this infrastructure and replace it with information sent directly to pilots from satellites. In other words, rather than have ground controllers keep track of the location and route of a plane, the pilots themselves will do that, in a way that does not compromise safety yet increases the capacity of the airways. The pilots would obtain information about their location, and the location of any potentially interfering traffic, using onboard computers that process data from the constellation of GPS or other navigation satellites, plus other satellites and a few select ground stations. That is beginning to happen, but the continental US may be the last to fully adopt it.

If there is a common theme among these stories, it is that of how best to utilize the capabilities of the human versus the capabilities of the computer, whether on the ground, in the air, or in space. That issue is never settled, as it is affected by the increasing sophistication and miniaturization of computers, which obviously imply that the craft itself can take on duties that previously required humans. But it is not that simple. Ground-based computers are getting better, too. Human beings today may have the same physical limits and needs as the Apollo astronauts, but they have a much more sophisticated knowledge of the nature of space flight and its needs.

The Needs of Aerospace Computing

At this point it is worthwhile to step back and examine some specific aspects of space flight, and how “computing,” broadly defined, is connected to it.

The Wright brothers’ patent for their 1903 airplane was for a method of control, not lift, structure, or propulsion. Spacecraft face a similar need. For spacecraft and guided missiles, control is as important as rocket propulsion. Guided missiles are controlled like airplanes, although without a human pilot. Spacecraft face a different environment, and their control needs are different. An aircraft or guided missile must operate its engines constantly, to work against atmospheric drag, while the forward motion of the wings through the air generates lift to counter the force of gravity. A rocket, by contrast, counters the force of gravity not by lift but by the direct application of thrust. And once a spacecraft enters space, there is little or no atmospheric drag; at that point, its rocket engines are shut off. Thus for many space missions, the rocket motors are active for only a fraction of the total mission time. A spacecraft still requires control, however, in ways that depend on the phase of its mission. During the initial phase of powered flight, which may last only a few minutes or less, the critical issue is to keep the rocket’s thrust vector aligned through the launch vehicle’s center of gravity. The configuration of most rockets, with their engines at the bottom and the fuel tanks and payload above, is unstable. The vehicle “wants” to topple over and will do so in an instant unless its thrust is actively and constantly guided as it ascends. Once that first-order stability is achieved, the vehicle’s guidance system may direct the thrust to deviate from that alignment—at first slightly, then more and more as it gains velocity. That will cause the vehicle to tilt, eventually to an optimum angle where the rocket’s thrust not only counters gravity but also propels it horizontally: to achieve orbit, to return to Earth some distance away, or to escape the Earth entirely.
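
What “actively and constantly guided” means in practice can be suggested by a toy control loop. The sketch below uses a proportional-derivative (PD) rule to gimbal the engine and null out any tilt; every gain and vehicle parameter here is invented for the example, and real launch-vehicle autopilots are far more elaborate:

```python
# A toy illustration of active thrust-vector stabilization: a
# proportional-derivative (PD) loop gimbals the engine to null out tilt.
# All gains and vehicle constants are invented for the example.

import math

DT = 0.01          # control-loop time step, seconds
KP, KD = 8.0, 3.0  # PD gains (illustrative values)

tilt, tilt_rate = 0.02, 0.0   # small initial disturbance, radians

for _ in range(500):          # simulate 5 seconds of flight
    gimbal = -(KP * tilt + KD * tilt_rate)   # commanded engine angle
    gimbal = max(-0.1, min(0.1, gimbal))     # respect gimbal travel limits
    # Inverted-pendulum-like instability: tilt feeds back on itself.
    tilt_accel = 1.5 * math.sin(tilt) + 4.0 * gimbal
    tilt_rate += tilt_accel * DT
    tilt += tilt_rate * DT

print(f"tilt after 5 s: {tilt:+.6f} rad")    # near zero: disturbance damped
```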

Controlling a rocket’s thrust in this, the powered phase of a mission, we call “guidance,” although the aerospace community does not always agree on the definition of this term. Note also that this form of guidance is also required for nearly the whole trajectory of an air-breathing guided missile, which is powered through most of its flight.

Once a spacecraft reaches its desired velocity, it may coast to its destination on a “ballistic” trajectory, so-called because its path resembles that of a thrown rock. This assumes that the desired velocity was correct at the moment the engines were cut off. If not, either the main engines or other auxiliary engines are used to change the craft’s trajectory. This operation is typically called “navigation,” although once again it is not strictly defined. Again in contrast to ships at sea or aircraft on long-distance missions, a spacecraft may fire its onboard rockets only occasionally, not continuously (ion and electric propulsion systems are an exception to this rule). But the process is the same: determine whether one is on a desired course, and if not, fire the onboard engines to change the velocity as needed.
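
The logic of such a correction can be sketched schematically. In the toy function below, the velocities, the spacecraft mass, and the engine thrust are invented numbers; the point is only the comparison of actual with desired velocity and the sizing of a corrective burn under an idealized constant-thrust, constant-mass assumption:

```python
# A schematic mid-course correction: compare actual and desired velocity,
# then size the burn that removes the difference. The constant-mass,
# constant-thrust assumption and all numbers are purely illustrative.

def correction_burn(v_desired, v_actual, mass_kg, thrust_n):
    """Return (delta-v vector in m/s, burn time in s) for an ideal burn."""
    dv = [d - a for d, a in zip(v_desired, v_actual)]
    dv_mag = sum(c * c for c in dv) ** 0.5
    burn_time = mass_kg * dv_mag / thrust_n   # t = m * dv / F
    return dv, burn_time

dv, t = correction_burn([7670.0, 0.0, 0.0],     # desired orbital velocity
                        [7665.2, 1.4, -0.6],    # velocity from tracking
                        mass_kg=1200.0, thrust_n=400.0)
print(dv, f"burn for {t:.1f} s")
```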

Finally, a craft operating in the vacuum of space feels no atmospheric forces. Once the rocket motors have shut down, it is free to orient itself in any direction and will fly the same no matter how it is pointed. In practice a mission requires that a craft orient itself in a specific way: to point its solar panels at the Sun, to point a camera at a spot on Earth, to aim an antenna, and so on. The process of orienting a spacecraft along its x, y, and z axes in space we will call the “control” function. Spacecraft achieve control by using rocket motors with very small thrust, magnetic coils, momentum wheels, gravity-gradient booms, or other more exotic devices. The term “control” also encompasses operational aspects of a space mission, such as turning on a camera, activating an instrument, or preparing a vehicle for capture into orbit around another planet. These actions can be done automatically, by crew members onboard, or from “mission control” stations on the ground.

The Wright brothers’ aircraft was unstable by design and required constant attention from its pilot. Moving the horizontal stabilizer to the rear of an airplane provided greater stability, just as tail feathers stabilize an arrow. But controlled aeronautical flight was still difficult. To assist a pilot in maintaining control, in the early twentieth century the American inventor Elmer Sperry devised a system of gyroscopes, which augmented the inherent stability of the airplane and reduced the workload on the pilot. This combination of aft-placed aerodynamic control surfaces, plus a self-correcting system based on gyroscopes, was carried over into rocket research and development. Elmer Sperry’s original insight, much extended, is still found at the heart of modern rocket guidance systems. Of those extensions, one was especially important for rocket guidance and came from the German V-2 program: the design of a “pendulous” gyro to measure the time integral of acceleration, which (by Newton’s calculus) indicates the craft’s velocity (MacKenzie 1990).

During the powered phase of flight, guidance must be performed at speeds commensurate with the action of the rocket. That precludes any off-line work done by humans stationed at the launch point, other than simple decisions such as to destroy a rocket that is going off course. Control functions may also be performed by onboard systems, but if there is no urgency to orient a craft, that can be done by commands from the ground. Navigation often can proceed at a slower pace, with time to process radar or telemetry data through powerful mainframe computers, which can then radio up commands as needed. Thus, while guidance is typically performed by onboard gyroscopes and accelerometers operating with no external communication in either direction, navigation and control may combine signals from onboard systems with radio signals to and from ground stations. Some early ballistic missiles were also guided by radio from the ground, although at real-time speeds with no direct human input at launch. This form of radio or beam-riding guidance has fallen from favor.

Translating the signals from an integrating gyro or accelerometer required what we now call “computing.” Early systems used electro-mechanical assemblies of gears and relays. These were analog computers, using a design that was a mirror (or analog) of the flight conditions it was to control. The V-2, for example, used a pendulous gyro to compute the integral of acceleration, thus giving the velocity; at a certain velocity the motor was shut off to hit a predetermined target. The early mechanical or pneumatic devices were later replaced by electronic systems using vacuum tubes. Vacuum tubes, though fast-acting, remained inherently fragile and unreliable, and were only used in a few instances.
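
What the pendulous gyro computed mechanically can be caricatured in a few lines of digital code. In this sketch the acceleration profile, the sample rate, and the cutoff velocity are all invented; the point is the integrate-until-cutoff logic that the V-2 implemented in hardware:

```python
# A digital caricature of the V-2's integrating gyro: accumulate measured
# acceleration over time to obtain velocity, and cut the engine when a
# preset velocity is reached. The data and cutoff value are invented.

def integrate_to_cutoff(accel_samples, dt, cutoff_velocity):
    """Integrate acceleration (m/s^2) sampled every dt seconds and return
    (cutoff time in s, velocity) when cutoff_velocity is first reached."""
    velocity = 0.0
    for i, a in enumerate(accel_samples):
        velocity += a * dt                 # v(t) is the integral of a(t)
        if velocity >= cutoff_velocity:
            return (i + 1) * dt, velocity  # signal engine shutdown here
    return None, velocity                  # cutoff never reached

# Sixty seconds of a steadily rising acceleration profile, sampled at 10 Hz.
samples = [9.0 + 0.05 * i for i in range(600)]
t_cut, v = integrate_to_cutoff(samples, dt=0.1, cutoff_velocity=1000.0)
print(f"engine cutoff at t = {t_cut:.1f} s, v = {v:.1f} m/s")
```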

Electronic systems became practical with the advent of solid-state devices, beginning with the invention of the transistor and then the Integrated Circuit, as described above. These circuits were not only small and rugged, they also made it possible to design digital, rather than analog, controls and thus take advantage of the digital computer’s greater flexibility. Digital technology has completely taken over not only rocketry and space vehicles but also all new guided missiles, as well as commercial and military aircraft. Although properly heralded as a “revolution,” the change was slow to happen, with digital controls first appearing only in the mid-1960s with systems like the Gemini onboard computer.

Long before that, however, the digital computer had an impact on flight from the ground. The V-2 operated too rapidly to be controlled—or tracked and intercepted—by a human being during flight. New jet aircraft were not quite as fast but still challenged the ability of humans to control them. Beginning around 1950, it was recognized that the electronic digital computer, located on the ground where its size and weight were of less concern, could address this problem. Project Whirlwind at MIT successfully tracked and directed an Air Force plane to intercept another aircraft over Cape Cod in April 1951. Whirlwind led to SAGE, an acronym for “Semi-Automatic Ground Environment.” SAGE was a massive system of radars, computers, and communications links that warned the US of any flights of Soviet bombers over the North Pole. Critics have charged that SAGE was obsolete by the time it was completed, as the ballistic missile replaced the bomber as a method of delivering a weapon. SAGE could not defend against ballistic missiles, but the system was the inspiration for many ground-control systems, including those used today by the US Federal Aviation Administration to manage commercial air traffic (Ceruzzi 1989).

By the 1960s, space operations were tightly bound to the ground. The initial design of Project Mercury, for example, had the astronaut simply along for the ride, with ground stations scattered across the globe doing all the mission control. The first Project Mercury capsules did not even have a window. From that beginning, manned spacecraft gradually acquired more onboard control and autonomy, but no crewed spacecraft to this day is ever allowed to operate without inputs from mission controllers on Earth. The rescue of the Apollo 13 crew in 1970 drove home the importance of ground control. Today, most space operations, from the piloted Shuttle and Space Station, to commercial communications satellites, to unmanned military and scientific spacecraft, require more ground-control facilities than commercial or military aviation.

SAGE was designed to look for enemy aircraft. A decade later the US began the development of BMEWS (Ballistic Missile Early Warning System), to provide warning of ballistic missiles. Air defenses for the continent were consolidated in a facility called NORAD, at Colorado Springs, Colorado, where computers and human beings continuously monitor the skies and the near-space environment. Defense against ballistic missiles continues to be an elusive goal. At present these efforts are subsumed under the term National Missile Defense, which has produced some prototype hardware. A few systems designed to intercept short-range missiles have been deployed at sites around the world. Computers play a crucial role in these efforts: to detect the launch of a missile, to track its trajectory, to separate legitimate targets from decoys, and to guide an interceptor. These activities require enormous computational power and very high computation speeds. Missile defense pushes the state of the art of computing in ways hardly recognized by consumers, however impressive their portable phones, laptops, and portable media players may be.

Similarly elaborate and expensive control systems were built for reconnaissance and signals-intelligence satellites. Although the details of these systems are classified, we can say that many US military systems tend to be controlled from ground facilities located near Colorado Springs, Colorado; human spaceflight from Houston, Texas; and commercial systems from various other places in the country. All may be legitimately called descendants of Project Whirlwind.

One final point needs to be made regarding the nature of ground versus onboard spacecraft control. SAGE stood for “Semi-Automatic Ground Environment.” The prefix “Semi” was inserted to make it clear that human beings were very much “in the loop”—no computer system would automatically start a war without human intervention. An inertial guidance system like that used on the Minuteman is completely automatic once launched, but prior to launch there are multiple decision points for human intervention. Likewise in the human space program, the initial plans to have spacecraft totally controlled from the ground were not adopted. Project Mercury’s initial designs were modified, first under pressure from the astronauts, and further after the initial flights showed that it was foolish to have the astronaut play only a passive role. The desire for human input is also seen in the controls for the Space Shuttle, which cannot be operated without a human pilot.

The Future

It should be clear from the above discussion that a simple comparison of the advances in computing and the advances in space travel since 1958 is not possible. Nevertheless, the producers of the television show “Nerds 2.0.1” made a valid point. The Internet has enjoyed a rapid diffusion into society that aerospace has not been able to match. A factor not mentioned in that show, but which may be relevant, is an observation made by networking pioneer Robert Metcalfe. According to Metcalfe (who promoted it as “Metcalfe’s Law,” a counterpart to Moore’s Law), the value of a network increases as the square of the number of people connected to it. Thus the Internet, which adds new connections every day, increases in value much faster than the cost of making each of those new connections. Space exploration has no corresponding law, although if deep-space probes discover evidence of life on other planets, that equation will be rewritten.
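
The contrast between quadratic value and linear cost is easy to make concrete. In the sketch below, the number of possible pairwise connections stands in for the “value” of the network, and a flat per-node cost stands in for the cost of each new connection; both are deliberate oversimplifications of Metcalfe’s informal law:

```python
# Metcalfe's Law as a toy calculation: value grows roughly as n^2 (the
# number of possible pairwise connections), while cost grows roughly
# linearly with the number of nodes. All constants are illustrative.

def network_value(n):
    return n * (n - 1) // 2        # distinct pairs of connected users

def network_cost(n, cost_per_node=1.0):
    return n * cost_per_node

for n in (10, 100, 1_000, 10_000):
    print(f"n={n:>6}   value ~ {network_value(n):>12,}   "
          f"cost ~ {network_cost(n):>10,.0f}")
```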

One facet of history that is often forgotten when writing about the Internet is that the aerospace community was among the world’s pioneers in computer networking, albeit to different ends. The SAGE system was the world’s first large-scale computer network, for example. And the first use of a computer network for private, as opposed to military or government, purposes was the airline reservations system “SABRE,” developed by IBM in the early 1960s for American Airlines. These networks were significant but were not the technical antecedents of the Internet. In fact, the ARPANET was developed in partial response to the deficiencies of SAGE. In the latter system, the entire network could have been rendered inoperative if a central control node were destroyed; with the Internet that cannot happen, as it has no central control point, by design. The Internet’s ability to link disparate computer systems by a set of common protocols likewise sets it apart from aerospace networks, which often are unable to communicate with one another. An embarrassing example of this happened recently during the development by Airbus of its superjumbo transport, the Airbus A-380. Airbus made heavy use of a CAD program called “CATIA,” developed by the French company Dassault Systemes. CATIA allowed engineers from different laboratories and plants to work to a common set of virtual “drawings,” as if they were in the same building. For the A-380, one group of designers was using a different version of CATIA from the others, and when the parts were brought together for final assembly at the Airbus plant in Toulouse, France, they did not fit. Boeing has likewise experienced problems integrating assemblies from different places for its new jet, the 787 Dreamliner. In fairness to Airbus and Boeing, the Internet, as it is presently configured, would be unable to handle the complexities of designing a modern airplane, in spite of its ability to scale up to large numbers of nodes from all over the world.

Was NASA’s Project Apollo a technological dead end, however impressive an engineering accomplishment it was? And was the network developed by NASA’s companion agency, ARPA, the true defining technology of the modern age? Neither question admits of an easy answer. The two technologies have grown in a symbiotic relationship with each other, and they will continue to do so in the future. The notion of the computer as an artificially intelligent agent in service to humanity has given way to a notion of the computer as a device to “augment human intellect,” in the words of computer pioneer Douglas Engelbart. Engelbart is best known for his invention of the mouse as a computer pointing device, but he was also one of the first to recognize this place for computers among us. Before inventing the mouse, Engelbart worked at the NASA Ames Research Center in Mountain View, California, and later for a commercial computer networking company owned by the aerospace firm McDonnell-Douglas. He was no stranger, then, to the real-world limitations, and potential, of networked computing and its aerospace applications.

The limitations of the human body will remain a drag on progress in the human exploration of deep space. Given the laws of physics as we currently know them, it is difficult to envision human travel beyond the orbit of Mars with even the most optimistic extrapolations of current chemical rocket propulsion. One intriguing way out of this dilemma is suggested by Moore’s Law. If current trends continue, computers will contain a number of circuits equivalent to the number of neurons in the human brain by about the year 2030. If one assumes an equivalence, then one could envision transferring human consciousness to a computer, which could then explore the cosmos unconstrained by the human body that currently supports it. This is the argument made by inventor Ray Kurzweil, who believes such a transfer of consciousness is inevitable (Kurzweil 1999). Of course, the assumption of equivalence makes all the difference. We have already seen how the early predictions of artificially intelligent computers fell short. Having more and more circuits may not be enough to cross the threshold from “intelligence,” however defined, to “consciousness.” In this area it is best to leave such speculation to the science fiction writers. One may feel disappointed that the human exploration of space seems so constrained, but it is hard to maintain that feeling in the face of all the other exciting developments in aerospace happening all around us.

Bibliography

Abbate, Janet. Inventing the Internet. Cambridge, Massachusetts: MIT Press, 1999.

Ceruzzi, Paul. Beyond the Limits: Flight Enters the Computer Age. Cambridge, Massachusetts: MIT Press, 1989.

—, A History of Modern Computing. Cambridge, Massachusetts: MIT Press, 1998.

Dick, Steven J., and Roger Launius, eds. Societal Impact of Spaceflight. Washington, DC: NASA, 2007.

Kranz, Gene. Failure Is Not an Option. New York: Berkley Books, 2000.

Kurzweil, Ray. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York: Viking, 1999.

MacKenzie, Donald. Inventing Accuracy: A Historical Sociology of Nuclear Missile Guidance. Cambridge, Massachusetts: MIT Press, 1990.

McCorduck, Pamela. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. San Francisco, California: W. H. Freeman, 1979.

Mindell, David. Digital Apollo: Human and Machine in Spaceflight. Cambridge, Massachusetts: MIT Press, 2008.

Moore, Gordon. “Cramming More Components onto Integrated Circuits.” Electronics, April 19, 1965, 114-117.

Noble, David F. Forces of Production: A Social History of Industrial Automation. New York: Oxford University Press, 1986.

Norberg, Arthur, and Judy O’Neill. Transforming Computer Technology: Information Processing for the Pentagon, 1962-1986. Baltimore, Maryland: Johns Hopkins University Press, 1996.

Segaller, Stephen. Nerds 2.0.1: A Brief History of the Internet. New York: TV Books, 1998.

Tomayko, James E. Computers in Spaceflight: The NASA Experience. New York: Marcel Dekker, Inc., 1987.

—, Computers Take Flight: A History of NASA’s Pioneering Digital Fly-By-Wire Project. Washington, DC: NASA, 2000.
