Imagine that in late 1995 someone told you that two groups of engineers were developing a critical piece of web infrastructure—the web server software that handles secure communications and payments, serves up pages, and runs the core functions of websites. The first group was Microsoft, then the most valuable software company in the world, with a near-monopoly hold over the operating systems of personal computers; the second was a loose collection of engineers, academics, amateurs, and people employed by companies not engaged in this effort, all working in their spare time—developing the software and handing it out under a license that allowed anyone to copy it, modify it, and distribute it as they pleased. Perhaps it is hard, as of this writing in 2013, to capture just how stupid the question “Who is going to win this race?” would have sounded to a reasonable person in 1995. And yet the Apache web server, developed as Free and Open Source Software (FOSS) by the second group, has been adopted by a majority of websites over the past 18 years, through two boom-and-bust cycles. Microsoft trailed a distant second, while the third and fastest-growing web server software, Nginx, was also FOSS. FOSS development has made inroads throughout the software platform. Mozilla Firefox has successfully cut into Microsoft Internet Explorer’s browser lead; the most widely used scripting languages, like PHP, Ruby, or Python, are FOSS; and the FOSS operating system Linux dominates in infrastructure applications like server farms and high-end applications like supercomputing, has expanded to a variety of embedded devices like set-top boxes, and sits at the heart of the Android mobile phone operating system.
FOSS is a critical example because its success is technically measurable, and its adoption is a clear market signal of its superiority in many fields. But the success of FOSS is not unique. If in February 2001 someone had shown you Jimmy Wales’s new project—which consisted at the time of 900 stubs on the Web, stored on a platform that allowed anyone to write and edit but paid no one to do so, producing a product in which no one claimed exclusive proprietary rights—and claimed that within five years this product would be favorably compared to Britannica by the prestigious science magazine Nature, and within less than a decade would put Microsoft’s Encarta encyclopedia out of business, they would have been laughed out of the room. And yet she moves.
Wikipedia and FOSS have become the foundational narratives for explaining the remarkable transformation in the organization of information production that occurred in the past two decades. The basic dynamic is clear. For the first time since the Industrial Revolution, the most important inputs, into some of the most important economic sectors, of the most advanced economies in the world, are radically distributed in the population. The core capital resources necessary for these core economic production activities—computation, communications, electronic storage, and most recently sensors—have become widely available in the populations of all wealthy countries, as well as in the middle and wealthier classes of emerging economies. What prevented automobile enthusiasts from competing with General Motors was the sheer capital cost barrier of an assembly line. That constraint does not prevent Wikipedians or FOSS developers from competing with Britannica or Microsoft, respectively. What we have seen in the past 15 years is the emergence of a third modality of production, what I have called social production. That is, people have always acted for social, emotional, or ideological reasons: talking to other people, taking photographs, singing, writing, helping each other move some furniture, or mobilizing for a common cause. The networked information economy has allowed some of these activities, driven by these same diverse motivations, to move from being extremely important socially but peripheral economically to occupying a significant space at the very heart of the most advanced economies in the world, at the very heart of the cultural and information production sectors, and increasingly at the heart of what it means to be citizens in a democratic society.
The emerging technological feasibility of social production generally, and peer production—the kind of large-scale collaboration of which Wikipedia is the most prominent example—more specifically, is interacting with the high rate of change and the increasing complexity of global innovation and production systems.
As complexity and the rate of change increase, twentieth-century organizational models are becoming too slow and too rigid to sense their environment, understand their limitations, experiment with change, adapt to it, and adopt the innovations it necessitates.
Increasingly, in the business literature and practice, we see a shift toward a range of open innovation and production techniques—techniques that accept that you can never assume that the best person or resource set for any given job is one that you already employ or with whom you have a well-defined contractual relationship. Instead, we see firms and other organizations adopting a range of models that permit more fluid flows of information, talent, and projects across and among organizations, depending on the degree of uncertainty associated with their activities. Social production in the commons becomes the outer fringe of these open strategies, where experimentation under conditions of extreme uncertainty and high complexity can be done on models that require no clear appropriation model, and therefore can be carried on with very high rates of failure.
Technology is not destiny. The possibility of radically distributed production of information, knowledge, and culture is continuously competing with strong centralizing trends. Pervasive monitoring of consumer behavior and the development of behavioral advertising seek to use the same networked technologies to achieve much greater control by companies that sit on Big Data repositories of the consumption and payment patterns of consumers. As free software matures, its advantages are being recognized by firms, and its practices are adopted and subtly altered so as to moderate some of the more radical effects its emergence presented for industrial organization. Government surveillance has expanded dramatically in the past several years, and similarly presents serious opportunities for increased control, rather than increased decentralization, emerging from the adoption of ubiquitous, networked computation. It is important not to read this essay as a utopia that claims to be a prediction. Rather, it is a characterization of one possible future among several—one that is a reasonably good description of the near past, and that could, but will not necessarily, stabilize in the years to come.
Information, Networks, and Commons
Information is a very unusual economic good. If a furniture factory makes a chair and I want it, I can buy it from them. If you then want a chair as well, the factory has to buy more wood, spend more energy on cutting and shaping it, and pay a carpenter to make a second chair. But information goods are not like that. Once Tolstoy wrote War and Peace, it doesn’t matter if three people or a million want to read it. Tolstoy need not spend one more second on writing the book (although the publisher needs to buy more paper, etc.). So too with the design of the lightbulb, or a set of instructions on the best way for surgeons to wash their hands before surgery. Once someone has figured out how to do something, everyone can learn it at the cost of duplication: the cost of printing another copy of the book; the cost of following the instructions to make a lightbulb. The information or innovation itself, once produced, is, as Justice Louis Brandeis of the U.S. Supreme Court wrote a century ago, “free as the air to common use.” Now, if artists, inventors, or writers all gave their work away for free, we would have to find some other system to allow them to make a living, otherwise they would starve. The most common way we do so today is to grant them limited rights to license their insights and creations: copyrights or patents. Economists have long known and written that when these copyrights and patents are asserted, and consumers have to pay more for a book or a lightbulb than the simple cost of manufacturing the next copy, consumers will use that information less than would be most socially efficient in the short term. This is what in economics we call a public goods problem.
But we usually are willing to give up some of the efficiency in order to make sure that writers and inventors can make a living, and we try to make up for the inefficiency by also having information produced with government funding: primarily for scientific and other scholarly research and for the arts.
What the quirky nature of information means in the networked environment, however, is that if there is a group of volunteers who can get together and create something—a video, an encyclopedia, or a software program—without having to be paid directly for it, they have solved the public goods problem in a way that doesn’t require them to close it up and charge for it.
More important than the availability of information at its efficient cost for consumers is its availability for subsequent innovators or creators. Existing information is one of the most important resources used to create new information goods. Newspaper stories are made of fresh reporting on the background of prior articles; academic articles require those that preceded them. Books, movies, music are all influenced by prior works, incorporating elements, ideas, or references and always operating within the same cultural conversation. And software perhaps more than all of these is a field typified by incremental innovation.
What ubiquitous computers and networked communications did in the 1990s was reduce the cost of communications and copying to near zero. Given that the information itself, once produced, is a public good (its marginal cost is zero), and that there were now millions of people who could use their time in socially fun, meaningful, or productive ways, and who could also use massive repositories of existing materials to make their own new products, the Internet created a new urgency to recognizing the role of commons in market society.
The commons is a way of allocating access and use rights in resources that does not give anyone exclusive rights to exclude anyone else.
A city street is a commons: anyone who has a car or a bicycle can drive on the road; anyone who can walk or use a wheelchair can travel the sidewalks. No individual or company has the right to exclude anyone or charge them for access. From streets and highways to canals, waterways, major shipping lanes, and navigable rivers; from basic scientific knowledge to mathematical algorithms and basic ideas: all these have been kept as commons in modern market economies because they provide enormous freedom of action to a wide range of productive behaviors, both economic and social.
By the middle of the first decade of the twenty-first century, commons-based information, knowledge, and cultural production was flourishing. Much of it was with implied or express permission. Software developers in particular led the way with the development of Free and Open Source Software (FOSS). The major legal innovation of FOSS was that the software always came with a license that made it legal for anyone to take the software and not only use it but develop it further and release their improvements back into the commons. Effectively, FOSS developed in a world in which all software is born exclusive property, and gave developers a way to share their software with the world, to dedicate it to the commons of software developers. Ever since the late 1990s, there has been a powerful movement among academics to do the same thing; and there is a large and growing number of people who share their music, videos, photos, and online writings under a Creative Commons license, which takes the idea developed in FOSS and applies it well beyond software to all information goods that would otherwise be subject to the exclusive rights of copyright. Beyond the formal ways in which users created commons by licensing, there was a tremendous amount of sharing that happened without any formal rights. Remix culture emerged as people took materials, often but not exclusively from the formal, rights-based entertainment world, and created their own versions, which were, in turn, remixed by others. Implicit permissions, coupled with a background culture of open sharing and a rising rhetoric of openness and commons, made these practices pervasive. It is important to note here that when I refer to the rise of commons-based production, I am not including the purely consumptive uses—in particular, peer-to-peer file sharing for no reason other than consumption without payment.
While these practices have been demonized beyond their real cost, they are not themselves properly seen as part of the emergence of commons-based production.
What the adoption of commons-based practices allowed was a massive increase in the number, range, and diversity of actors engaged in production, rather than consumption, of information, knowledge, and culture. Beginning in the late nineteenth century, a series of technologies and organizational practices combined to train three generations in the habits of passive reception. Starting with the large-scale mechanical presses and automated typesetting innovations that led to the large-circulation, professionalized, advertising-supported newspapers in the late nineteenth century, through radio and the pinnacle of this culture—television—the cost of being a producer of information increased, as did the reach of those who were in a position to produce at such high costs. These developments were complemented by recorded music and film, both of which reduced the need for more widely distributed (and less hyper-qualified) musicianship, storytelling, and acting capacities. For three generations, audiences lost the capacity to make their own music, perform their own games and entertainment, or pass information and opinion locally and informally, and replaced these with an increasing dependence on a professionalized, mostly commercial model of production: the industrial information economy.
What ubiquitous networked computation has done is to reverse the technical, material conditions that led to that highly asymmetric information production structure.
But had all existing information been exclusive property, and had the newly creative people who had been passive audiences before not adopted practices of widespread, promiscuous mutual borrowing—a commons—the potential of the technology would likely have been narrower. Only those who could create from scratch would have been able to transition from consumers to producers; and much of the culture of remixing, quoting, and curating materials for one another would have been too expensive, its transactions costs too high, to flourish.
One important practice within the domain of commons-based production was the emergence of peer production: large-scale collaborative engagement by groups of individuals who come together to produce products more complex than they could have approached on their own. Wikipedia is the most widely visible and best-known example of peer production: a self-governing community of thousands of highly engaged contributors, and tens of thousands of individuals with lower but still active levels of participation. While it accounts for only a slice of the universe of social production in the networked commons, peer production is the most significant organizational innovation that has emerged from Internet-mediated social practice. Organizationally, it combines three core characteristics: (a) decentralization of conception and execution of problems and solutions, (b) harnessing diverse motivations, and (c) separation of governance and management from property and contract. First, unlike in a traditional organization, the question of what people should work on—what projects, subprojects, and intermediate steps—is not determined by an institutional hierarchy, but by self-selection and discussion among participants. Second, peer production allows many different people, with many different motivations, to collaborate on projects they share. This is particularly valuable in approaching problems that do not have a well-defined economic payoff. Such problems include those that are highly innovative but whose likelihood of commercial success is too low to fund participation; those whose social value is high but whose nature prevents them from being delivered in a format that would support commercial appropriation; and those whose sheer scope, and the diversity of human interests they seek to serve, are too great for any single company to identify and serve on a paid model.
The third aspect—the separation of management and governance from contract and property—is merely the organizational equivalent of the commons. Even within the organization or networked enterprise that is a peer-production community, the fact that the inputs and outputs are treated as commons allows the first two elements—decentralized, diversely motivated individuals—to act on the resource and project set without asking permission, because no property or contract rights need be negotiated in order to act.
Functionally, these components make peer-production practices highly adept at learning and experimentation, innovation, and adaptation in rapidly changing, persistently uncertain, and complex environments. Under high rates of technological innovation, and the high diversity of sources of uncertainty typical of early twenty-first-century global markets, the functional advantages of peer production have made it an effective organizational model in diverse domains. From free software through Wikipedia to video journalism, peer production plays a more significant role in the information production environment than predicted by standard models at the turn of the millennium.
The basic model of peer production simply focuses on minimizing transactions costs. Any production project requires the coordination of people, resources, and projects. In a classic perfect market, prices on each of these three components lead to matching. A firm expecting a given price for a project will be able to determine how much it can afford to pay for agents and resources for the project. The values of the competing projects, the value of the various people and resources to competing projects, will determine the market-clearing price for any given resource or person, and in turn will decide whether, when, and at what quality the project can be pursued given the market valuation of its output. Ronald Coase’s (1937) highly influential theory of the firm posited that for some resources, people, and projects, the cost of market clearance—finding the right people and resources, contracting for them, overcoming bargaining impasses, and so forth—would be so high that it is more efficient to have managers simply assign people and resources to projects, rather than running continuous auctions for how to get more paper to the printer on the third-floor suite. That is why we have firms.
Once one understands that social exchange is also a transactional framework, widely used for a broad range of goods and services, it is trivial to expand the classic transactions costs theory of the firm to social exchange networks. A market model of fixing a paper jam on the third-floor printer would be one where the person whose printer fails goes online, finds a printer tech support service, and pays them to come fix it. A managerial model would be one where it turns out to be more efficient for a manager to appoint a logistics person, who hires a tech support team once, so that not every person who has a technical problem needs to go to the market and run a search-and-service auction. Instead, the person on the third floor with the broken printer knows that all they need to do is call tech support. A social transactional model is easiest to see at home: the person with the broken printer walks over to their technology-savvy neighbor and asks for help, which the neighbor gives willingly. Next week, perhaps, the first neighbor will reciprocate by watering the techie neighbor’s plants when he is away at a conference. There is no systematic reason why the transactions costs model cannot apply seamlessly to social exchange, which we use all the time in our everyday life without thinking about it. We have long used it extensively to solve economic problems with highly localized characteristics, from childcare and cooking, through social insurance against relatively minor disruptions, to mundane things like moving furniture a short distance within a home or between homes. But for most problems of economic significance, the motivations were too weak and the transactions costs too high to allow these networks to play a truly significant economic role.
Ubiquitous networked communications and the unique properties of information as an economic good make that transactional framework more widely applicable to sophisticated economic production problems than was feasible during the earlier industrial era.
Complexity, Uncertainty, and Open Innovation
The simple transactions cost model of peer production can be supplemented with a more specific view of information and learning that explains why distributed innovation, creativity, or problem solving would have a transactions costs advantage over proprietary and managed systems. A more complete explanation requires a clearer model of how organizations learn. Both managerial control and price clearance require formalization of descriptions of resources, people (that is, their diverse capabilities and availabilities for a given project at a given juncture), and projects into units capable of transmission through the communications system these organizational models represent. The organizational and transactions costs of perfectly defining a price for, or perfectly describing for managerial assessment and decision making, every potential resource or person that diverges somewhat from its neighbors in context and time require abstraction, generalization, and standardization of the characteristics of the resources, people, and projects. Knowing what John or Jane specifically are able to do, given their hobbies or what they read last week, is an overwhelming information problem for a centralized managerial system, and is also an extremely difficult problem for a system that has to translate these capabilities into standardized prices—wages offered and demanded. Instead, what we see is both markets and organizations abstracting from the particularities of the individuals and the discrete resources to relatively stable markers of classes or types of resources—say, setting salaries based on education level or seniority. In that abstraction process, both administrative descriptions and prices are what technologists dealing in communications systems call lossy media: the formalization strips information out of the real-world characteristics of the relevant resources and projects.
The lost information, in turn, leads systems whose functioning depends on discarding that information to underperform relative to systems able to bring a more refined fit of potential resources and agents to better-defined projects.
A global, networked economy in which there is enormous investment in innovation and in which innovation in one place can be used to compete in most other places is one in which complexity and uncertainty are increasing dramatically and at a rapid pace.
Complexity and uncertainty, in turn, make the information problem of matching people, resources, and projects less amenable to managerial or price-based solutions. Complexity and uncertainty put pressure on both neoclassical markets and the new institutional models of firms because the actual properties of resources, people, and projects are highly diverse and interconnected; and the interactions among them are complex, in the sense that small differences in initial conditions or perturbations over time can significantly change the qualities of the interactions and outcomes at the system level. These lead to the known phenomenon of path dependence, both technological and institutional. That is, divergence from efficient and effective practice can persist in the face of systematic, observed inefficiency. The fine-grained, diverse qualities of people, projects, and resources, and the relatively significant divergences that can occur because of relatively fine-grained differences in input combinations or local interactions, mean that it is impossible to abstract and generalize the process into communications units available for managerial decision or price clearance without significant loss of information, control, and, ultimately, effectiveness.
Note that knowledge and learning in the presence of complexity and uncertainty refer to more than a classic notion of innovation, such as creating a new way of doing something that was impossible to do before. Importantly, they also include problem solving, or iterative improvement in how something is done given the persistent absence of complete knowledge about the problem and the solution. If creating the WWW or writable web software like wikis was innovation on a commons-based model, Wikipedia’s organizational innovation is in problem solving more than innovation: how to maintain quality contributions together with potentially limitless expansion, a problem that scarcity absolved Britannica from solving. User-generated content similarly solves for serving more diverse tastes than a more centralized system can; user-created restaurant or hotel reviews solve a complexity-in-implementation problem, given the highly diverse sites to review and the diverse tastes of the people who may want to use the places reviewed. In each case, the peer approach allowed the organizations to explore a space of highly diverse interests and tastes that was too costly for more traditional organizations to explore.
In this model, a critical part of the advantage of peer production incorporates the importance of knowledge that you simply cannot contract for or manage well: either because it is tacit knowledge, or because the number and diversity of people with knowledge that needs to be brought to bear on an implementation problem is too great to contract for. Tacit knowledge is knowledge people possess, but in a form that they cannot communicate. Once you learn how to ride a bicycle, you know how to do so. Yet if you were to sit down and write a detailed memorandum, your reader would not know how to ride a bicycle. It is increasingly clear that tacit knowledge is critical in actual human systems. And peer production allows people to deploy their tacit knowledge directly, without losing much of it in the effort to translate it into the communicable form (an effort as futile as teaching how to ride a bike by writing a memo) necessary for decision making through prices or managerial hierarchies. Where knowledge is explicit, but highly distributed in forms that need to be collated to be effective, the barrier is a simple transactions costs problem. A system that allows agents to explore their environment for problems and solutions, experiment, learn, and iterate on solutions and their refinement, without requiring intermediate formalizations to permit and fund the process, will have an advantage over a system that does require those formalizations; and that advantage will grow as the path to follow, who is best situated to follow it, and which class of solution approaches is most promising become less clearly defined.
Peer production more generally, in particular when it relies on commons—that is, on symmetrical access privileges (with or without use rules) to the resource without transaction—allows (a) diverse people, irrespective of organizational affiliation or property/contract nexus to a given resource or project, (b) dynamically to assess and reassess the available resources, projects, and potential collaborators, and (c) to self-assign to projects and collaborations. By leaving all these elements of the organization of a project to self-organization dynamics, peer production overcomes the lossiness of markets and bureaucracies, whether firm or governmental. It does so, of course, at the expense of incurring new kinds of coordination and self-organization costs. Where the physical capital requirements of a project are either very low, or capable of fulfillment by utilizing pre-existing distributed capital endowments (like personally owned computers), where the project is susceptible to modularization for incremental production pursued by diverse participants, and where the diversity gain from harnessing a wide range of experience, talent, insight, and creativity in innovation, quality, speed, or precision of connecting outputs to demand is high, peer production can emerge and outperform markets and hierarchies.
The benefits of peer production are sufficient that the practice has been widely adopted by firms and other more traditional organizations, including governments. In one study, for example, Josh Lerner and Mark Schankerman (2010) documented that 40 percent of commercial software firms develop some FOSS software. In another book, Charles Schweik and Robert English (2012) laid out the institutional motivations of both firms and governments to adopt these models. In these cases, the access to the diverse developer body and the openness of standards outweighs, for these organizations, the cost of lost appropriability. But the effect holds beyond software. Firms like Yelp or TripAdvisor succeeded against more established competitors in their businesses—restaurant reviews and travel guides, respectively—by building sophisticated platforms that allowed a much more diverse range of nonprofessionals to identify and review their respective targets. Again, in both cases, firms that built platforms for peer production outperformed firms that used more traditional managerial and contract-based approaches.
Commons-based production and peer production are edge cases of a broader range of openness strategies that trade off the freedom to operate that typifies these two approaches against the manageability and appropriability that many more traditional organizations seek to preserve. Some firms are increasingly using competitions and prizes, such as Pfizer’s use of the Innocentive system, to diversify the range of people who work on their problems, without ceding proprietary or contractual control over the project. The prize model allows a firm to specify, with a greater or lesser degree of generality, the problem it is trying to solve, place it on a platform that manages the competition, and allow anyone, from anywhere, to submit solutions. The firm then still gets to select its preferred solution and retain control, while paying only those who work on the problem successfully. This approach offers firms the core benefit of being able to attract a person whom the firm could never have identified through its own networks to work on a problem the firm has identified; what it loses is the diagnostic power of having many diverse people looking at the resource and project space in which the firm is situated, identifying the potential for a new project, or diagnosing a problem the firm does not yet know it has. For that, more thoroughly open strategies are necessary.
Another increasingly critical strategic choice for many firms is participation in networks of firms engaged in a range of open collaborative innovation practices. Open collaborative innovation describes a set of productive practices developed by firms operating in complex, innovation-rich product markets. These practices share with peer production the recognition that the smartest and best people to solve any given problem are unlikely to work in the single firm facing the challenge, and that models of innovation and problem solving that allow diverse people, from diverse settings, to work collaboratively on the problem will lead to better outcomes than production models that enforce strict boundaries at the edge of the firm and assign work by employment contract and ownership of the problem rather than by fit of person to task. Firms in such networks might share employees and designs, and collocate employees for extended periods on a project. They are likely to share intellectual property in the project, or to adopt open standards models that assure each participant that the others cannot defect from the collaborative arrangement. Legal scholars Ronald Gilson, Robert Scott, and Charles Sabel (2008, 2010) have documented how these approaches have developed looser, more open contractual models than the traditional supply contracts of the past, a looseness that replicates some of the benefits of peer production and commons-based production, which remove contractual encumbrances altogether. Open collaborative practice in networks of firms trades off the fully open-to-the-world project definition of peer production or prize systems in exchange for a more manageable set of people, resources, and projects to work with.
A final model of openness that mixes commons with property is the entrepreneurial model at the edge of academia and business. This is the model that typifies Silicon Valley, Cambridge, Massachusetts, and many self-consciously designed "innovation clusters" anchored around universities. On one side of this academia/entrepreneurship boundary sits the academic model, which allows for investment in highly uncertain innovation at the very boundaries of science. The uncertainty and the high social returns are such that the initial funding for the work comes from government and is not intended to be captured commercially. The status-based economy of academia, the public funding, and the publication and presentation norms of academic science contribute both to experimentation and to wide dissemination of findings under terms that allow others to build on and develop the work. They are the commons side of the interface. This, at least, is the idealized model, one that, with declining research budgets and an increasing focus of universities on technology transfer revenues, is far from perfectly true. On the other side of the university-centered innovation cluster are entrepreneurial firms: small, agile, and highly disposable. These allow for high-risk, high-reward investment models, which can experiment, prototype, adopt, and fail or grow much more rapidly than traditional firms. They also provide a membrane through which academics and young academic trainees, recent doctoral or postdoctoral students, can cycle out of the academic system into the market, and back. Some of the larger firms with roots in this model, like Microsoft, Google, or Yahoo, have created research centers that seem to honor the academic model of free exploration to at least as great a degree as the more budget-constrained academic programs do, and people increasingly collaborate across this membrane.
These models, particularly on the information technology side and less so in biotechnology, hew much more closely to commons-based models, with free publication and free exchange with individuals unencumbered by contractual relations, than do the open collaborative innovation models; in exchange, they give up a degree of control and manageability.
What is important to understand about all these models is that they are diverse strategies for dealing with the same core set of challenges posed by increased complexity and uncertainty. They all mark different points in a solution space that trades off manageability, effectiveness, and crisp definition of inputs, outputs, and processes against ease of experimentation, freedom to operate without constraint and permission processes, and the harnessing of diverse motivations, in particular those that do not require translation into monetary terms, a translation that is itself lossy.
Commons-based practices and open innovation offer freedom to operate in the face of the extreme challenges of planning under uncertainty and complexity. They provide an evolutionary model, typified by repeated experimentation, failure and survival, and adoption of successful adaptation rather than the more traditional, engineering-style approaches to building optimized systems with well-understood responses to well-behaved and reasonably predictable change. This model is built on experimentation and adaptation to a highly uncertain and changing environment, emphasizing innovation, resilience, and robustness over efficiency.
A decade ago, Wikipedia and FOSS were widely treated in mainstream economics and business circles as mere curiosities. Anyone who continues to think of them in these terms in the middle of the second decade of the twenty-first century does so at their own peril. Their success represents a core challenge to how we have thought about property and contract, and about organization theory and management, over the past 150 years. Understanding why they have succeeded, and what their particular strengths and limitations are, has become indispensable for anyone who thinks about organizations in a networked information economy.
Coase, Ronald H.
“The Nature of the Firm.” Economica 4, no. 16 (1937): 386–405.
Gilson, Ronald J., Charles F. Sabel, and Robert E. Scott.
“Contracting for Innovation: Vertical Disintegration and Interfirm Collaboration.” Columbia Law Review 109, no. 3 (April 2009). http://ssrn.com/abstract=1304283
———. “Braiding: The Interaction of Formal and Informal Contracting in Theory, Practice and Doctrine.” Stanford Law and Economics Olin Working Paper No. 389; Columbia Law and Economics Working Paper No. 367. January 11, 2010. http://ssrn.com/abstract=1535575
Lerner, Josh, and Mark Schankerman.
The Comingled Code: Open Source and Economic Development. Cambridge, MA: MIT Press, 2010.
Schweik, Charles M., and Robert C. English.
Internet Success: A Study of Open-Source Software Commons. Cambridge, MA: MIT Press, 2012.