
In an age fueled by knowledge and global markets, one might expect that knowledge would be bought and sold vigorously and often—and that knowledge markets would eclipse markets for tangible commodities such as wheat and pork bellies. Why haven’t markets for knowledge exploded, along with the Internet and the Web?

Markets

The Web gave us global electronic commerce, opening markets for small craftsmen and allowing hundreds of millions to buy almost anything anywhere from their own homes. Global search engines such as Google help any potential buyer find any potential seller. Market aggregators such as eBay and Amazon match rare and specialized interests. PayPal, credit cards, and electronic fund transfers move money effortlessly, whether the goods are physical or virtual. The infrastructure is global by default. Borders are crossed routinely.

But new knowledge is more complicated. There are markets for knowledge, such as university-developed technology (iBridge Network), patents (Ocean Tomo), and even markets for solving tough problems (Innocentive). But markets for new knowledge are “thin” and weak. New knowledge is by definition unique. It is difficult or impossible to convey remotely through standardized transactions.

Transactions demand attention. And yes, the Web has enabled transactions at a distance, but it has also greatly enabled simple transfers. Many of us who paid attention to the early Internet thought that it would offer a smorgasbord of metered content. That was the model for electronic publishing as we knew it—i.e., high-value legal and medical information. But we were wrong. The Internet and the Web made free transfers so powerful and efficient (too powerful in the case of spam) that transactions came to look intellectually and psychologically demanding. Free enabled us to surf effortlessly. Imagine, information too cheap to meter! (As was once said about atomic energy.)

The cost of storing, distributing, and processing information plummeted. It turned out that, as costs evaporate, there are many ways of supporting information other than payment by the drink. Much of the content on the Web was, and is, volunteered. As the Web exploded, it turned out that information was not in short supply. Attention was the scarce resource. Advertising was missing in the noncommercial research environment in which the Internet arose, but, in the US, it was advertising that made television “free.” Advertising already covered most of the cost of newspapers and magazines in the large US markets. Maybe it could even cover all the costs if physical production and distribution could be eliminated, especially with the opportunity to reach new readers.

Free enabled entrepreneurs to build market share. Free got people in the door and engaged. The low costs of free created a huge opportunity for “first movers” in cyberspace. Powerful network effects suggested that each service or product category would produce only one winner, and that winner would capture the market.

Free information and content could build relationships and help sell almost anything that was not a mere commodity. Free versions sold premium versions (software). Free community sold tangible products (Amazon’s community of book reviewers). Volunteered contributions promoted reputations (programmers contributing to open source projects).

The glut of transaction-free information made competition for attention intense. Advertisers bought not just eyeballs but attention demonstrated by action (“click-throughs”). Websites got very sophisticated at matching viewers and advertisers. Google’s combination of algorithmic searches with paid listings was simple and stunningly effective at marrying free information and paid promotion, while keeping the two distinct. Most important, it made advertising far more efficient by linking it to specific words rather than crude demographics.
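As a toy sketch of that mechanism, with invented advertisers, keywords, and bid amounts (none of this reflects Google's actual system), paid listings can be keyed to the words in a query rather than to demographics:

```python
# Toy keyword-targeted advertising: invented advertisers and bids.
# Listings are matched to the words a user searches, not to demographics.
paid_listings = {
    "mortgage": [("LenderCo", 4.10), ("HomeBank", 3.75)],
    "sneakers": [("ShoeShop", 0.90)],
}

def ads_for(query):
    """Return advertisers bidding on any word in the query, highest bid first."""
    matches = []
    for word in query.lower().split():
        matches.extend(paid_listings.get(word, []))
    return sorted(matches, key=lambda ad: ad[1], reverse=True)

print(ads_for("best mortgage rates"))  # [('LenderCo', 4.1), ('HomeBank', 3.75)]
```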

Knowledge

Paradoxically, we know too little about knowledge. Or perhaps there is too much to know. Knowledge is context-dependent and takes many different forms, whether embodied in things or in people. Knowledge packaged as “content,” such as newspapers and encyclopedias, behaves much like information. In a digital world, it can be easily reproduced and broadcast all over the globe, with or without the owner’s permission. But really valuable knowledge is unique, complex, and “sticky.” It often resides in multidisciplinary teams with close working relationships and includes knowledge in process and knowledge of what doesn’t work. This makes it difficult to measure, and for many, if you can’t measure it, it doesn’t count!

Certain forms of knowledge are better at generating numbers than others: for example, textbooks, encyclopedias, journal subscriptions, computer software, patents, licensing fees, enrollments, government funding, R&D expenditures, professional services, and salaried positions. Knowledge is embedded in mass-market products with very large numbers, such as movies and automobiles, although it just sits there inextricable and immutable. For the sake of economic growth, we want more than numbers. We want useful knowledge, valuable knowledge, knowledge that leads to innovation (or that prevents catastrophes).

We would like knowledge that contributes to productive enterprise, creates more knowledge, and leads to innovation, such as software that enables people to do new things in new ways. The more knowledge keeps producing, the more it looks like an asset, and the more valuable it is. One of the great moments in econometrics was the decision by the US Department of Commerce to treat software as an asset rather than as an expense in calculating the national accounts.

We also want people who create new knowledge or innovate. We often hear: “Our employees are our most valuable assets,” but people are not assets in the usual sense. Slavery and indentured servitude are long gone. Employees can walk out the door tomorrow—although you may be able to stop them from going to work for a competitor if they have signed a non-compete clause.

California does not enforce non-compete clauses, and this has been credited in part with the success of Silicon Valley. You may lose someone to a competitor’s project, but you may gain access to the right person for your next project. Innovation depends on the flow of knowledge from different sources and directions, and smart knowledge workers may be more versatile, and more valuable, when they are free to find the best fit.

Collaboration

Transactions can be as simple as they are on the floor of a commodity exchange—a straight sale of a well-known item: only the price changes. When there are unknowns, some negotiation may be needed, but the transaction may remain a single-shot deal. If both sides are happy, they may transact again, and again, building into a relationship in which the parties increasingly trust each other. This reduces the costs of transacting and allows an increase in the scale or depth of interaction. If it looks like a long-term relationship, the parties may exchange ideas and information alongside the transactions.

Just as it enables transactions and transfers, the Internet facilitates collaboration: not only transaction-based relationships but ongoing joint activities, including contracted R&D. But the biggest impact of the Internet has been on many-to-many collaboration, in which diverse parties work together towards common ends.

Today we take for granted that we can have an ongoing group discussion by email. In the analog world, group discussions were only practical if everybody was in the same room—or, occasionally, on the same phone call. But in-person meetings and conference calls have to be scheduled, organized, and led. Email provides informal, spontaneous, tailorable alternatives to meetings, phone calls, and memos up and down the chain of command. Wikis enable structured communications and the aggregation of knowledge as a group project. Other forms of groupware support processes needed for software development and other projects.

These effects of information technology fit nicely with what institutional economists see as the rationale for the firm—a vehicle organizing certain activities more efficiently than in the market. Because the firm is under common ownership, knowledge can be exchanged freely within its walls without fear that it will be misappropriated and without the burden of entering into formal transactions. In theory, at least.

Back in the 1980s, there was no public Internet. Networks were private, and email was internal to the firm. IT promised to flatten hierarchies, accelerate the sharing of information, and make the knowledge of all employees available throughout the firm. Knowledge management was touted as a tool for optimizing the sharing and use of knowledge within the firm. Inspired by what IT could do, knowledge management recognized the need to overcome habit and engage people in effective sharing.

Other changes were underway, driven by global trade, increasing competition, the logic of specialization, and strategic focus. Companies divested themselves of units they saw becoming less competitive or less integral or complementary to core competence. The most famous example is IBM, which sold the PC business that had long reigned as an industry standard as it focused increasingly on providing a full range of IT-related services.

Open Innovation

Outsourcing was initially driven by the cost advantages of moving manufacturing to low-cost countries, such as China. But large companies began reconsidering the value of maintaining high-cost R&D labs. The not-invented-here syndrome withered as high-quality products and technology appeared from new sources worldwide. Product managers saw that they could often contract for or acquire technology on the outside as needed, more efficiently than they could develop it in house, and without being locked into whatever the company was producing. Nor, of course, did it make sense to be locked into a single outside partner. R&D management became more of an art: it required an understanding of developments worldwide, together with strategic acquisition, building relationships with other firms and universities, and learning to collaborate.

“Open innovation” means looking to outside sources for innovation—specifically, for the research, components, and other ingredients that the firm needs to develop innovative products and services. It does not necessarily mean “open” in the sense of nonproprietary, free, or transparent. But it implies understanding how the global innovation ecosystem works, not just a willingness to acquire pieces of technology from others.

As products and services have become more complex and supply chains have broadened and deepened, the nature of innovation has changed, in some sectors more than others. In systems industries, such as information and communications technology, innovation is less about isolated inventions and more about the way things go together—integration, interoperation, and design. In this context, value arises from sharing knowledge, not just capturing it and excluding others from using it.

New products and services do not come out of the blue; they build on functions and features that users know—and on standards that everyone in the industry uses. Investments build on other investments, past, present, and future, because components, systems, and habits are designed to work together. Common specifications at critical points keep producers from being locked into particular suppliers and users from being locked into producers. Users want their information to flow back and forth across product boundaries. Their biggest investment is the information itself, and they want as much freedom as possible to manage it as they see fit.

Infrastructure

The Internet is the driving paradigm for interoperability. It showed how an unregulated, nonproprietary platform could be rapidly picked up and used by anyone for a variety of purposes. Anyone could provide Internet services, and anyone could build new functionality on top of the Internet independent of the service provider. Unconstrained, either vertically or horizontally, network effects went wild. More connections, more uses, and more demand all fed each other. Unlike the proprietary networks of the 1980s, the Internet offered a public global addressing system that had two tiers mapping precisely to each other: numbers for routing and names for identification.
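A minimal sketch of that two-tier mapping, using only the Python standard library (the hostname here is merely illustrative):

```python
import socket

# Two tiers of public addressing: names for identification,
# numbers for routing. DNS maps one onto the other.
name = "example.com"  # illustrative hostname
address = socket.gethostbyname(name)  # resolve the name to a numeric address
print(f"{name} resolves to {address}")
```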

Once on the Internet, you could use it freely for email, remote log-in, file transfer, or any of the other services that might come along. You did not have to subscribe to each individually, and could even implement new services on your own, provided you could find others to interact with. Instead of “service” in the sense of one-way offering from a provider to a customer, “service” on the Internet was a commonly agreed-on protocol implementable by anyone, peers as well as providers. And the scope of the service was defined by the implementers: the distribution of an email to five people created its own network.

At the same time, data networking radically changed the economics of communications and information sharing: it offered digital text on a physical infrastructure that was built for voice and paid for by the costly economics of voice. Text is so efficiently encoded that adding it was virtually costless. Too cheap to meter.

And text is not just content. It can be searched, mapped, and matched against other text, and specify its own location. It can provide information about itself. Using domain names, it can create networks.

Introduced in the early 1990s, the World Wide Web was a service so powerful that it created another platform on top of the Internet. The Web combined two standards: HTTP, a protocol for linking and transmitting information over the Internet, and HTML, a markup language for structuring and displaying it. It was a higher level of infrastructure based purely on information—infrastructure that anyone could assemble if they knew how to embed links in text and upload the linked pages.
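As a minimal illustration of how the two standards fit together, this sketch (Python standard library only; the URL is illustrative) fetches a page over HTTP and extracts the hyperlinks embedded in its HTML, the links that weave individual pages into a web:

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collect the targets of <a href="..."> hyperlinks in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# HTTP transmits the page; HTML carries the embedded links.
page = urlopen("http://example.com/").read().decode("utf-8", "replace")
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # the pages this document links itself to
```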

Hyperlinks, both internal and external, provide context—a simple but important step from mere content toward knowledge. Now documents can define their relationship with each other and actively transcend their own boundaries. Previously, footnotes and bibliographic references required the reader to act and slowed the construction of context.

In 1911, Alfred North Whitehead wrote:

It is a profoundly erroneous truism, repeated by all copybooks and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilisation advances by extending the number of important operations which we can perform without thinking about them.

Of course, we want to think. We just do not want to be distracted by self-consciousness, routine operations, or unnecessary transactions. We do not want to pause to evaluate the transaction, seek budget approval, negotiate terms, or consult lawyers. We want our thinking agile and uninterrupted.

Information technology has given us the tools and the infrastructure to make research and analysis faster and more efficient. In many fields, working drafts are widely shared, often openly. We search on key terms to scope and calibrate our thinking. Search enables us not only to discover key documents but also to see the relationships among them. We can do all this with minimal attention to the process because what the technology is doing is buried out of sight and out of mind.

For academic researchers, producing knowledge is closely tied to using knowledge, so the immediacy of the Web is very valuable. But it clashes with the vestiges of print culture. Ironically, the Web may work better for established scholars, who can post papers on open-access servers where their work is quickly recognized and read. Young scholars lack name recognition and may be desperate to publish in prominent established journals that forbid prior exposure on the Web. The famous get more famous, while the unknown struggle in the shadow of the old print chain with its asymmetrical relationships, enforced exclusivity, and transactional barriers.

From Product to Process

The power of the emerging knowledge infrastructure puts more value on process, intellectual skills, and capacity. Peer-review validation and formal publication are still important, but as knowledge flows accelerate, leadership is seen in debate and exchange. We no longer just fill students’ heads with knowledge; we teach them to think. Intellectual property is still important, but in technology-empowered, fast-moving environments, other factors are ascendant: absorptive capacity, learning-curve mastery, and first-mover advantages.

In developed economies, the service sector now dominates—and the labor devoted to the production of things diminishes. Intense global competition has commoditized manufacturing, making it less profitable and attractive than differentiable services that build on long-term relationships and revenue streams. Services can be customized and enhanced to meet customer needs. Services build on skills uniquely available in advanced economies, including competencies in supply chain management, R&D coordination, international asset deployment, marketing, and franchising.

Yet we know much more about manufacturing, agriculture, and mining than we know about services. Even basic data like R&D expenditures are problematic. Services are not an established part of the management curriculum. Major companies have pushed the case for “service science” as a subject of both research and education, but with little impact to date.

It is not even clear what we mean by “service.” The term evokes a fundamental asymmetry that distinguishes sellers from buyers, providers from customers. It suggests one-way delivery rather than a two-way relationship. Yet in an ecosystem where complements abound, it is not always clear which way is up—or down. Since value can be added from different directions, it makes more sense to speak of value clusters than of value chains. It is not the objects within the cluster that are important, but the vitality of the cluster and its ability to keep generating new value.

But how ecosystems keep generating new value is not intuitive to outsiders. Policymakers understand the pipeline model, in part because it looks like the assembly line for an automobile. Research goes in one end; universities turn research into patents, patents are licensed to companies, who turn them into products, and products come out the other end. Patents provide controlled exclusivity, which keeps the pipe intact and justifies the investment needed to keep the process flowing. The process is simply taken for granted, since it always looks the same.

Patents

It is tempting to see patents as the currency of the knowledge economy. Compared to other forms of knowledge, patents look like pieces of property with defined boundaries that can be controlled and transacted in the marketplace. In principle, patents promote public disclosure in return for the patent owner’s right to exclude others from using the technology. So they seem to solve the basic paradox of transacting knowledge. You don’t know what the value of knowledge is until you have it, but once you have it there is no need to pay for it.

The patent system was designed for a simpler world of machines and materials that did very specific things and were used to do those things without modification. However, information technology is distinguished by the extraordinary scope and scale of functional knowledge, for an infinite variety of purposes, that can be embedded in a very small space, such as a chip or a computer program loaded into memory. As the cost of transmission and storage has plummeted, a full-featured 10-megabyte software program can be stored on hard-drive “real estate” worth less than one-tenth of one cent. Yet a single program will have thousands of “function points,” a measure of the complexity of the code (around 100,000 in Windows XP). The program will have many overlapping patentable functions at higher levels of abstraction as well, all the way up to the main purpose of the program. Most of this functionality is in the public domain, either because it was never patented or because the patent has expired. However, unlike copyright law, patent law does not allow independent creation as a defense, so innovators are charged with knowledge of all patents. In principle, they are obliged to look—to do clearance searches to determine whether the product or service they are developing infringes someone else’s patent.

Where do they start? One person’s “clever hack” may be another person’s patent. The language used to describe software is abstract, ambiguous, and changes over time. The functions in your software must then be matched against what are often dozens of claims within the patent to evaluate the possibility of infringement. If it looks like there may be infringement, you can redesign your software to “invent around” the claims—or you can investigate further as to whether various claims within the patent are valid. Since it is commonly assumed that half of software patents are invalid, it may be worth assessing the validity of a problem patent. However, a legal opinion on infringement costs more than $13,000 on average in the US. If infringement appears possible, an opinion on the validity of the patent costs an additional $15,000 or more. These average figures are per patent, and since any function may be a candidate for infringement, these figures can multiply very quickly for complex products, especially if the inventive-step standard is low. In fact, it is much cheaper to seek a patent than to do product clearances, since applying does not even require searching. These high transaction costs make more sense in pharmaceuticals, where there is one principal patent per product—but not for the complexity of IT.
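A back-of-the-envelope calculation, using the per-patent averages just cited together with assumed, purely illustrative patent counts, shows how quickly clearance costs compound:

```python
# Per-patent averages cited above; the patent counts are assumptions
# for illustration only.
INFRINGEMENT_OPINION = 13_000  # average US cost of an infringement opinion
VALIDITY_OPINION = 15_000      # additional cost if infringement looks possible

problem_patents = 40           # assumed for a moderately complex IT product
validity_checks = 10           # assumed subset where infringement looks possible

total = (problem_patents * INFRINGEMENT_OPINION
         + validity_checks * VALIDITY_OPINION)
print(f"Clearance estimate: ${total:,}")  # Clearance estimate: $670,000
```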

Paradoxically, we think of digital technology as infinitely precise in the way it handles digital information and content. But patents on digital technology, especially software, are, as scholars describe them, merely “probabilistic.” Major companies have dealt with the complexity of the technology and the proliferation and uncertainty of patents by building up large defensive portfolios and cross-licensing these portfolios to each other. This gives them “freedom to operate,” at least with respect to their principal competitors. However, small companies that bring few patents to the table are at a disadvantage and must pay for access to portfolios. They may be better off withdrawing from the product market and using their patents aggressively against companies producing for the market.

As noted, individual patents may help promote transactions in technology (such as contract R&D) because they allow sharing of knowledge to take place while preserving control under the patent. A patent-focused transaction may also help allocate risk and responsibility for unknown patents that may be owned by others.

But as transactions become complex and start to look more like Web-empowered collaboration, patents raise many questions about who controls how much, now and down the road. A simple joint research project requires agreement on who brings what patents to the project and how others in the project can use these rights. It also requires agreement on how technology developed in the course of the project will be owned, managed, and licensed—not only for the core collaborators but also for future collaborators, spin-offs, and outsiders. The more uncertainty in the project (and innovative projects tend toward uncertainty), the more difficult it will be to anticipate and address contingencies. What happens as collaborators come and go? How easy should entry and exit be? When does the project become a joint venture with continuing life—or a new company? Remember that the easiest way to deal with coordination problems may be within the walls of a single firm. At the same time, information infrastructure enables many-to-many collaboration that previously could be done only within the firm, relying heavily on face-to-face interaction.

Many of these problems arise in the development of information technology standards, a collaborative enterprise critical to advancing innovation. In earlier times, participants were far fewer and more homogeneous. Patent interests and producers were well aligned, and everybody knew each other. Today an immense diversity of interests, large and small, upstream and downstream, converge on critical standards projects. There is advantage in hiding patents and asserting them only after the standard has been finalized, adopted, and widely implemented.

Where large numbers of implementers are expected, which is typically the case with software standards, there is great pressure to require that any patents be licensed royalty-free so the standard will be adopted quickly, widely, and without giving legal advantage to anyone. Yet this does not solve the problem of patent holders outside the process, who have agreed to nothing and may do well by ambushing the many users of a free, widely implemented standard.

Fences in Cyberspace

In the real world, borders are two-sided. They separate one jurisdiction from another—or ownership of one parcel of land from another. The standardized interface in digital technology is a similar common border. Like the fence in real space, it separates one component from another. But an interface is not just a line in the sand; it is a “smart border” that enables information to move across it.

A patent looks like a fence. But it is not a joint fence between two landowners established by common agreement on a common border. Rather, it is a fence constructed in words by one party, trying to claim as much as possible—against the world, rather than any identified neighbor.

Contrary to what many assume, patents are not rights to exploit technology. They are only rights to keep others from doing so—a negative right. Patents are fences, rather than the knowledge behind the fence. At least they are aspirational fences. Just where the fences are depends on what the claims mean, and what trial judges think they mean is overturned on appeal 30 to 40% of the time.

Nonetheless, the fences seem to work reasonably well in pharmaceuticals, where exclusivity is the norm, researchers read patents, borders are as well-defined as molecules, and the high costs of R&D and clinical testing more than justify the high costs of dealing with patents.

But the defensive portfolio races in IT are basically a way to overlook fences among competitors while buttressing market position (ideally by creating patent “thickets”) so as to discourage new entrants. High demand pushes patent offices toward a customer-service model, which makes patents easy to get, for startups as well as portfolio owners. However, companies fail, especially startups, and their patents end up acquired by a variety of patent aggregators, speculators, and “trolls.”

What drives value in these patent markets is the opportunity for arbitrage based on “being infringed.” The winners are those whose fences have been inadvertently embedded in somebody’s valuable product, and research shows that less than 3% of software patent lawsuits in the US allege copying. In other words, over 97% of infringement appears to be inadvertent.

How can this happen? As leading patent scholar Mark Lemley explains:

…both researchers and companies in component industries simply ignore patents. Virtually everyone does it. They do it at all stages of endeavor. From the perspective of an outsider to the patent system, this is a remarkable fact. And yet it may be what prevents the patent system from crushing innovation in component industries like IT.

As Texas Instruments (TI) testified before the Federal Trade Commission:

TI has something like 8000 patents in the United States that are active patents, and for us to know what’s in that portfolio, we think, is just a mind-boggling, budget-busting exercise to try to figure that out with any degree of accuracy at all.

And if a well-resourced company like TI doesn’t know what’s in its own portfolio, how can SMEs make sense of the hundreds of thousands of patents that they face in the marketplace?

As I would put it: In a virtual world where functional knowledge is massive and cheap, knowledge of patents has become virtually unaffordable.

How did we get here? Wasn’t the patent system supposed to be about promoting public disclosure of knowledge? How did patents end up undermining the market for products and services?

Institutionalizing Ignorance

In a world gone global, patents remain territorial, a creation of national law that extends only to the border of the country. The TRIPS agreement, negotiated in the Uruguay Round of trade talks that created the World Trade Organization, did not create a global patent system, nor did it harmonize national laws. The idea was to set minimum standards to which all countries could adhere.

TRIPS states:

…patents shall be available and patent rights enjoyable without discrimination as to the place of invention, the field of technology and whether products are imported or locally produced.

Slipped in between two broadly accepted principles of trade policy is a prohibition against discriminating among fields of technology. Where did that come from? Are technologies so anthropomorphic that they are victimized by discrimination? Isn’t knowledge all about discriminating among different things, so that they can be treated differently? Patents are awarded to technologies that are different, not to those that are the same.

The clause illustrates the dangers of international agreements negotiated in rarefied secrecy. It was put there to assure that all signatory countries would allow patents on drugs as products, but instead of making the pharmaceutical industry’s interest explicit, it was recast as a lofty principle of nondiscrimination. Despite the fact that this nondiscrimination provision was without precedent in any national laws, it became a virtually unchallengeable constitutional principle that appeared to lock the world into a naïve view of technology and an inability to develop evidence-based patent policy.

Scholars have argued persuasively that discrimination does not mean differentiation. But nuance is hard to sustain. When lawyers invoke “international obligations,” the conversation ends.

Conclusion

The institutionalized ignorance of TRIPS is only the most concrete sign of the general problem. The scope of knowledge has outgrown our ability to make sense of it. A coherent perspective on knowledge and where it is going in a world of weak borders may be too much to ask for. But we can at least see some of the gaps and failings.

The disciplines that we might look to are limited by their own epistemologies. What is, in a real sense, everybody’s business ends up being nobody’s business. Knowledge management could not be extended beyond the firm because it ran into legal controls on knowledge that did not operate within the firm. If service science is to connect, it must somehow assimilate collaboration science. The insularity of the patent system leads to discriminatory results, disfavoring some and favoring others.

Knowledge today takes new and diverse forms that are addressed within different communities. It’s no longer just know-how, know-why, know-what, etc.

For example, there is the growing importance of software with its many aspects and levels of abstraction, the critical role of standards as a vehicle for moving information, layers of information infrastructure built on the Internet and the Web, the expanded role of patents (especially with respect to information technology and abstract subject matter), and the rise of social networks and environments. Is it even possible to look at such diverse forms as a functional whole? At the same time, we are increasingly aware that knowledge is sometimes a liability, that it can be incomplete, misleading, or infringing, as well as wrong.

Can we at least agree on words? There are many indispensable words that resist definition, and I admit to using many of them: networks, open, innovation, service, markets, and knowledge itself. They carry too much freight, too much nuance, too much context for simple public discourse. By spawning unrecognized diversity, they end up meaning too much—and therefore meaning too little. Nonetheless, these words occupy a lot of space and are secure in their own inertia.

So I have used them.
