Who can afford to do professional work for nothing? What hobbyist can put 3-man years into programming, finding all the bugs, documenting his product and distribute it for free? The fact is, no one besides us has invested a lot of money into hobby software. We have written 6800 BASIC, and are writing 8080 APL and 6800 APL, but there is very little incentive to make this software available to hobbyists. Most directly, the thing you do is theft.4
He signed the letter "Bill Gates, General Partner, Micro-Soft." The hyphen would disappear in time. The philosophy stuck around.
Though there are quibbles about the facts in Gates's letter—critics claim he himself did a lot of free riding on public domain code and government-funded computer time—his basic point is that software needs to be protected by (enforceable) property rights if we expect it to be effectively and sustainably produced. Some software developers disagree. But assuming one concedes the point for the sake of argument, there is a second question: should software be covered by copyright or patent, or some unidentified third option?
In practice, software ended up being covered by both schemes, partly because of actions by Congress, which included several references to software in the Copyright Act, and partly as a result of decisions by the Copyright Office, the Patent and Trademark Office, and judges. One could copyright one's code and also gain a patent over the "nonobvious," novel, and useful innovations inside the software.
At first, it was the use of copyright that stirred the most concern. As I explained in the last chapter, copyright seems to be built around an assumption of diverging innovation—the fountain or explosion of expressive activity. Different people in different situations who sit down to write a sonnet or a love story, it is presumed, will produce very different creations rather than being drawn to a single result. Thus strong rights over the resulting work are not supposed to inhibit future progress. I can find my own muse, my own path to immortality. Creative expression is presumed to be largely independent of the work of prior authors. Raw material is not needed. "Copyright is about sustaining the conditions of creativity that enable an individual to craft out of thin air an Appalachian Spring, a Sun Also Rises, a Citizen Kane."5
There are lots of reasons to doubt that this vision of "creation out of nothing" works very well even in the arts, the traditional domain of copyright law. The story of Ray Charles's "I Got a Woman" bears ample witness to those doubts. But whatever its merits or defects in the realm of the arts, the vision seems completely wrongheaded when it comes to software. Software solutions to practical problems do converge, and programmers definitely draw upon prior lines of code. Worse still, as I pointed out earlier, software tends to exhibit "network effects." Unlike my choice of novel, my choice of word processing program is very strongly influenced, perhaps dominated, by the question of what program other people have chosen to buy. That means that even if a programmer could find a completely different way to write a word processing program, he has to be able to make it read the dominant program's files, and mimic its features, if he is to attract any customers at all. That hardly sounds like completely divergent creation.
Seeing that software failed to fit the Procrustean bed of copyright, many scholars presumed the process of forcing it into place would be catastrophic. They believed that, lacking patent's high standards, copyright's monopolies would proliferate widely. Copyright's treatment of follow-on or "derivative" works would impede innovation, it was thought. The force of network effects would allow the copyright holder of whatever software became "the standard" to extract huge monopoly rents and prevent competing innovation for many years longer than the patent term. Users of programs would be locked in, unable to shift their documents, data, or acquired skills to a competing program. Doom and gloom abounded among copyright scholars, including many who shared Mr. Gates's basic premise—that software should be covered by property rights. They simply believed that these were the wrong property rights to use.
Copyright did indeed cause problems for software developers, though it is hard to judge whether those problems outweighed the economic benefits of encouraging software innovation, production, and distribution. But the negative effects of copyright were minimized by a remarkably prescient set of actions by courts and, to a much lesser extent, Congress, so that the worst scenarios did not come to pass. Courts interpreted the copyright over software very narrowly, so that it covered little beyond literal infringement. (Remember Jefferson's point about the importance of being careful about the scope of a right.) They developed a complicated test to work out whether one program infringed the details of another. The details give law students headaches every year, but the effects were simple. If your software was similar to mine merely because it was performing the same function, or because I had picked the most efficient way to perform some task, or even because there was market demand for doing it that way, then none of those similarities counted for the purposes of infringement. Nor did material that was taken from the public domain. The result was that while someone who made literal copies of Windows Vista was clearly infringing copyright, the person who made a competing program generally would not be.
In addition, courts interpreted the fair use doctrine to cover "decompilation"—which is basically taking apart someone else's program so that you can understand it and compete with it. As part of the process, the decompiler had to make a copy of the program. If the law were read literally, decompilation would hardly seem to be a fair use. The decompiler makes a whole copy, for a commercial purpose, of a copyrighted work, precisely in order to cause harm to its market by offering a substitute good. But the courts took a broader view. The copy was a necessary part of the process of producing a competing product, rather than a piratical attempt to sell a copy of the same product. This limitation on copyright provided by fair use was needed in order to foster the innovation that copyright is supposed to encourage. This is a nice variation of the Sony Axiom from Chapter 4.
These rulings and others like them meant that software was protected by copyright, as Mr. Gates wanted, but that the copyright did not give its owner the right to prevent functional imitation and competition. Is that enough? Clearly the network effects are real. Most of us use Windows and most of us use Microsoft Word, and one very big reason is because everyone else does. Optimists believe the lure of capturing this huge market will keep potential competitors hungry and monopolists scared. The lumbering dominant players will not become complacent about innovation or try to grab every morsel of monopoly rent, goes the argument. They still have to fear their raptor-like competitors lurking in the shadows. Perhaps. Or perhaps it also takes the consistent threat of antitrust enforcement. In any event, whether or not we hit the optimal point in protecting software with intellectual property rights, those rights certainly did not destroy the industry. It appeared that, even with convergent creativity and network effects, software could be crammed into the Procrustean bed of copyright without killing it off in the process. Indeed, to some, it seemed to fare very well. They would claim that the easy legal protection provided by copyright gave a nascent industry just enough protection to encourage the investment of time, talent, and dollars, while not prohibiting the next generation of companies from building on the innovations of the past.
In addition, the interaction between copyright and software has produced some surprising results. There is a strong argument that it is the fact that software is copyrightable that has enabled the "commons-based creativity" of free and open source software. What does commons-based creativity mean? Basically, it is creativity that builds on an open resource available to all. An additional component of some definitions is that the results of the creativity must be fed back into the commons for all to use. Think of English. You can use English without license or fee, and you can innovate by producing new words, slang, or phrases without clearance from some Academie Anglaise. After you coin your term, it is in turn available to me to build upon or to use in my own sentences, novels, or jokes. And so the cycle continues. As the last chapter showed, for the entire history of musical creativity until the last forty years or so, the same had been true of at least a low level of musical borrowing. At the basic level of musical phrases, themes, snatches of melody, even chord structures, music was commons-based creativity. Property rights did not reach down into the atomic structure of music. They stayed at a higher level—prohibiting reproduction of complete works or copying of substantial and important chunks. So in some areas of both music and language, we had commons-based creativity because there were no property rights over the relevant level. The software commons is different.
The creators of free and open source software were able to use the fact that software is copyrighted, and that the right attaches automatically upon creation and fixation, to set up new, distributed methods of innovation. For example, free and open source software under the General Public License—such as Linux—is a "commons" to which all are granted access. Anyone may use the software without any restrictions. They are guaranteed access to the human-readable "source code," rather than just the inscrutable "machine code," so that they can understand, tinker, and modify. Modifications can be distributed so long as the new creation is licensed under the open terms of the original. This creates a virtuous cycle: each addition builds on the commons and is returned to it. The copyright over the software was the "hook" that allowed software engineers to create a license that gave free access and the right to modify and required future programmers to keep offering those freedoms. Without the copyright, those features of the license would not have been enforceable. For example, someone could have modified the open program and released it without the source code—denying future users the right to understand and modify easily. To use an analogy beloved of free software enthusiasts, the hood of the car would be welded shut. Home repair, tinkering, customization, and redesign become practically impossible.
Of course, if there were no copyright over software at all, software engineers would have other freedoms—even if not legally guaranteed open access to source code. Still, it was hard to deny that the extension of the property regime had—bizarrely, at first sight—actually enabled the creation of a continuing open commons. The tempting real estate analogy would be environmentalists using strong property rights over land to guarantee conservation and open access to a green space, where, without property rights, the space could be despoiled by all. But as I have pointed out earlier, while such analogies may help us, the differences between land and intellectual property demand that they be scrutinized very carefully. It is hard to overgraze an idea.
So much for copyright. What about patents? U.S. patent law had drawn a firm line between patentable invention and unpatentable idea, formula, or algorithm. The mousetrap could be patented, but not the formula used to calculate the speed at which it would snap shut. Ideas, algorithms, and formulae were in the public domain—as were "business methods." Or so we thought.
The line between idea or algorithm on the one hand and patentable machine on the other looks nice and easy. But put that algorithm—that series of steps capable of being specified in the way described by the Turing machine—onto a computer, and things begin to look more complex. Say, for example, that algorithm was the process for converting miles into kilometers and vice versa. "Take the first number. If it is followed by the word miles, then multiply by 8/5. If it is followed by the word kilometers, multiply by 5/8 . . ." and so on. In the abstract, this is classic public domain stuff—no more patentable than E=mc² or F=ma. What about when those steps are put onto the tape of the Turing machine, onto a program running on the hard drive of a computer?
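The conversion "algorithm" above can be written out as a few lines of code, which is exactly the point: the program is nothing more than the public domain steps, transcribed. (This sketch is my own illustration; the function name and error handling are not from the original.)

```python
def convert(value, unit):
    """The mile/kilometer algorithm from the text, transcribed literally.

    If the number is followed by the word 'miles', multiply by 8/5
    (the rough approximation of 1.609 km per mile); if it is followed
    by the word 'kilometers', multiply by 5/8.
    """
    if unit == "miles":
        return value * 8 / 5       # miles -> kilometers
    elif unit == "kilometers":
        return value * 5 / 8       # kilometers -> miles
    raise ValueError(f"unknown unit: {unit}")

print(convert(10, "miles"))        # 16.0
print(convert(16, "kilometers"))   # 10.0
```

Running these steps on a computer changes nothing about their content; the question the Federal Circuit faced was whether it nonetheless changes their legal status.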
The Court of Appeals for the Federal Circuit (the United States's leading patent court) seems to believe that computers can turn unpatentable ideas into patentable machines. In fact, in this conception, the computer sitting on your desk becomes multiple patentable machines—a word processing machine, an e-mail machine, a machine running the program to calculate the tensile strength of steel. I want to stress that the other bars to patentability remain. My example of mile-to-kilometer conversion would be patentable subject matter but, we hope, no patent would be granted because the algorithm is not novel and is obvious. (Sadly, the Patent and Trademark Office seems determined to undermine this hope by granting patents on the most mundane and obvious applications.) But the concern here is not limited to the idea that without a subject matter bar, too many obvious patents will be granted by an overworked and badly incentivized patent office. It is that the patent was supposed to be granted at the very end of a process of investigation and scientific and engineering innovation. The formulae, algorithms, and scientific discoveries on which the patented invention was based remained in the public domain for all to use. It was only when we got to the very end of the process, with a concrete innovation ready to go to market, that the patent was to be given. Yet the ability to couple the abstract algorithm with the concept of a Turing machine undermines this conception. Suddenly the patents are available at the very beginning of the process, even to people who are merely specifying—in the abstract—the idea of a computer running a particular series of algorithmic activities.
The words "by means of a computer" are—in the eyes of the Federal Circuit—an incantation of magical power, able to transubstantiate the ideas and formulae of the public domain into private property. And, like the breaking of a minor taboo that presages a Victorian literary character's slide into debauchery, once that first wall protecting the public domain was breached, the court found it easier and easier to breach still others. If one could turn an algorithm into a patentable machine simply by adding "by means of a computer," then one could turn a business method into something patentable by specifying the organizational or information technology structure through which the business method is to be implemented.
If you still remember the first chapters of this book, you might wonder why we would want to patent business methods. Intellectual property rights are supposed to be handed out only when necessary to produce incentives to supply some public good, incentives that otherwise would be lacking. Yet there are already plenty of incentives to come up with new business methods. (Greed and fear are the most obvious.) There is no evidence to suggest that we need a state-backed monopoly to encourage the development of new business methods. In fact, we want people to copy the businesses of others, lowering prices as a result. The process of copying business methods is called "competition" and it is the basis of a free-market economy. Yet patent law would prohibit it for twenty years. So why introduce patents? Brushing aside such minor objections with ease, the Court of Appeals for the Federal Circuit declared business methods to be patentable. Was this what Jefferson had in mind when he said "I know well the difficulty of drawing a line between the things which are worth to the public the embarrassment of an exclusive patent, and those which are not"? I doubt it.
It is commonplace for courts to look at the purpose of the law they are enforcing when seeking to understand what it means. In areas of regulation which are obviously instrumental—aimed at producing some particular result in the world—that approach is ubiquitous. In applying the antitrust laws, for example, courts have given meaning to the relatively vague words of the law by turning to economic analysis of the likely effects of different rules on different market structures.
Patent law is as instrumental a structure as one could imagine. In the United States, for example, the constitutional authorization to Congress to pass patent and copyright legislation is very explicit that these rights are to be made with a purpose in view. Congress has the power "to promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries." One might imagine that courts would try to interpret the patent and copyright laws with that purpose, and the Jefferson Warning about its constraints, firmly in mind. Yet utilitarian caution about extending monopolies is seldom to be found in the reasoning of our chief patent court.
The difference is striking. Jefferson said that the job of those who administered the patent system was to see if a patent was "worth the embarrassment to the public" before granting it. The Constitution tells Congress to make only those patent laws that "promote the progress of science and useful arts." One might imagine that this constitutional goal would guide courts in construing those same laws. Yet neither Jeffersonian ideals nor the constitutional text seem relevant to our chief patent court when interpreting statutory subject matter. Anything under the sun made by man is patentable subject matter, and there's an end to it. The case that announced the rule on business methods involved a patent on the process of keeping accounts in a "hub-and-spoke" mutual fund—which included multiplying all of the stock holdings of each fund in a family of funds by the respective current share price to get total fund value and then dividing by the number of mutual fund shares that each customer actually holds to find the balance in their accounts. As my son observed, "I couldn't do that until nearly the end of third grade!"6
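To see just how elementary the patented arithmetic is, here it is worked through with made-up numbers (the fund names, holdings, and prices below are hypothetical illustrations, not figures from the case):

```python
# The hub-and-spoke accounting described in the text, with invented numbers:
# multiply each fund's stock holdings by current share price to get total
# fund value, then compute a customer's balance from their share of the
# outstanding mutual-fund shares.

# hypothetical family of funds: each fund holds (shares, price) positions
fund_holdings = {
    "fund_a": [(100, 25.0), (50, 40.0)],
    "fund_b": [(200, 10.0)],
}

# total value of the whole fund family
total_value = sum(shares * price
                  for positions in fund_holdings.values()
                  for shares, price in positions)

total_shares = 900      # all outstanding mutual-fund shares (invented)
customer_shares = 90    # shares this particular customer holds (invented)

# per-share value times the customer's shares gives their balance
balance = total_value / total_shares * customer_shares

print(total_value)  # 6500.0
print(balance)      # 650.0
```

Third-grade multiplication and division, as the text says; the controversy was never over the difficulty of the math but over whether doing it "by means of a computer" made it patentable.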
In theory of course, if the patent is not novel or is obvious, it will still be refused. The Supreme Court recently held that the Court of Appeals for the Federal Circuit has made "nonobvious" too easy a standard to meet.7 It is unclear, however, whether that judgment will produce concrete effects on actual practices of patent grants and litigation. The Patent and Trademark Office puts pressure on examiners to issue patents, and it is very expensive to challenge those that are granted. Better, where possible, to rule out certain subject matter in the first place. Tempted in part by its flirtation with the "idea made machine" in the context of a computer, the Court of Appeals for the Federal Circuit could not bring itself to do so. Where copyright law evolved to wall off and minimize the dangers of extending protection over software, patent law actually extended the idea behind software patents to make patentable any thought process that might produce a useful result. Once breached, the walls protecting the public domain in patent law show a disturbing tendency to erode at an increasing rate.
To sum up, the conceptual possibilities presented to copyright and patent law by the idea of a Turing machine were fascinating. Should we extend copyright or patent to cover the new technology? The answer was "we will extend both!" Yet the results of the extension were complex and unexpected in ways that we will have to understand if we want to go beyond the simple but important injunctions of Jefferson and Macaulay. Who would have predicted that software copyrights could be used to create a self-perpetuating commons as well as a monopoly over operating systems, or that judges would talk knowingly of network effects in curtailing the scope of coverage? Who would have predicted that patents would be extended not only to basic algorithms implemented by a computer, but to methods of business themselves (truly a strange return to legalized business monopolies for a country whose founders viewed them as one of the greatest evils that could be borne)?
SYNTHETIC BIOLOGY
If you are a reader of Science, PLoS Biology, or Nature, you will have noticed some attractive and bizarre photographs recently. A field of bacteria that form themselves into bull's-eyes and polka dots. A dim photograph of a woman's face "taken" by bacteria that have been programmed to be sensitive to light. You may also have read about more inspiring, if less photogenic, accomplishments—for example, the group of scientists who managed to program bacteria to produce artemisinin, a scarce natural remedy for malaria derived from wormwood. Poking deeper into these stories, you would have found the phrase "synthetic biology" repeated again and again, though a precise definition would have eluded you.
What is "synthetic biology"? For some it is simply that the product or process involves biological materials not found in nature. Good old-fashioned biotechnology would qualify. One of the first biotechnology patent cases, Diamond v. Chakrabarty, involved some bacteria which Dr. Chakrabarty had engineered to eat oil slicks—not their natural foodstuff.8 The Supreme Court noted that the bacteria were not found in nature and found them to be patentable, though alive. According to the simplest definition, Dr. Chakrabarty's process would count as synthetic biology, though this example antedates the common use of the term by two decades. For other scientists, it is the completely synthetic quality of the biology involved that marks the edge of the discipline. The DNA we are familiar with, for example, has four "base pairs"—A, C, G, and T. Scientists have developed genetic alphabets that involve twelve base pairs. Not only is the result not found in nature, but the very language in which it is expressed is entirely new and artificial.
I want to focus on a third conception of synthetic biology: the idea of turning biotechnology from an artisanal process of one-off creations, developed with customized techniques, to a true engineering discipline, using processes and parts that are as standardized and as well understood as valves, screws, capacitors, or resistors. The electrical engineer told to build a circuit does not go out and invent her own switches or capacitors. She can build a circuit using off-the-shelf components whose performance is expressed using standard measurements. This is the dream of one group of synthetic biologists: that biological engineering truly become engineering, with biological black boxes that perform all of the standard functions of electrical or mechanical engineering—measuring flow, reacting to a high signal by giving out a low signal, or vice versa, starting or terminating a sequence, connecting the energy of one process to another, and so on.
Of course an engineer understands the principle behind a ratchet, or a valve, but he does not have to go through the process of thinking "as part of this design, I will have to create a thing that lets stuff flow through one way and not the other." The valve is the mechanical unit that stands for that thought, a concept reified in standardized material form which does not need to be taken apart and parsed each time it is used. By contrast, the synthetic biologists claim, much of current biotechnological experimentation operates the way a seventeenth-century artisan did. Think of the gunsmith making beautiful one-off classics for his aristocratic patrons, without standardized calibers, parts, or even standard-gauge springs or screws. The process produces the gun, but it does not use, or produce, standard parts that can also be used by the next gunsmith.
Is this portrayal of biology correct? Does it involve some hyping of the new hot field, some denigration of the older techniques? I would be shocked, shocked, to find there was hype involved in the scientific or academic enterprise. But whatever the degree to which the novelty of this process is being subtly inflated, it is hard to avoid being impressed by the projects that this group of synthetic biologists has undertaken. The MIT Registry of Standard Biological Parts, for example, has exactly the goal I have just described.
The development of well-specified, standard, and interchangeable biological parts is a critical step towards the design and construction of integrated biological systems. The MIT Registry of Standard Biological Parts supports this goal by recording and indexing biological parts that are currently being built and offering synthesis and assembly services to construct new parts, devices, and systems. . . . In the summer of 2004, the Registry contained about 100 basic parts such as operators, protein coding regions, and transcriptional terminators, and devices such as logic gates built from these basic parts. Today the number of parts has increased to about 700 available parts and 2000 defined parts. The Registry believes in the idea that a standard biological part should be well specified and able to be paired with other parts into subassemblies and whole systems. Once the parameters of these parts are determined and standardized, simulation and design of genetic systems will become easier and more reliable. The parts in the Registry are not simply segments of DNA, they are functional units.9
Using the Registry, a group of MIT scientists organizes an annual contest called iGEM, the International Genetically Engineered Machine competition. Students can draw from the standard parts that the Registry contains, and perhaps contribute their own creations back to it. What kinds of "genetically engineered machines" do they build?
A team of eight undergraduates from the University of Ljubljana in Slovenia—cheering and leaping onto MIT's Kresge Auditorium stage in green team T-shirts—won the grand prize earlier this month at the International Genetically Engineered Machine (iGEM) competition at MIT. The group—which received an engraved award in the shape of a large aluminum Lego piece—explored a way to use engineered cells to intercept the body's excessive response to infection, which can lead to a fatal condition called sepsis. The goal of the 380 students on 35 university teams from around the world was to build biological systems the way a contractor would build a house—with a toolkit of standard parts. iGEM participants spent the summer immersed in the growing field of synthetic biology, creating simple systems from interchangeable parts that operate in living cells. Biology, once thought too complicated to be engineered like a clock, computer or microwave oven, has proven to be open to manipulation at the genetic level. The new creations are engineered from snippets of DNA, the molecules that run living cells.10
Other iGEM entries have included E. coli bacteria that had been engineered to smell like wintergreen while they were growing and dividing and like bananas when they were finished, a biologically engineered detector that would change color when exposed to unhealthy levels of arsenic in drinking water, a method of programming mouse stem cells to "differentiate" into more specialized cells on command, and the mat of picture-taking bacteria I mentioned earlier.
No matter how laudable the arsenic detector or the experimental technique dealing with sepsis, or how cool the idea of banana-scented, picture-taking bacteria, this kind of enterprise will cause some of you to shudder. Professor Drew Endy, one of the pioneers in this field, believes that part of that reaction stems from simple novelty. "A lot of people who were scaring folks in 1975 now have Nobel prizes."11 But even if inchoate, the concerns that synthetic biology arouses stem from more than novelty. There is a deep-seated fear that if we see the natural world of biology as merely another system that we can routinely engineer, we will have extended our technocratic methods into a realm that was only intermittently subject to them in a way that threatens both our structure of self-understanding and our ecosystem.
To this, the synthetic biologists respond that we are already engineering nature. In their view, planned, structured, and rationalized genetic engineering poses fewer dangers than poorly understood interventions to produce some specific result in comparative ignorance of the processes we are employing to do so. If the "code" is transparent, subject to review by a peer community, and based on known parts and structures, each identified by a standard genetic "barcode," then the chance of detecting problems and solving them is higher. And while the dangers are real and not to be minimized, the potential benefits—the lives saved because the scarce antimalarial drug can now be manufactured by energetic E. coli or because a cheap test can demonstrate arsenic contamination in a village well—are not to be minimized either.
I first became aware of synthetic biology when a number of the scientists working on the Registry of Standard Biological Parts contacted me and my colleague Arti Rai. They did not use these exact words, but their question boiled down to "how does synthetic biology fare in intellectual property's categories, and how can we keep the basics of the science open for all to use?" As you can tell from this book, I find intellectual property fascinating—lamentably so perhaps. Nevertheless, I was depressed by the idea that scientists would have to spend their valuable time trying to work out how to save their discipline from being messed up by the law. Surely it would be better to have them doing, well, science?
They have cause for concern. As I mentioned at the beginning of this chapter, synthetic biology shares characteristics of both software and biotechnology. Remember the focus on reducing functions to black boxes. Synthetic biologists are looking for the biological equivalents of switches, valves, and inverters. The more abstractly these are described, the more they come to resemble simple algebraic expressions, replete with "if, then" statements and instructions that resolve to "if x, then y, if not x, then z."
If this sounds reminiscent of the discussion of the Turing machine, it should. When the broad rules for software and business methods were enunciated by the federal courts, software was already a developed industry. Even though the rules would have allowed the equivalent of patenting the alphabet, the very maturity of the field minimized the disruption such patents could cause. Of course "prior art" was not always written down. Even when it was recorded, it was sometimes badly handled by the examiners and the courts, partly because they set a very undemanding standard for "ordinary expertise" in the art. Nevertheless, there was still a lot of prior experience and it rendered some of the more basic claims incredible. That is not true in the synthetic biology field.
Consider a recent article in Nature, "A universal RNAi-based logic evaluator that operates in mammalian cells."12 The scientists describe their task in terms that should be familiar. "A molecular automaton is an engineered molecular system coupled to a (bio)molecular environment by 'flow of incoming messages and the actions of outgoing messages,' where the incoming messages are processed by an 'intermediate set of elements,' that is, a computer." The article goes on to describe some of the key elements of so-called "Boolean algebra"—"or," "and," "not," and so on—implemented in living mammalian cells.
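The Boolean algebra in question is the same one that underlies every digital circuit. A toy sketch of such an evaluator, built from standard "parts" and composed into a larger system, looks like this; the biological versions implement the same algebra with RNA interference, so everything below is an illustrative analogy on my part, not code from the paper:

```python
# Standard Boolean "parts," each a black box with well-specified behavior,
# analogous to the logic elements the Nature article implements in cells.

def NOT(x):        # an inverter: high signal in, low signal out
    return not x

def AND(x, y):     # output high only if both inputs are high
    return x and y

def OR(x, y):      # output high if either input is high
    return x or y

# Compose the parts into a small circuit: "if x and not y, emit a signal" —
# the kind of conditional the text paraphrases as "if x, then y, if not x, then z."
def circuit(x, y):
    return AND(x, NOT(y))

# Enumerate the truth table for the composed circuit.
for x in (False, True):
    for y in (False, True):
        print(x, y, circuit(x, y))
```

The legal worry the chapter goes on to raise is that a patent drafted at this level of abstraction covers not one circuit but the algebra itself.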
These inscriptions of Boolean algebra in cells and DNA sequences can be patented. The U.S. Department of Health and Human Services, for example, owns patent number 6,774,222:
This invention relates to novel molecular constructs that act as various logic elements, i.e., gates and flip-flops. . . . The basic functional unit of the construct comprises a nucleic acid having at least two protein binding sites that cannot be simultaneously occupied by their cognate binding protein. This basic unit can be assembled in any number of formats providing molecular constructs that act like traditional digital logic elements (flip-flops, gates, inverters, etc.).
My colleagues Arti Rai and Sapna Kumar have performed a patent search and found many more patents of similar breadth.13
What is the concern? After all, this is cutting-edge science. These seem like novel, nonobvious inventions with considerable utility. The concern is that the change in the rules over patentable subject matter, coupled with the Patent and Trademark Office's handling of both software and biotechnology, will come together so that the patent is not over some particular biological circuit, but, rather, over Boolean algebra itself as implemented by any biotechnological means. It would be as if, right at the beginning of the computer age, we had issued patents over formal logic in software—not over a particular computer design, but over the idea of a computer or a binary circuit itself.
"By means of a computer" was the magic phrase that caused the walls around the public domain of algorithms and ideas to crumble. Will "by means of a biological circuit" do the same? And—to repeat the key point—unlike computer science, biotechnology is developing after the hypertrophy of our intellectual property system. We do not have the immune system provided by the established practices and norms, the "prior art," even the community expectations that protected software from the worst effects of patents over the building blocks of science.
Following the example of software, the founders of the MIT Registry of Standard Biological Parts had the idea of protecting their discipline from overly expansive intellectual property claims by turning those rights against themselves. Free and open source software developers have created a "commons" using the copyright over the code to impose a license on their software, one that requires subsequent developers to keep the source open and to give improvements back to the software commons—a virtuous cycle. Could the Registry of Standard Biological Parts do the same thing? The software commons rests on a license. But, as I pointed out in the last section, the license depends on an underlying property right. It is because I have automatic copyright over my code that I can tell you "use it according to these terms or you will be violating my copyright." Is there a copyright over the products of synthetic biology? To create one we would have to take the extension of copyright that was required to reach software and stretch it even further. Bill Gates might argue for intellectual property rights over software using the logic of his article in Dr. Dobb's Journal. Will the argument for copyrights over synthetic biological coding be "I need the property right so I can create a commons"?
In practice, I think the answer is, and should be, no. Of course, one could think of this as just another type of coding, making expressive choices in a code of A's, C's, G's, and T's, just as a programmer does in Java or C++. Yet, software was already a stretch for copyright law. Synthetic biology strikes me as a subject matter that the courts, Congress, and the Copyright Office are unlikely to want to cram into copyright's already distorted outlines—particularly given the obvious availability of patent rights. As a matter of conceptual intuition, I think they will see biological subject matter as harder to fit into the categories of original expressive writing. On one level, yes, it is all information, but, on another level, the idea of programming with gene sequences will probably raise hackles that the idea of coding inside a programming language never would. As a normative matter, I think it would be a poor choice to apply copyright to the products of synthetic biology. Attempting to produce a particular open commons, one might enable the kind of hundred-year monopolies over functional objects that the critics of software copyright initially feared.
If one wishes to keep the basic ideas and techniques of synthetic biology open for subsequent innovators, there are alternatives to the idea of a synthetic biology open source license. The Registry of Standard Biological Parts or the BioBricks Foundation can simply put all their work into the public domain immediately. (This, indeed, is what they are currently doing.) Such a scheme lacks one key feature of open source software: the right to force subsequent innovators to release their code back into the commons. Yet it would make subsequent patents on the material impossible, because the material would already have been published.
Regardless of the decisions made about the future of synthetic biology, I think its story—coupled to that of software and biotechnology more generally—presents us with an important lesson. I started the chapter with the metaphor of Procrustes's bed. But in the case of software and biotechnology, both the bed—the categories of copyright and patent—and its inhabitants—the new technologies—were stretched. Cracks formed in the boundaries that were supposed to prevent copyright from being applied to functional articles, and to prevent patents from extending to cover ideas, algorithms, and business methods.
Until this point, though the science would have been strange to Jefferson or his contemporaries, the underlying issue would have been familiar. The free-trade, Scottish Enlightenment thinkers of the eighteenth and nineteenth centuries would have scoffed at the idea that business methods or algorithms could be patented, let alone that one could patent the "or," "if-then," and "not" functions of Boolean algebra as implemented by a biological mechanism. The response, presumably, is to fine-tune our patent standards—to patent the mousetrap and the corkscrew, not the notion of catching mice or opening bottles by mechanical means. Still less should we allow the patenting of algebra. These are fine points. Later scholarship has added formulae, data, and historical analysis to back up Jefferson's concerns, while never surpassing his prose. As I said at the beginning of the book, if we were to print out the Jefferson Warning and slip it into the shirt pocket of every legislator and regulator, our policy would be remarkably improved.
But it is here that the story takes a new turn, something that neither Jefferson nor the philosophers of the Scottish Enlightenment had thought of, something that goes beyond their cautions not to confuse intellectual property with physical property, to keep its boundaries, scope, and term as small as possible while still encouraging the desired innovation.
Think of the reaction of the synthetic biologists at MIT. They feared that the basic building blocks of their new discipline could be locked up, slowing the progress of science and research by inserting intellectual property rights at the wrong point in the research cycle. To solve the problem they were led seriously to consider claiming copyright over the products of synthetic biology—to fight overly broad patent rights with a privately constructed copyright commons, to ride the process of legal expansion and turn it to their own ends. As I pointed out earlier, I think the tactic would not fare well in this particular case. But it is an example of a new move in the debate over intellectual property, a new tactic: the attempt to create a privately constructed commons where the public domain created by the state does not give you the freedom that you believe creativity needs in order to thrive. It is to that tactic, and the distributed creativity that it enables, that I now turn.
If you go to the familiar Google search page and click the intimidating link marked "advanced search," you come to a page that gives you more fine-grained control over the framing of your query. Nestled among the choices that allow you to pick your desired language, or exclude raunchy content, is an option that says "usage rights." Click "free to use or share" and then search for "physics textbook" and you can download a 1,200-page physics textbook, copy it, or even print it out and hand it to your students. Search for "Down and Out in the Magic Kingdom" and you will find Cory Doctorow's fabulous science fiction novel, online, in full, for free. His other novels are there too—with the willing connivance of his commercial publisher. Search for "David Byrne, My Fair Lady" and you will be able to download Byrne's song and make copies for your friends. You'll find songs from Gilberto Gil and the Beastie Boys on the same page. No need to pay iTunes or worry about breaking the law.
Go to the "advanced" page on Flickr, the popular photo sharing site, and you will find a similar choice marked "Creative Commons License." Check that box and then search for "Duke Chapel" and you will get a selection of beautiful photos of the lovely piece of faux Gothic architecture that sits about three hundred yards from the office where I am writing these words. You can copy those photos, and 66 million others on different subjects, share them with your friends, print them for your wall, and, in some cases, even use them commercially. The same basic tools can be found on a range of specialized search engines with names like OWL Music Search, BlipTV, SpinExpress, and OERCommons. Searching those sites, or just sticking with the advanced options on Google or Yahoo, will get you courses in music theory, moral philosophy, and C++ programming from famous universities; a full-length movie called Teach by Oscar-winning director Davis Guggenheim; and free architectural drawings that can be used to build low-cost housing. At the Wellcome Library, you will find two thousand years of medical images that can be shared freely. Searching for "skeleton" is particularly fun. You can even go to your favorite search engine, type in the title of this book, find a site that will allow you to download it, and send the PDF to a hundred friends, warmly anticipating their rapturous enjoyment. (Better ask them first.)
All this copying and sharing and printing sounds illegal, but it is not (at least if you went through the steps I described). And the things you can do with this content do not stop with simply reproducing it, printing it on paper, or sending it by e-mail. Much of it can be changed, customized, remixed—you could rewrite the module of the class and insert your own illustrations, animate the graphs showing calculus in action, morph the photo into something new. If you search for a musician with the unpromising name "Brad Sucks," you will find a Web site bearing the modest subtitle "A one man band with no fans." Brad, it turns out, does not suck and has many fans. What makes him particularly interesting is that he allows those fans, or anyone else for that matter, to remix his music and post their creations online. I am particularly fond of the Matterovermind remix of "Making Me Nervous," but it may not be to your taste. Go to a site called ccMixter and you will find that musicians, famous and obscure, are inviting you to sample and remix their music. Or search Google for Colin Mutchler and listen to a haunting song called "My Life Changed." Mr. Mutchler and a violinist called Cora Beth Bridges whom he had never met created that song together. He posted a song called "My Life" online, giving anyone the freedom to add to it, and she did—"My Life." Changed.
On December 15, 2002, in San Francisco, a charitable organization called Creative Commons was launched. (Full disclosure: I have been a proud board member of Creative Commons since its creation.) Creative Commons was the brainchild of Larry Lessig, Hal Abelson, and Eric Eldred. All the works I have just described—and this book itself—are under Creative Commons licenses. The authors and creators of those works have chosen to share them with the world, with you, under generous terms, while reserving certain rights for themselves. They may have allowed you to copy a work, but not to alter it—not to make derivative works. Or they may have allowed you to use it as you wish, so long as you do so noncommercially. Or they may have given you complete freedom, provided only that you attribute them as the owner of the work. There are a few simple choices and a limited menu of permutations.
What makes these licenses unusual is that they can be read by two groups that normal licenses exclude—human beings (rather than just lawyers) and computers. The textbooks, photos, films, and songs have a tasteful little emblem on them marked with a "cc" which, if you click on it, links to a "Commons Deed," a simple one-page explanation of the freedoms you have. There are even icons—a dollar with a slash through it, for example—that make things even clearer. Better still, the reason the search engines could find this material is that the licenses also "tell" search engines exactly what freedoms have been given. Simple "metadata" (a fancy word for tags that computers can read) mark the material with its particular level of freedoms. This is not digital rights management. The license will not try to control your computer, install itself on your hard drive, or break your TV. It is just an expression of the terms under which the author has chosen to release the work. That means that if you search Google or Flickr for "works I am free to share, even commercially," you know you can go into business selling those textbooks, or printing those photos on mugs and T-shirts, so long as you give the author attribution. If you search for "show me works I can build on," you know you are allowed to make what copyright lawyers call "derivative works."
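The mechanism is simple enough to sketch. The Python fragment below is a hypothetical illustration of how machine-readable license tags let a search engine filter results by usage rights; the permission categories loosely mirror the Creative Commons vocabulary, but the works, the data structure, and the function names are invented for this example.

```python
# Hypothetical sketch: machine-readable license tags as a search engine
# might index them. License codes echo Creative Commons terms; the
# works and the index itself are invented for illustration.
LICENSES = {
    "cc-by":    {"share": True, "commercial": True,  "derivatives": True},
    "cc-by-nc": {"share": True, "commercial": False, "derivatives": True},
    "all-rights-reserved":
                {"share": False, "commercial": False, "derivatives": False},
}

WORKS = [
    {"title": "Physics Textbook", "license": "cc-by"},
    {"title": "Remixable Song",   "license": "cc-by-nc"},
    {"title": "Locked Novel",     "license": "all-rights-reserved"},
]

def free_to(permission, works=WORKS):
    """Return the titles whose license metadata grants the permission."""
    return [w["title"] for w in works if LICENSES[w["license"]][permission]]
```

A query like "works I am free to use, even commercially" then reduces to `free_to("commercial")`, which returns only the works whose tags grant that freedom. The license does nothing to your machine; it is pure declaration, which is exactly the contrast with digital rights management drawn above.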
The idea behind Creative Commons was simple. As I pointed out in the first chapter, copyright adheres automatically on "fixation." As soon as you lift the pen from the paper, click the shutter, or save the file, the work is copyrighted. No formalities. No need even to use the little symbol ©. Once copyrighted, the work is protected by the full might of the legal system. And the legal system's default setting is that "all rights are reserved" to the author, which means effectively that anyone but the author is forbidden to copy, adapt, or publicly perform the work. This might have been a fine rule for a world in which there were high barriers to publication. The material that was not published was theoretically under an "all rights reserved" regime, but who cared? It was practically inaccessible anyway. After the development of the World Wide Web, all that had changed. Suddenly people and institutions, millions upon millions of them, were putting content online—blogs, photo sites, videologs, podcasts, course materials. It was all just up there.
But what could you do with it? You could read it, or look at it, or play it presumably—otherwise why had the author put it up? But could you copy it? Put it on your own site? Include it in a manual used by the whole school district? E-mail it to someone? Translate it into your own language? Quote beyond the boundaries of fair use? Adapt it for your own purposes? Take the song and use it for your video? Of course, if you really wanted the work a lot, you could try to contact the author—not always easy. And one by one, we could all contact each other and ask for particular types of permissions for use. If the use was large enough or widespread enough, perhaps we would even think that an individual contract was necessary. Lawyers could be hired and terms hashed out.
All this would be fine if the author wished to retain all the rights that copyright gives and grant them only individually, for pay, with lawyers in the room. But what about the authors, the millions upon millions of writers, and photographers and musicians, and filmmakers and bloggers and scholars, who very much want to share their work? The Cora Beth Bridges of the world are never going to write individual letters to the Colin Mutchlers of the world asking for permission to make a derivative work out of "My Life." The person who translated my articles into Spanish or Mandarin, or the people who repost them on their Web sites, or include them in their anthologies might have asked permission if I had not granted it in advance. I doubt though that I would have been contacted by the very talented person who took images from a comic book about fair use that I co-wrote and mashed them up with words from a book by Larry Lessig, and some really nice music from someone none of us had ever met. Without some easy way to give permission in advance, and to do so in a way that human beings and computers, as well as lawyers, can understand, those collaborations will never happen, though all the parties would be delighted if they did. These are losses from "failed sharing"—every bit as real as losses from unauthorized copying, but much less in the public eye.
Creative Commons was conceived as a private "hack" to produce a more fine-tuned copyright structure, to replace "all rights reserved" with "some rights reserved" for those who wished to do so. It tried to do for culture what the General Public License had done for software. It made use of the same technologies that had created the issue: the technologies that made fixation of expressive content and its distribution to the world something that people, as well as large concentrations of capital, could do. As a result, it was able to attract a surprising range of support—Jack Valenti of the Motion Picture Association of America and Hilary Rosen of the Recording Industry Association of America, as well as John Perry Barlow of the Grateful Dead, whose attitude toward intellectual property was distinctly less favorable. Why could they all agree? These licenses were not a choice forced on anyone. The author was choosing what to share and under what terms. But that sharing created something different, something new. It was more than a series of isolated actions. The result was the creation of a global "commons" of material that was open to all, provided they adhered to the terms of the licenses. Suddenly it was possible to think of creating a work entirely out of Creative Commons-licensed content—text, photos, movies, music. Your coursebook on music theory, or your documentary on the New York skyline, could combine your own original material with high-quality text, illustrations, photos, video, and music created by strangers. One could imagine entire fields—of open educational content or of open music—in which creators could work without keeping one eye nervously on legal threats or permissions.
From one perspective, Creative Commons looks like a simple device for enabling exercise of authorial control, remarkable only for the extremely large number of authors making that choice and the simplicity with which they can do so. From another, it can be seen as re-creating, by private choice and automated licenses, the world of creativity before law had permeated to the finest, most atomic level of science and culture—the world of folk music or 1950s jazz, of jokes and slang and recipes, of Ray Charles's "rewording" of gospel songs, or of Isaac Newton describing himself as "standing on the shoulders of giants" (and not having to pay them royalties). Remember, that is not a world without intellectual property. The cookbook might be copyrighted even if the recipe was not. Folk music makes it to the popular scene and is sold as a copyrighted product. The jazz musician "freezes" a particular version of the improvisation on a communally shared set of musical motifs, records it, and sometimes even claims ownership of it. Newton himself was famously touchy about precedence and attribution, even if not about legal ownership of his ideas. But it is a world in which creativity and innovation proceed on the basis of an extremely large "commons" of material into which it was never imagined that property rights could permeate.
For many of us, Creative Commons was conceived of as a second-best solution created by private agreement because the best solution could not be obtained through public law. The best solution would be a return of the formality requirement—a requirement that one at least write the words "James Boyle copyright 2008," for example, in order to get more than 100 years of legal protection backed by "strict liability" and federal criminal law. Those who did not wish to have the legal monopoly could omit the phrase and the work would pass into the public domain, with a period of time during which the author could claim copyright retrospectively if the phrase was omitted by accident. The default position would become freedom and the dead weight losses caused by giving legal monopolies to those who had not asked for them, and did not want them, would disappear. To return to the words of Justice Brandeis that I quoted at the beginning of the book:
The general rule of law is, that the noblest of human productions—knowledge, truths ascertained, conceptions, and ideas—become, after voluntary communication to others, free as the air to common use. Upon these incorporeal productions the attribute of property is continued after such communication only in certain classes of cases where public policy has seemed to demand it.
Brandeis echoes the Jeffersonian preference for a norm of freedom, with narrowly constrained exceptions only when necessary. That preference means that the commons of which I spoke is a relatively large one—property rights are the exception, not the norm. Of course, many of those who use Creative Commons licenses might disagree with that policy preference and with every idea in this book. They may worship the DMCA or just want a way to get their song or their article out there while retaining some measure of control. That does not matter. The licenses are agnostic. Like a land trust which has a local pro-growth industrialist and a local environmentalist on its board, they permit us to come to a restricted agreement on goals ("make sure this space is available to the public") even when underlying ideologies differ. They do this using those most conservative of tools—property rights and licenses. And yet, if our vision of property is "sole and despotic dominion," these licenses have created something very different—a commons has been made out of private and exclusive rights.
My point here is that Creative Commons licenses or the tools of free and open source software—to which I will turn in a moment—represent something more than merely a second-best solution to a poorly chosen rule. They represent a visible example of a type of creativity, of innovation, which has been around for a very long time, but which has reached new salience on the Internet—distributed creativity based around a shared commons of material.
FREE AND OPEN SOURCE SOFTWARE
In 2007, Clay Shirky, an incisive commentator on networked culture, gave a speech which anyone but a Net aficionado might have found simultaneously romantic and impenetrable. He started by telling the story of a Shinto shrine that has been painstakingly rebuilt to exactly the same plan many times over its 1,300-year life—and which was denied certification as a historic building as a result. Shirky's point? What was remarkable was not the building. It was a community that would continue to build and rebuild the thing for more than a millennium.
From there, Shirky shifted to a discussion of his attempt to get AT&T to adopt the high-level programming language Perl—which is released as free and open source software under the General Public License. From its initial creation by Larry Wall in 1987, Perl has been adapted, modified, and developed by an extraordinary range of talented programmers, becoming more powerful and flexible in the process. As Shirky recounts the story, when the AT&T representatives asked "where do you get your support?" Shirky responded, "'we get our support from a community'—which to them sounded a bit like 'we get our Thursdays from a banana.'" Shirky concluded the speech thus:
We have always loved one another. We're human. It's something we're good at. But up until recently, the radius and half-life of that affection has been quite limited. With love alone, you can plan a birthday party. Add coordinating tools and you can write an operating system. In the past, we would do little things for love, but big things required money. Now we can do big things for love.1
There are a few people out there for whom "operating systems" and "love" could plausibly coexist in a sentence not constructed by an infinite number of monkeys. For most though, the question is, what could he possibly have meant?
The arguments in this book so far have taken as a given the incentives and collective action problems to which intellectual property is a response. Think of Chapter 1 and the economic explanation of "public goods." The fact that it is expensive to do the research to find the right drug, but cheap to manufacture it once it is identified provides a reason to create a legal right of exclusion. In those realms where the innovation would not have happened anyway, the legal right of exclusion gives a power to price above cost, which in turn gives incentives to creators and distributors. So goes the theory. I have discussed the extent to which the logic of enclosure works for the commons of the mind as well as it did for the arable commons, taking into account the effects of an information society and a global Internet. What I have not done is asked whether a global network actually transforms some of our assumptions about how creation happens in a way that reshapes the debate about the need for incentives, at least in certain areas. This, however, is exactly the question that needs to be asked.
For anyone interested in the way that networks can enable new collaborative methods of production, the free software movement, and the broader but less political movement that goes under the name of open source software, provide interesting case studies.2 Open source software is released under a series of licenses, the most important being the General Public License (GPL). The GPL specifies that anyone may copy the software, provided the license remains attached and the source code for the software always remains available.3 Users may add to or modify the code, may build on it and incorporate it into their own work, but if they do so, then the new program created is also covered by the GPL. Some people refer to this as the "viral" nature of the license; others find the term offensive.4 The point, however, is that the open quality of the creative enterprise spreads. It is not simply a donation of a program or a work to the public domain, but a continual accretion in which all gain the benefits of the program on pain of agreeing to give their additions and innovations back to the communal project.
For the whole structure to work without large-scale centralized coordination, the creation process has to be modular, with units of different sizes and complexities, each requiring slightly different expertise, all of which can be added together to make a grand whole. I can work on the sendmail program, you on the search algorithms. More likely, lots of people try, their efforts are judged by the community, and the best ones are adopted. Under these conditions, this curious mix of Kropotkin and Adam Smith, Richard Dawkins and Richard Stallman, we get distributed production without having to rely on the proprietary exclusion model. The whole enterprise will be much, much, much greater than the sum of the parts.
What's more, and this is a truly fascinating twist, when the production process does need more centralized coordination, some governance that guides how the sticky modular bits are put together, it is at least theoretically possible that we can come up with the control system in exactly the same way. In this sense, distributed production is potentially recursive. Governance processes, too, can be assembled through distributed methods on a global network, by people with widely varying motivations, skills, and reserve prices.5
The free and open source software movements have produced software that rivals or, some claim, exceeds the capabilities of conventional proprietary, binary-only software.6 Its adoption on the "enterprise level" is impressive, as is the number and enthusiasm of the various technical testaments to its strengths. You have almost certainly used open source software or been its beneficiary. Your favorite Web site or search engine may run on it. If your browser is Firefox, you use it every day. It powers surprising things around you—your ATM or your TiVo. The plane you are flying in may be running it. It just works.
Governments have taken notice. The United Kingdom, for example, concluded last year that open source software "will be considered alongside proprietary software and contracts will be awarded on a value-for-money basis." The Office of Government Commerce said open source software is "a viable desktop alternative for the majority of government users" and "can generate significant savings. . . . These trials have proved that open source software is now a real contender alongside proprietary solutions. If commercial companies and other governments are taking it seriously, then so must we."7 Sweden found open source software to be in many cases "equivalent to—or better than—commercial products" and concluded that software procurement "shall evaluate open software as well as commercial solutions, to provide better competition in the market."8
What is remarkable is not merely that the software works technically, but that it is an example of widespread, continued, high-quality innovation. The really remarkable thing is that it works socially, as a continuing system, sustained by a network consisting both of volunteers and of individuals employed by companies such as IBM and Google whose software "output" is nevertheless released into the commons.
Here, it seems, we have a classic public good: code that can be copied freely and sold or redistributed without paying the creator or creators. This sounds like a tragedy of the commons of the kind that I described in the first three chapters of the book. Obviously, with a nonrival, nonexcludable good like software, this method of production cannot be sustained; there are inadequate incentives to ensure continued production. E pur si muove, as Galileo is apocryphally supposed to have said in the face of Cardinal Bellarmine's certainties: "And yet it moves."9 Or, as Clay Shirky put it, "we get our support from a community."
For a fair amount of time, most economists looked at open source software and threw up their hands. From their point of view, "we get our support from a community" did indeed sound like "we get our Thursdays from a banana." There is an old economics joke about the impossibility of finding a twenty-dollar bill lying on a sidewalk. In an efficient market, the money would already have been picked up. (Do not wait for a punch line.) When economists looked at open source software they saw not a single twenty-dollar bill lying implausibly on the sidewalk, but whole bushels of them. Why would anyone work on a project the fruits of which could be appropriated by anyone? Since copyright adheres on fixation—since the computer programmer already has the legal power to exclude others—why would he or she choose to take the extra step of adopting a license that undermined that exclusion? Why would anyone choose to allow others to use and modify the results of their hard work? Why would they care whether the newcomers, in turn, released their contributions back into the commons?
The puzzles went beyond the motivations of the people engaging in this particular form of "distributed creativity." How could these implausible contributions be organized? How should we understand this strange form of organization? It is not a company or a government bureaucracy. What could it be? To Richard Epstein, the answer was obvious and pointed to a reason the experiment must inevitably end in failure:
The open source movement shares many features with a workers' commune, and is likely to fail for the same reason: it cannot scale up to meet its own successes. To see the long-term difficulty, imagine a commune entirely owned by its original workers who share pro rata in its increases in value. The system might work well in the early days when the workforce remains fixed. But what happens when a given worker wants to quit? Does that worker receive in cash or kind his share of the gain in value during the period of his employment? If not, then the run-up in value during his period of employment will be gobbled up by his successor—a recipe for immense resentment. Yet that danger can be ducked only by creating a capital structure that gives present employees separable interests in either debt or equity in exchange for their contributions to the company. But once that is done, then the worker commune is converted into a traditional company whose shareholders and creditors contain a large fraction of its present and former employees. The bottom line is that idealistic communes cannot last for the long haul.10
There are a number of ideas here. First, "idealistic communes cannot last for the long haul." The skepticism about the staying power of idealism sounds plausible today, though there are some relatively prominent counterexamples. The Catholic Church is also a purportedly idealistic institution. It is based on canonical texts that are subject to even more heated arguments about textual interpretation than those which surround the General Public License. It seems to be surviving the long haul quite well.
The second reason for doomsaying is provided by the word "commune." The problems Epstein describes are real where tangible property and excludable assets are involved. But is the free and open source community a "commune," holding tangible property in common and excluding the rest of us? Must it worry about how to split up the proceeds if someone leaves because of bad karma? Or is it a community creating and offering to the world the ability to use, for free, nonrival goods that all of us can have, use, and reinterpret as we wish? In that kind of commune, each of us could take all the property the community had created with us when we left and the commune would still be none the poorer. Jefferson was not thinking of software when he talked of the person who lights his taper from mine but does not darken me, but the idea is the same one. Copying software is not like fighting over who owns the scented candles or the VW bus. Does the person who wrote the "kernel" of the operating system resent the person who, much later, writes the code to manage Internet Protocol addresses on a wireless network? Why should he? Now the program does more cool stuff. Both of them can use it. What's to resent?
How about idealism? There is indeed a broad debate on the reasons that the system works: Are the motivations those of the gift economy? Is it, as Shirky says, simply the flowering of an innate love that human beings have always had for each other and for sharing, now given new strength by the geographic reach and cooperative techniques the Internet provides? "With love alone, you can plan a birthday party. Add coordinating tools and you can write an operating system." Is this actually a form of potlatch, in which one gains prestige by the extravagance of the resources one "wastes"? Is open source an implicit résumé-builder that pays off in other ways? Is it driven by the species-being, the innate human love of creation that continually drives us to create new things even when homo economicus would be at home in bed, mumbling about public goods problems?11
Yochai Benkler and I would argue that these questions are fun to debate but ultimately irrelevant.12 Assume a random distribution of incentive structures in different people, a global network—transmission, information sharing, and copying costs that approach zero—and a modular creation process. With these assumptions, it just does not matter why they do it. In lots of cases, they will do it. One person works for love of the species, another in the hope of a better job, a third for the joy of solving puzzles, and a fourth because he has to solve a particular problem anyway for his own job and loses nothing by making his hack available for all. Each person has their own reserve price, the point at which they say, "Now I will turn off Survivor and go and create something." But on a global network, there are a lot of people, and with numbers that big and information overhead that small, even relatively hard projects will attract motivated and skilled people whose particular reserve price has been crossed.
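The arithmetic behind this argument can be sketched in a few lines. This is a toy illustration of my own, not anything from Benkler's analysis: assume each person on the network independently has some small probability of finding a given task worth doing for their own reasons, and watch what the size of the network does to the odds that someone, somewhere, takes it on.

```python
def chance_someone_contributes(n, p):
    """Probability that at least one of n people crosses their
    'reserve price', if each does so independently with probability p.
    (The independence assumption is the toy model's, not the book's.)"""
    return 1 - (1 - p) ** n

# Even if only 1 person in 100,000 would bother with a hard project,
# a network of 10 million makes a contributor a near-certainty.
print(chance_someone_contributes(10_000_000, 1e-5))  # → 1.0 to float precision
print(chance_someone_contributes(100, 1e-5))         # → about 0.001
```

The point is the shape of the curve, not the made-up numbers: the chance that nobody contributes shrinks exponentially with the size of the network, which is why motivation can be rare and idiosyncratic and the projects still get done.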
More conventionally, many people write free software because they are paid to do so. Amazingly, IBM now earns more from what it calls "Linux-related revenues" than it does from traditional patent licensing, and IBM is the largest patent holder in the world.13 It has decided that the availability of an open platform, to which many firms and individuals contribute, will actually allow it to sell more of its services, and, for that matter, its hardware. A large group of other companies seem to agree. They like the idea of basing their services, hardware, and added value on a widely adopted "commons." This does not seem like a community in decline.
People used to say that collaborative creation could never produce a quality product. That has been shown to be false. So now they say that collaborative creation cannot be sustained because the governance mechanisms will not survive the success of the project. Professor Epstein conjures up a "central committee" from which insiders will be unable to cash out—a nice mixture of communist and capitalist metaphors. All governance systems—including democracies and corporate boards—have problems. But so far as we can tell, those who are influential in the free software and open source governance communities (there is, alas, no "central committee") feel that they are doing very well indeed. In the last resort, when they disagree with decisions that are taken, there is always the possibility of "forking the code," introducing a change to the software that not everyone agrees with, and then letting free choice and market selection converge on the preferred iteration. The free software ecosystem also exhibits diversity. Systems based on GNU-Linux, for example, have distinct "flavors" with names like Ubuntu, Debian, and Slackware, each with passionate adherents and each optimized for a particular concern—beauty, ease of use, technical manipulability. So far, the tradition of "rough consensus and running code" seems to be proving itself empirically as a robust governance system.
Why on earth should we care? People have come up with a surprising way to create software. So what? There are at least three reasons we might care. First, it teaches us something about the limitations of conventional economics and the counterintuitive business methods that thrive on networks. Second, it might offer a new tool in our attempt to solve a variety of social problems. Third, and most speculative, it hints at the way that a global communications network can sometimes help move the line between work and play, professional and amateur, individual and community creation, rote production and compensated "hobby."
We should pay attention to open source software because it shows us something about business methods in the digital world—indeed in the entire world of "information-based" products, which is coming to include biotechnology. The scale of your network matters. The larger the number of people who use your operating system, make programs for your type of computer, create new levels for your game, or use your device, the better off you are. A single fax machine is a paperweight. Two make up a communications link. Ten million and you have a ubiquitous communications network into which your "paperweight" is now a hugely valuable doorway.
This is the strange characteristic of networked goods. The actions of strangers dramatically increase or decrease the usefulness of your good. At each stage the decision of someone else to buy a fax machine increases the value of mine. If I am eating an apple, I am indifferent about whether you are too. But if I have a fax machine then my welfare is actually improved by the decisions of strangers to buy one. The same process works in reverse. Buy a word processing program that becomes unpopular, get "locked in" to using it, and find yourself unable to exchange your work easily with others. Networks matter and increasing the size of the networks continues to add benefits to the individual members.
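The fax-machine intuition can be made concrete with a familiar heuristic, often attributed to Robert Metcalfe, that values a network by the number of possible pairwise links among its members. The heuristic is an assumption of this illustration, and a contested one, not a claim the text itself makes; but it captures why each stranger's purchase improves my welfare.

```python
def pairwise_links(n):
    """Number of distinct communication links among n fax machines:
    each of n machines can reach n-1 others, counting each link once."""
    return n * (n - 1) // 2

print(pairwise_links(1))   # → 0: a single machine is a paperweight
print(pairwise_links(2))   # → 1: two machines make a communications link
print(pairwise_links(10))  # → 45: value grows much faster than machine count
```

Notice that going from one machine to two adds a single link, while going from nine to ten adds nine; each new member confers a benefit on every existing member, which is exactly the "welfare improved by the decisions of strangers" effect described above.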
What's true for the users of networks is doubly so for the producers of the goods that create them. From the perspective of a producer of a good that shows strong network effects such as a word processing program or an operating system, the optimal position is to be the company that owns and controls the dominant product on the market. The ownership and control is probably by means of intellectual property rights, which are, after all, the type of property rights one finds on networks. The value of that property depends on those positive and negative network effects. This is the reason Microsoft is worth so much money. The immense investment in time, familiarity, legacy documents, and training that Windows or Word users have provides a strong incentive not to change products. The fact that other users are similarly constrained makes it difficult to manage any change. Even if I change word processor formats and go through the trouble to convert all my documents, I still need to exchange files with you, who are similarly constrained. From a monopolist's point of view, the handcuffs of network effects are indeed golden, though opinions differ about whether or not this is a cause for antitrust action.
But if the position that yields the most revenue is that of a monopolist exercising total control, the second-best position may well be that of a company contributing to a large and widely used network based on open standards and, perhaps, open software. The companies that contribute to open source do not have the ability to exercise monopoly control, the right to extract every last cent of value from it. But they do have a different advantage; they get the benefit of all the contributions to the system without having to pay for them. The person who improves an open source program may not work for IBM or Red Hat, but those companies benefit from her addition, just as she does from theirs. The system is designed to continue growing, adding more contributions back into the commons. The users get the benefit of an ever-enlarging network, while the openness of the material diminishes the lock-in effects. Lacking the ability to extract payment for the network good itself—the operating system, say—the companies that participate typically get paid for providing tied goods and services, the value of which increases as the network does.
I write a column for the Financial Times, but I lack the fervor of the true enthusiast in the "Great Game of Markets." By themselves, counterintuitive business methods do not make my antennae tingle. But as Larry Lessig and Yochai Benkler have argued, this is something more than just another business method. They point us to the dramatic role that openness—whether in network architecture, software, or content—has had in the success of the Internet. What is going on here is actually a remarkable corrective to the simplistic notion of the tragedy of the commons, a corrective to the Internet Threat storyline and to the dynamics of the second enclosure movement. This commons creates and sustains value, and allows firms and individuals to benefit from it, without depleting the value already created. To appropriate a phrase from Carol Rose, open source teaches us about the comedy of the commons, a way of arranging markets and production that we, with our experience rooted in physical property and its typical characteristics, at first find counterintuitive and bizarre. Which brings us to the next question for open source. Can we use its techniques to solve problems beyond the world of software production?