GRIMMELMANN_10_04_29_APPROVED_PAGINATED                                          4/29/2010 11:26 PM




            THE INTERNET IS A SEMICOMMONS
                                  James Grimmelmann*

                                     I. INTRODUCTION
   As my contribution to this Symposium on David Post’s In Search of
Jefferson’s Moose1 and Jonathan Zittrain’s The Future of the Internet,2 I’d
like to take up a question with which both books are obsessed: what makes
the Internet work? Post’s answer is that the Internet is uniquely
Jeffersonian; it embodies a civic ideal of bottom-up democracy3 and an
intellectual ideal of generous curiosity.4 Zittrain’s answer is that the
Internet is uniquely generative; it enables its users to experiment with new
uses and then share their innovations with each other.5 Both books tell a
story about how the combination of individual freedom and a cooperative
ethos has driven the Internet’s astonishing growth.
   In that spirit, I’d like to suggest a third reason that the Internet works: it
gets the property boundaries right. Specifically, I see the Internet as a
particularly striking example of what property theorist Henry Smith has
named a semicommons.6 It mixes private property in individual computers
and network links with a commons in the communications that flow


* Associate Professor, New York Law School. My thanks for their comments to Jack
Balkin, Shyam Balganesh, Aislinn Black, Anne Chen, Matt Haughey, Amy Kapczynski,
David Krinsky, Jonathon Penney, Chris Riley, Henry Smith, Jessamyn West, and Steven
Wu. I presented earlier versions of this essay at the Commons Theory Workshop for Young
Scholars (Max Planck Institute for the Study of Collective Goods), the 2007 IP Scholars
conference, the 2007 Telecommunications Policy Research Conference, and the December
2009 Symposium at Fordham Law School on David Post’s and Jonathan Zittrain’s books.
This essay may be freely reused under the terms of the Creative Commons Attribution 3.0
United States license, http://creativecommons.org/licenses/by/3.0/us/.
     1. DAVID G. POST, IN SEARCH OF JEFFERSON’S MOOSE: NOTES ON THE STATE OF
CYBERSPACE (2009).
     2. JONATHAN ZITTRAIN, THE FUTURE OF THE INTERNET—AND HOW TO STOP IT (2008).
     3. See POST, supra note 1, at 116 (“How do you build a democratic system that would
scale, that would get stronger as it got bigger, and bigger as it got stronger? . . . But like the
American West of 1787, cyberspace is (or at least it has been) a Jeffersonian kind of
place.”).
     4. See id. at 202 (“The perfect Jeffersonian world, then, is one that has as much
protection for speech as it can have, but only as much protection for intellectual property as
it needs. Sounds like cyberspace!”).
     5. See ZITTRAIN, supra note 2, at 71–74 (defining and analyzing generativity); id. at
149 (“This book has explained how the Internet’s generative characteristics primed it for
extraordinary success . . . .”).
     6. See generally Henry E. Smith, Semicommon Property Rights and Scattering in the
Open Fields, 29 J. LEGAL STUD. 131 (2000) (introducing semicommons concept).

                                             2799
2800                        FORDHAM LAW REVIEW                                  [Vol. 78

through the network.7 Both private and common uses are essential.
Without the private aspects, the Internet would collapse from overuse and
abuse; without the common ones, it would be pointlessly barren. But the
two together are magical; their combination makes the Internet hum.
   Semicommons theory also tells us, however, that we should expect
difficult tensions between these two very different ways of managing
resources.8 Because private control and open-to-all-comers common access
necessarily coexist on the Internet, it has had to develop distinctive
institutions to make them play nicely together. These institutions include
the technical features and community norms that play central roles in Post’s
and Zittrain’s books: everything from the layered architecture of the
Internet’s protocols9 to Wikipedia editors’ efforts to model good behavior.10
As I’ll argue, the dynamic interplay between private and common isn’t just
responsible for the Internet’s success; it also explains some enduring
tensions in Internet law, reveals the critical importance of some of the
Internet’s design decisions, and provides a fresh perspective on the themes
of freedom and collaboration that Post and Zittrain explore.
   Here’s how I’ll proceed: In Part II of this essay, I’ll set up the problem.
Part II.A will use Post’s and Zittrain’s books to describe two critical facts
about the Internet—it’s designed and used in ways that require substantial
sharing and openness, and it’s sublimely gigantic. Part II.B will explain
why this openness is problematic for traditional property theory, which sees
resources held in common as inherently wasteful. Part II.C will explain
how commons theory can make sense of commonly held resources, but
only at the price of introducing a new problem: an internal tension about
the scale at which these resources should be held. The theory of tangible
common-pool resources tells a tragic story that emphasizes the need to keep
the group of those with access to the commons small. But the theory of
peer-produced intellectual property tells a happier tale, one that emphasizes
the importance of massive collaboration—of openness to as many people as
possible.
   In Part III, I’ll resolve this tension between pressures for smallness and
pressures for bigness by showing how a semicommons can accommodate
both. Part III.A will introduce Henry Smith’s theory of the semicommons,
which he illustrates with the example of fields open to common grazing for
sheep but held in private for farming. Part III.B will explain how treating
the Internet as a semicommons elegantly transforms the small-and-private

     7. Id. at 131.
     8. See Smith, supra note 6, at 145 (“[T]he open-field system is a mixture of common
and private ownership, and the question is, why not one or the other?”); infra Part III.
     9. See POST, supra note 1, at 80–86 (discussing layering); ZITTRAIN, supra note 2, at
67–69 (“Layers facilitate polyarchies . . . .”); infra Part IV.A.
    10. See ZITTRAIN, supra note 2, at 142–43, 147–48 (discussing “netizenship” and
“personal commitments” of Wikipedia editors); infra Part IV.B. See generally ANDREW
DALBY, THE WORLD AND WIKIPEDIA: HOW WE ARE EDITING REALITY (2009) (describing the
history and norms of Wikipedia in detail).

versus large-and-common antithesis into a compelling synthesis.
Simultaneously treating network elements as private property and the
“network” as a commons captures the distinctive benefits of both resource
models. And in Part III.C I’ll briefly describe how semicommons theory is
implicit in Zittrain’s argument.
   In Part IV, I’ll demonstrate the analytical power of this way of looking at
the Internet—in particular, how it makes sense out of a wide range of
technical and social institutions commonly seen online. Part IV.A will
illustrate the importance of layering in enabling uses of the Internet at
different scales to coexist. Part IV.B will discuss how user-generated
content (UGC) sites solve semicommons governance problems. And Part
IV.C will consider the role of boundary-setting in the failure of Usenet and
the success of e-mail. Finally, in Part V, I’ll briefly argue that
semicommons theory usefully helps us focus on the interdependence
between private and common, rather than seeing them as implacable
opposites.

                   II. PROPERTY AND THE PROBLEM OF SCALE
   David Post and Jonathan Zittrain both link the Internet’s extraordinary
growth to its extraordinary openness. You don’t need to ask anyone’s
official permission to create a new community or a new application online.
Result: more freedom and more innovation, enabling the Internet to
outcompete proprietary, controlled networks.11
   But this openness, which draws on property-theoretic ideas about
sustainable commons, comes with its own theoretical puzzle. Big things
have a tendency to collapse under their own weight, and the Internet is
nothing if not big.12 The conventional wisdom in property circles is that a
commons in any finite resource becomes increasingly untenable as its scale
increases.13 The intellectual “commons” that many intellectual property
scholars celebrate escapes this trap because (and only because) information
isn’t used up when it’s shared.14 That tells us why writers and musicians
and inventors and programmers can benefit from robust sharing and a rich
public domain, but it doesn’t seem to be directly relevant to the underlying
question of why the Internet didn’t flame out spectacularly several orders of
magnitude ago, as users took advantage of its openness to use up its
available capacity.
   This part will articulate, in somewhat more detail, the nature of this
theoretical tension between openness and size on the Internet. Part II.A will

    11. See POST, supra note 1, at 103 (“Perhaps it was a coincidence that the network that
became ‘the Internet’ was the one that operated this way . . . . I doubt it, though.”);
ZITTRAIN, supra note 2, at 30 (“The bundled proprietary model . . . had been defeated by the
Internet model . . . .”).
    12. See infra Part II.A.
    13. See infra Part II.B.
    14. See infra Part II.C.

discuss the problem of scale, using Post’s musings on Jefferson as the point
of departure. Part II.B will tell what I call the “Tragic” story within
commons theory—that a commons can be a sustainable alternative to
private property or direct regulation, but only for small-scale resources.
Part II.C will tell a different story, which I call the “Comedic” one: that for
nonrival information goods, where exhaustion isn’t an issue, unrestricted
sharing can have benefits far outweighing costs.

                   A. On Being the Right Size (for an Internet)
   In Search of Jefferson’s Moose takes its title, its cover art, and its central
metaphor from the stuffed moose that Thomas Jefferson proudly displayed
in Paris in 1787.15 Jefferson was serving as the United States’ official
representative in France, and he saw himself as a cultural and intellectual
ambassador, not just a political one. The French naturalist Georges Buffon
had written that New World animals were smaller than their Old World
counterparts, owing to the defectively cold and wet American climate.16
While Jefferson’s Notes on the State of Virginia attempted to refute this
analysis with facts and figures, the moose offered a more demonstrative
proof that American species could stand tall with the best that Europe had
to offer.17
   In Post’s telling, the political subtext is hard to miss. Jefferson’s moose
was big, standing seven feet tall; it was novel, existing only on the North
American continent;18 and it was robust, an example of the rude good
health of North American wildlife.19 It was, in short, a metaphor for the
newly formed United States, another product of North America.
Contemporary political theory considered large republics inherently
unstable, and the United States was the largest republic in human history.20
Jefferson’s moose was meant to “dazzle” his visitors into what Post calls an
“‘Aha!’ moment” of belief—that creatures of its size could thrive in the
New World, and so could the equally large American republic.21
   Jefferson’s metaphor for the United States thus becomes Post’s metaphor
for the Internet. He’s looking for a way to dazzle his readers into their own


   15. POST, supra note 1, at 16–18.
   16. Id. at 63–64.
   17. See id. at 63–68 (citing THOMAS JEFFERSON, NOTES ON THE STATE OF VIRGINIA
(Frank Shuffelton ed., Penguin Books 1999) (1785)).
   18. Or so Jefferson thought when he referred to the moose as a “species not existing in
Europe.” He was wrong; Alces alces also thrives in Scandinavia and Russia. See VICTOR
VAN BALLENBERGHE, IN THE COMPANY OF MOOSE 1 (2004) (“Simply put, moose are giant
deer that live in the northern forests of Europe, Asia, and North America.”). See generally
LEE ALAN DUGATKIN, MR. JEFFERSON AND THE GIANT MOOSE: NATURAL HISTORY IN EARLY
AMERICA 81–100 (2009) (describing Jefferson’s search for a moose specimen and his not-
always-reliable research methods).
   19. POST, supra note 1, at 63–68.
   20. See id. at 110–16.
   21. Id. at 209.

“Aha!” moments about it. Post wants his readers to believe that it really is
there, that it really is something new, and that it really does work. Just as
Jefferson’s moose was meant to impress visitors with its scale, the first half
of Jefferson’s Moose is meant to impress readers with the Internet’s scale.
   This point bears emphasis. The Internet is sublimely large; in
comparison with it, all other human activity is small. It has more than a
billion users,22 who’ve created over two hundred million websites23 with
more than a trillion different URLs,24 and send over a hundred billion e-
mails a day.25 American Internet users consumed about ten exabytes of
video and text in 2008—that’s 10,000,000,000,000,000,000 bytes, give or
take a few.26 Watching all the videos uploaded to YouTube alone in a
single day would be a full-time job—for fifteen years.27 The numbers are
incomprehensibly big, and so is the Internet.28
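The back-of-the-envelope arithmetic in footnote 27 is easy to verify. The sketch below uses the upload rate reported in the YouTube fact sheet cited there and the footnote's stated forty-hour workweek; the figure of forty-eight working weeks per year is my added assumption, and the code itself is only an illustration:

```python
# Illustrative check of the scale figures in footnote 27. The upload
# rate is the cited YouTube fact sheet figure; the forty-hour week is
# the footnote's stated assumption; forty-eight working weeks per year
# (four weeks off) is an assumption added here.

UPLOAD_HOURS_PER_MINUTE = 20       # hours of video uploaded per minute
HOURS_PER_WORKWEEK = 40
WORK_WEEKS_PER_YEAR = 48

daily_upload_hours = UPLOAD_HOURS_PER_MINUTE * 60 * 24
years_full_time = daily_upload_hours / HOURS_PER_WORKWEEK / WORK_WEEKS_PER_YEAR
years_nonstop = daily_upload_hours / 24 / 365

print(f"{daily_upload_hours:,} hours of video uploaded per day")  # 28,800
print(f"{years_full_time:.0f} years as a full-time job")          # 15
print(f"{years_nonstop:.1f} years watching around the clock")     # 3.3
```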
   These statistics tell us beyond peradventure that the Internet has been
wildly successful, but they don’t by themselves tell us why. Post’s answer
is that the Internet is built in a uniquely Jeffersonian way. Technologically,
it depends on bottom-up, self-organized routing.29 Its political and social
structures, as well, are self-organized in a bottom-up fashion, with decisions
made by local groups on the basis of consensus and voluntary association.30
These features are also characteristic of Jefferson’s ideal democratic
republic, making the Internet the truest realization yet of his political
vision.31 People choose to go to the Internet for the same reasons Jefferson



    22. See Press Release, comScore, Inc., Global Internet Audience Surpasses One Billion
Visitors, According to comScore (Jan. 23, 2009), http://www.comscore.com/Press_Events/
Press_Releases/2009/1/Global_Internet_Audience_1_Billion/%28language%29/eng-US.
    23. Netcraft, December 2009 Web Server Survey, http://news.netcraft.com/archives/
2009/12/24/december_2009_web_server_survey.html (last visited Apr. 10, 2010); see also
The VeriSign Domain Report, DOMAIN NAME INDUSTRY BRIEF (VeriSign, Mountain View,
Cal.), Dec. 2009, at 2, available at http://www.verisign.com/domain-name-services/domain-
information-center/domain-name-resources/domain-name-report-dec09.pdf (over 187
million registered domains).
    24. See Posting of Jesse Alpert & Nissan Hajaj to Official Google Blog,
http://googleblog.blogspot.com/2008/07/we-knew-web-was-big.html (July 25, 2008, 10:12
PDT).
    25. CISCO SYS., INC., CISCO 2008 ANNUAL SECURITY REPORT 13 (2009), available at
http://www.cisco.com/en/US/prod/collateral/vpndevc/securityreview12-2.pdf.
    26. ROGER E. BOHN & JAMES E. SHORT, HOW MUCH INFORMATION? 2009: REPORT ON
AMERICAN CONSUMERS app. B, at 32 (2009), available at http://hmi.ucsd.edu/pdf/
HMI_2009_ConsumerReport_Dec9_2009.pdf.
    27. See YouTube, YouTube Fact Sheet, http://www.youtube.com/t/fact_sheet (last
visited Apr. 10, 2010) (reporting twenty hours of video uploaded per minute). My
calculation assumes a forty-hour workweek. If you didn’t stop to sleep or eat, you could
watch a day’s worth of YouTube videos in only three years and a few months.
    28. Cf. James Grimmelmann, Information Policy for the Library of Babel, 3 J. BUS. &
TECH. L. 29, 38–40 (2008) (comparing the Internet to Borges’s infinite Library of Babel).
    29. See POST, supra note 1, at 80–99.
    30. See id. at 126–41.
    31. See id. at 107–17.

expected them to settle the American West: to build new lives and new
communities for themselves on a firm foundation of liberty.32
   Zittrain’s theory of the Internet’s success is that it’s a generative system,
open to unfiltered contributions from anyone and everyone.33 The Internet
lets its users innovate and share their innovations with each other without
being thwarted by gatekeepers who can veto proposed changes and system
designers who can make change impossible in the first place.34 This greater
openness to unanticipated developments gives the Internet a powerful
flexibility: it can draw on the best of what all its users have come up
with.35 Like Post’s, this is a bottom-up story: these new protocols,
technologies, and communities are being assembled by individuals, rather
than being dictated from on high.
   Property theory has a word for this form of resource management:
commons. Post focuses on the self-assembly inherent in the Internet
Protocol (IP)36 and on self-governance,37 while Zittrain focuses on
technical innovation38 and norm creation,39 but these are very much the
same story. The Internet’s users are individually empowered to use the
network as they see fit. There’s no Internet Tycoon who owns the whole
thing and can kick everyone else off; there’s no Internet Czar who sets the
rules for everyone else. That makes the Internet, on this view, a nearly
ideal commons: a resource that everyone has a privilege to use and no one
has a right to control.
   So far, so good. But there’s a reason that Post calls it the “problem” of
scale.40 Size is more than just proof of success; it also creates new and
distinctive problems of its own. The biological metaphor is helpful.
Following Haldane’s classic On Being the Right Size, Post writes, “Large
organisms are not and cannot be simply small organisms blown up to larger
size.”41 A moose blown up by a factor of ten, to be seventy feet tall instead

    32. See id. at 116–17, 172–78 (discussing Jefferson’s vision of the settlement of the
American West).
    33. See ZITTRAIN, supra note 2, at 19–35; see also Jonathan L. Zittrain, The Generative
Internet, 119 HARV. L. REV. 1974 (2006).
    34. ZITTRAIN, supra note 2, at 80–90.
    35. See id. at 31 (discussing “procrastination principle” of deferring decisions by leaving
architecture open initially); cf. Carliss Y. Baldwin & Kim B. Clark, The Architecture of
Participation: Does Code Architecture Mitigate Free Riding in the Open Source
Development Model?, 52 MGMT. SCI. 1116 (2006) (discussing importance of “option values”
in open source software projects).
    36. POST, supra note 1, at 73–99.
    37. Id. at 133–41, 178–86.
    38. ZITTRAIN, supra note 2, at 80–90.
    39. Id. at 90–96, 134–35, 141–48, 223–25.
    40. See, e.g., POST, supra note 1, at 60 (titling one chapter, “Jefferson’s Moose and the
Problem of Scale I”).
    41. Id. at 61. Haldane’s essay itself links biological and political scale, concluding with
a passage on the maximum size of a democratic state (increasing with technological change)
and the impossibility of socialist governance of truly large states. J.B.S. Haldane, On Being
the Right Size, 152 HARPER’S MAG. 424, 427 (1926) (“I find it no easier to picture a

of seven, would have one thousand times the body mass but only one
hundred times the bone cross section. It would quite literally collapse under
its own weight.
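Haldane's point is the square-cube law: multiply an animal's linear dimensions by a factor k, and its mass grows with volume (k cubed) while the load-bearing cross section of its bones grows only with area (k squared). A minimal sketch of that arithmetic, assuming nothing beyond the scaling factors in the text:

```python
# The square-cube law behind Haldane's (and Post's) scaling argument.
# Scale every linear dimension by k: mass tracks volume (k**3), bone
# strength tracks cross-sectional area (k**2), so the load carried by
# each unit of bone area grows by k.

def mass_factor(k):
    return k ** 3

def bone_area_factor(k):
    return k ** 2

def load_per_unit_bone(k):
    return mass_factor(k) / bone_area_factor(k)   # = k

k = 10  # a seventy-foot moose in place of a seven-foot one
print(mass_factor(k))         # 1000: one thousand times the body mass
print(bone_area_factor(k))    # 100: one hundred times the bone cross section
print(load_per_unit_bone(k))  # 10.0: each bone bears ten times the stress
```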
   The same is true of computer networks: some architectures that work
well with one hundred users fail embarrassingly with one million—or one
billion. Post gives another back-of-the-envelope demonstration to show
that a centrally operated Internet could not possibly have a strong enough
skeleton42 to support all of the communications between its billion-plus
users.43 Thus, the decentralization and ad hoc ethos that Post and Zittrain
celebrate about the Internet are technological necessities. It took packet
switching and distributed best-efforts routing to make a global network on
the scale of the Internet feasible.
   Property theory has its own problem of scale. If Buffon thought that
nature made large New World wildlife impossible, and if political theorists
thought human nature made large republics impossible, and if network
engineers thought that physics made large decentralized networks
impossible, then property theorists have long thought that large commons
were self-defeating.44 Anything held in common would be overused or
underproduced, and the larger the relevant community, the more severe the
problem.45 As the Internet asymptotically approaches the whole of human
experience, it would seem that its usability ought to be trending toward a
limit of zero. The benefits of openness are clear, but so are the immense
costs of wasteful and self-interested overuse. Since the Internet, like
Jefferson’s moose or the aerodynamically unlikely bumblebee that Zittrain
uses as a metaphor for Wikipedia,46 unarguably is, this success requires
explanation. To find one, we will need to delve deeper into property
theory.




completely socialized British Empire or United States than an elephant turning somersaults
or a hippopotamus jumping a hedge.”). Had Haldane been exposed to the Internet, he might
have noted that it has an inordinate fondness for pictures of cats.
    42. The major networks that carry the heaviest volumes of Internet traffic are referred to
as “backbones.” See, e.g., ZITTRAIN, supra note 2, at 158. Note the use of the plural.
    43. See POST, supra note 1, at 68–79.
    44. The intuition, as Henry Smith notes, goes back to Aristotle. See Henry E. Smith,
Governing Water: The Semicommons of Fluid Property Rights, 50 ARIZ. L. REV. 445, 451
n.25 (2008) (citing ARISTOTLE, THE POLITICS AND THE CONSTITUTION OF ATHENS 33 (Stephen
Everson ed., Benjamin Jowett trans., 1996)).
    45. See infra Part II.B.
    46. ZITTRAIN, supra note 2, at 148 (“Wikipedia is the canonical bee that flies despite
scientists’ skepticism that the aerodynamics add up.” (citing YOCHAI BENKLER, THE WEALTH
OF NETWORKS: HOW SOCIAL PRODUCTION TRANSFORMS MARKETS AND FREEDOM 76–80
(2006))). In his conclusion, Post applies his own biological metaphor to Wikipedia, writing
that Wikipedia might well be “a pretty good moose, something we could bring with us . . . to
show to people of the Old World.” POST, supra note 1, at 209.

                     B. The Tragedy of the (Rival) Commons
  In order to make sense of the Internet as a species of commons, we first
need to situate the commons within property theory. Our starting point is
the standard distinction between two kinds of resources: private goods and
pure public goods.47
  Private goods such as cars, farms, and handbags have two key
characteristics. First, they’re rival: one person’s use of the good makes it
unavailable for someone else. Only one person at a time can carry a
handbag or plant the same furrow. Second, they’re excludable: it’s
possible to prevent people from using the good. We can hold tightly to
handbags and put fences around farms. These distinctions are illustrated in
the following, wholly conventional figure:

                              Rival                     Nonrival
 Excludable                   Private                   Toll
 Nonexcludable                Common                    Public
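The figure's two distinctions can be encoded as a small lookup table. The quadrant labels come from the figure; the bracketed examples for toll and public goods in the comments are my own illustrative additions, not the essay's:

```python
# The conventional 2x2 classification of goods as a lookup table. An
# illustrative encoding of the figure in the text, not part of it.

GOOD_TYPES = {
    # (rival, excludable): category
    (True, True): "private good",     # cars, farms, handbags
    (True, False): "common good",     # an open grazing pasture
    (False, True): "toll good",       # [e.g., a subscription broadcast]
    (False, False): "public good",    # [e.g., information, once shared]
}

def classify(rival, excludable):
    return GOOD_TYPES[(rival, excludable)]

print(classify(rival=True, excludable=False))   # common good
```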

   For rival goods, excludability has three salutary effects. First, it helps
prevent wasteful underuse. If you own a resource but my proposed use is
higher value than yours, it will be profitable for both of us for you to sell it
to me.48 Second, it prevents dissipation of the resource’s value as we fight
over it; without excludability I could simply take the resource from you and
you could take it back, ad nauseam. Third, it promotes efficient investment:
companies will invest in creating or improving resources if they can also
reap the gains from the increase in value.49
   Things are trickier when excludability fails. Resources that are rival but
not excludable are common goods (in the bottom-left quadrant of the
diagram). Common goods are subject to a wasteful race that Garrett Hardin
termed the “tragedy of the commons” in his influential 1968 article.50 As
he argued,
     Adding together the component partial utilities, the rational herdsman
     concludes that the only sensible course for him to pursue is to add another
     animal to his herd. And another; and another. . . . But this is the
     conclusion reached by each and every rational herdsman sharing a

    47. See generally RICHARD CORNES & TODD SANDLER, THE THEORY OF EXTERNALITIES,
PUBLIC GOODS, AND CLUB GOODS 8–13 (1986); Charlotte Hess & Elinor Ostrom, Ideas,
Artifacts, and Facilities: Information as a Common-Pool Resource, LAW & CONTEMP.
PROBS., Winter/Spring 2003, at 111, 119–21.
    48. See Harold Demsetz, Toward a Theory of Property Rights, 57 AM. ECON. REV. 347,
354–55 (1967). Excludability, by making it possible to “own” a resource, thus makes it
possible to transact over it. See Steven N.S. Cheung, The Structure of a Contract and the
Theory of a Non-exclusive Resource, 13 J.L. & ECON. 49, 64–67 (1970).
    49. See, e.g., Harold Demsetz, The Public Production of Private Goods, 13 J.L. & ECON.
293, 293–94 (1970) (giving example involving slaughterhouse that supplies both meat and
leather).
    50. Garrett Hardin, The Tragedy of the Commons, 162 SCIENCE 1243 (1968).

     commons. Therein is the tragedy. Each man is locked into a system that
     compels him to increase his herd without limit—in a world that is limited.
     Ruin is the destination toward which all men rush, each pursuing his own
     best interest in a society that believes in the freedom of the commons.
     Freedom in a commons brings ruin to all.51
   The tragedy flows from the lack of excludability. The herdsmen race to
graze more sheep because no one can stop them, and they know that no one
will stop the others. If excludability could be restored and the sheep could
somehow be kept off the pasture, the race would be terminated before it got
out of hand.52 As Hardin put it, “the necessity of abandoning the
commons”53 would require “mutual coercion, mutually agreed upon”54 to
establish effective restrictions on overuse.
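Hardin's logic can be made concrete with a stylized marginal-payoff calculation. The gain and cost figures below are illustrative assumptions of mine, not numbers from Hardin's article; the point is only the asymmetry between a fully internalized gain and a fractionally shared cost:

```python
# Stylized tragedy-of-the-commons payoffs. The gain and cost numbers
# are illustrative assumptions, not figures from Hardin's article.

def marginal_payoff(n_herdsmen, private_gain=1.0, shared_cost=1.5):
    """Net payoff to one herdsman from adding one more animal:
    he keeps the whole gain but bears only 1/n of the grazing cost."""
    return private_gain - shared_cost / n_herdsmen

# A sole owner (exclusion) bears the full cost and rationally stops:
print(marginal_payoff(1))    # -0.5: adding the animal is a net loss

# Ten herdsmen on a common pasture each still profit from adding
# animals that destroy more value (1.5) than they create (1.0):
print(marginal_payoff(10))   # 0.85: each is "locked into" overgrazing
```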
   Hardin assumed that these restrictions could take one of two forms:
“private property” or governmental “allocation.”55 A sole private owner
absorbs the full costs and benefits of using the pasture and therefore will
choose the right number of sheep to graze.56 A government regulator, on
the other hand, could allow multiple shepherds access, but limit the number
of sheep each may graze so that the total comes out right. Either way, the
key is to recreate excludability.57 On our diagram, these are moves from
the bottom-left quadrant to the top-left.
   Henry Smith has generalized this dichotomy by recasting it as a division
between two organizational forms: exclusion and governance.58 Exclusion,
which corresponds to private property, involves giving one designated
gatekeeper complete control over the resource.59 In a governance regime,
on the other hand, multiple users are allowed to use the resource, but are
subject to rules specifying how, when, and in what ways.60 An exclusion
regime puts a fence around the pasture and gives one person the key; a
governance regime brands all the sheep and limits the number each person
may graze. The exclusion/governance distinction focuses on the



   51. Id. at 1244 (omission in original).
   52. Others had made a similar observation about another scarce natural resource that
functioned as a commons: fisheries. See Peder Anderson, “On Rent of Fishing Grounds”:
A Translation of Jens Warming’s 1911 Article, with an Introduction, 15 HIST. POL. ECON.
391 (1983); H. Scott Gordon, The Economic Theory of a Common-Property Resource: The
Fishery, 62 J. POL. ECON. 124 (1954); see also Cheung, supra note 48.
   53. Hardin, supra note 50, at 1248.
   54. Id. at 1247.
   55. Id. at 1245.
   56. See Demsetz, supra note 48, at 355.
   57. See CORNES & SANDLER, supra note 47, at 43; Hardin, supra note 50, at 1247
(“Consider bank-robbing. The man who takes money from a bank acts as if the bank were a
commons. How do we prevent such action?” (emphasis added)).
   58. Henry E. Smith, Exclusion Versus Governance: Two Strategies for Delineating
Property Rights, 31 J. LEGAL STUD. S453, S454–55 (2002).
   59. Id. at S454.
   60. Id. at S455.

institutional characteristics that matter, rather than on the formal label of
“property” or “regulation.”
   Exclusion and governance both depend on a source of authority to
enforce the rules that recreate excludability. To use Hardin’s term,
“coercion” was essential.61 But, as scholars of the commons have
recognized, that coercion need not come from above, in the form of the
state.62 It could also come from below, from the other users of the resource
themselves.
   Elinor Ostrom’s work shows that many communities have successfully
managed common resources.63 Her list of successes includes Spanish
irrigation ditches, Japanese forests, and even Swiss grazing meadows—
Hardin’s signature example of failure, turned on its head. These
communities have created and then enforced on themselves a governance
regime controlling use of their common resource. These bottom-up
institutions are neither archaic holdovers nor illusory bulwarks; under the
right circumstances, common ownership can thrive for hundreds of years.
   It doesn’t always work, though. Many common ownership regimes have
succeeded, but many others have failed.64 The question thus becomes, what
distinguishes the commons that work from the ones that suffer Hardin’s
tragic fate? Ostrom and others give lists of the core factors that make a
commons sustainable, including good institutions to gather information
about the resource, forums to discuss its management, graduated sanctions
to punish misusers, and community participation in making and enforcing
the rules.65
   In this tradition, one factor stands out as essential to the success of a
commons: community coherence. Mancur Olson’s theory of collective
action argues that small groups can better coordinate their actions than large
ones.66 Ostrom emphasizes the importance of well-defined boundaries, not
just around the resource, but around the community too.67 Robert
Ellickson’s study of Shasta County ranchers found that their norm-based
self-regulation worked because they formed a “close-knit group.”68 These
communities can transform a commons into a common-pool resource:69
they supplement their internal governance regime with exclusion towards



   61.  Hardin, supra note 50, at 1247.
   62.  See ELINOR OSTROM, GOVERNING THE COMMONS 13–15 (1990).
   63.  Id. at 51–88.
   64.  See id. at 143–78 (discussing “institutional failures and fragilities”).
   65.  See id. at 88–102.
   66.  MANCUR OLSON, THE LOGIC OF COLLECTIVE ACTION: PUBLIC GOODS AND THE
THEORY OF GROUPS 22–36 (1971).
   67. See OSTROM, supra note 62, at 91–92.
   68. ROBERT C. ELLICKSON, ORDER WITHOUT LAW: HOW NEIGHBORS SETTLE DISPUTES
15–40, 177–82 (1991).
   69. See Hess & Ostrom, supra note 47, at 120.

outsiders.70 Only a small, well-defined, tightly knit group can recognize
outsiders to keep them away, monitor and act against its own members with
the necessary intensity, and have a sufficiently strong incentive to bother.
   Size, in other words, ought to be fatal to commons self-management. All
else being equal, a small and coherent community is more likely to succeed
at running a commons; a large and diffuse one is more likely to botch the
job. The bigger the group, the greater the tendency towards ruin.
   Let’s call this strain of commons theory the Tragic story. Its basic lesson
is Hardin’s: commonly held resources are vulnerable to self-interested
overuse. That fate can be staved off through a variety of arrangements,
including private property, governmental regulation, or common self-
management. This last institutional form requires community members to
craft their own rules of appropriate use, monitor each other’s behavior, and
punish violators.71 This is a fragile enterprise; only strong and well-defined
communities will be able to sustain the constant work of self-control
required. Success requires closing the commons off to outsiders; to throw
the community open to them is to court disaster.
   This Tragic story has been frequently told about the Internet.
Telecommunications analysts predict an impending bandwidth crunch, as
users deplete the limited supply of available connectivity.72
Telecommunications companies complain that selfish bandwidth hogs are
destroying the Internet experience for other customers.73 Scholars warn
that weak incentives to cooperate make Wikipedia unsustainable74—or
perhaps online sharing more generally is doomed.75 The Tragic story




    70. This arrangement is also described in the literature as a “limited-access commons,”
see, e.g., Smith, supra note 58, at S458, or “limited common property,” see, e.g., Carol M.
Rose, The Several Futures of Property: Of Cyberspace and Folk Tales, Emission Trades
and Ecosystems, 83 MINN. L. REV. 129, 132 (1998).
    71. OSTROM, supra note 62, at 42–45, 92–100.
    72. See, e.g., BROADBAND WORKING GROUP, MASS. INST. OF TECH. COMMC’N FUTURES
PROGRAM, THE BROADBAND INCENTIVE PROBLEM 2 (2005), available at http://cfp.mit.edu/
docs/incentive-wp-sept2005.pdf (“The broadband value chain is headed for a train
wreck. . . . The broadband locomotive left the station with a critical missing piece: the
incentive for network operators to support many of the bandwidth-intensive innovations
planned by upstream industries and users.”); Bret Swanson, The Coming Exaflood, WALL ST.
J., Jan. 20, 2007, at A11 (“Today’s networks are not remotely prepared to handle this
exaflood.”).
    73. See, e.g., Kim Hart, Shutting Down Big Downloaders, WASH. POST, Sept. 7, 2007, at
A1 (“Comcast has punished some transgressors by cutting off their Internet service, arguing
that excessive downloaders hog Internet capacity and slow down the network for other
customers.”).
    74. See Eric Goldman, Wikipedia’s Labor Squeeze and Its Consequences, 8 J. ON
TELECOMM. & HIGH TECH. L. 157, 159–61 & n.12 (2010).
    75. Lior Jacob Strahilevitz, Wealth Without Markets?, 116 YALE L.J. 1472, 1493–504
(2007) (reviewing BENKLER, supra note 46) (“Taken together, these challenges are daunting,
and they might push social production to the peripheries of the new economy.”).

explains skepticism about YouTube’s business model76 and fears about the
rise of malware, botnets, and denial-of-service attacks.77
   If, as the Tragic story predicts, bigger means riskier, then the immense
Internet ought to be an immense smoking ruin. Peer-to-peer file sharing
and video downloads always look poised to overwhelm capacity in a host of
I-want-mine overuse. Just as soon as a few more people crank up their
usage, or one really clever bad apple figures out how to use it all up, it’ll be
the endgame for the Internet. As of this writing, though, the Internet still
stands, the Tragic story notwithstanding. To make sense of why that might
be, let’s return to commons theory—another strand of which has come to
almost exactly the opposite conclusion.

                   C. The Comedy of the (Nonrival) Commons
   So far, we’ve been discussing rival resources, in the left-hand column of
the diagram. Here, to repeat, the traditional economic view is that
efficiency flows from excludability; commons theory accepts that view but
offers a different, bottom-up way of creating exclusivity.
   Now, let’s turn to the right-hand column, where nonrival resources dwell.
Once again, traditional economic theory prizes excludability, albeit with
considerably less certitude than for rival resources. But this time, commons
theory takes an altogether more radical turn—arguing that exclusivity itself
is overrated.
   The starting point of the analysis is that many nonrival goods can be
shared with others for much less than it costs to make them in the first
place. Information in digital form, for example, can be copied and
transmitted around the world for almost nothing. But even tangible goods
can often be shared without imposing costs on current users: at 2:00 a.m., a
second car on the road doesn’t limit the first driver’s ability to use the
highway, too.78
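Frischmann's point about renewable capacity can be sketched numerically. The following toy model (all figures stylized, drawn from the highway example above) shows why a resource with finite but hourly-renewing capacity is effectively nonrival below its congestion threshold:

```python
# Toy model of a renewable-capacity resource (stylized figures only).
# A highway handles up to CAPACITY cars per hour; below that threshold an
# extra car imposes no congestion cost on anyone, and use in one hour
# leaves the next hour's capacity untouched -- effective nonrivalry.

CAPACITY = 2000  # cars per hour, the figure used in the text's example

def congestion_cost(cars_this_hour: int) -> int:
    """Stylized delay that current users impose on one another."""
    excess = max(0, cars_this_hour - CAPACITY)
    return excess  # below capacity: zero marginal congestion

# At 2:00 a.m., a second car costs the first driver nothing:
assert congestion_cost(2) == 0
# Capacity renews hour to hour: heavy use at 6 a.m. has no effect at 6 p.m.
assert congestion_cost(1800) == 0  # morning rush, still under threshold
assert congestion_cost(1800) == 0  # evening rush, capacity fully renewed
# Only past the threshold does rivalry bite:
assert congestion_cost(2500) > 0
```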

    76. See, e.g., Farhad Manjoo, Do You Think Bandwidth Grows on Trees?, SLATE, Apr.
14, 2009, http://www.slate.com/id/2216162/ (“YouTube has to pay for a gargantuan Internet
connection to send videos to your computer and the millions of others who are demanding
the most recent Dramatic Chipmunk mash-up. . . . [N]ot even Google can long sustain a
company that’s losing close to half a billion dollars a year.”).
    77. See, e.g., ZITTRAIN, supra note 2, at 43–54 (describing the “untenable” state of
online security).
    78. Brett M. Frischmann, An Economic Theory of Infrastructure and Commons
Management, 89 MINN. L. REV. 917, 945–46 (2005). To be more precise, as Brett
Frischmann explains, goods vary in their capacity to accommodate multiple uses. But even a
good with a finite capacity can still be effectively nonrivalrous if that capacity is also
renewable. Id. at 950–56. A stretch of highway may be able to accommodate 2000 cars per
hour, but its use at 6:00 a.m. has essentially no effect on its ability to accommodate cars at
6:00 p.m. As long as we’re beneath the level of use at which adding cars would create a
traffic jam now, the highway is nonrival. Yochai Benkler has developed this point into a
theory of “sharable” goods. See Yochai Benkler, Sharing Nicely: On Shareable Goods and
the Emergence of Sharing as a Modality of Economic Production, 114 YALE L.J. 273, 330–
44 (2004).

   Where the nonrival resource is also nonexcludable, and thus a pure
public good, this is a problem. As soon as the seller has created the good—
say, a photograph of a sheep—everyone else can have access to it for free;
only suckers and patsies would pay for it if they didn’t have to. But that
leaves the photographer with no economic incentive to go out and spend
days taking the perfect photograph, so she won’t create it in the first place,
which leaves no original for others to copy. Result: everyone loses.79
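The free-rider arithmetic behind this result is simple enough to state explicitly. In this sketch (all numbers invented for illustration), the photograph is socially worth making, yet nonexcludability leaves its creator with a loss:

```python
# Stylized public-good underproduction (all numbers invented).
# A photograph costs C to create; each of N potential viewers values it at v.
C = 500         # photographer's cost of getting the perfect sheep photograph
N, v = 1000, 2  # viewers, and each viewer's value from seeing it

social_surplus_if_created = N * v - C   # society gains 1500 if it exists

# But if copies are nonexcludable, everyone free rides and no one pays:
revenue = 0
photographer_payoff = revenue - C       # she nets -500, so she won't create it

assert social_surplus_if_created > 0    # the good is worth making...
assert photographer_payoff < 0          # ...but never gets made: everyone loses
```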
   The conventional response here has been to focus on making the resource
more excludable.80 This is the usual economic apology for granting a
photographer copyright over her photographs, for example.81 It allows her
to prevent the nonrival sharing that would otherwise flood the market with
cheap copies and undercut her ability to recoup her costs.82 The same logic
also explains various self-help substitutes for intellectual property, such as
end-user license agreements and digital rights management (DRM): they
all recreate excludability.83 If she can move all the way up the right-hand
column and make the resource perfectly excludable, then she can capture
the full social value of her work, giving her an efficient incentive for the
optimal level of creativity.84 In this respect, at least, full excludability
makes public nonrival goods look like private rival goods.85
   But excludability’s prevention of free riding is not a free lunch. For one
thing, it’s expensive to establish: IP laws have to be enforced, licenses have
to be drafted, and DRM has to be programmed.86 For another, excludability
can be harmful in itself. Even though the good (being nonrival) could be
shared freely or cheaply, a rational owner will instead price it to maximize
her profits. But that means she’ll sell it for more than some people would

    79. See WILLIAM M. LANDES & RICHARD A. POSNER, THE ECONOMIC STRUCTURE OF
INTELLECTUAL PROPERTY LAW 19–21 (2003).
    80. See, e.g., U.S. CONST. art. I, § 8, cl. 8 (“To promote the Progress of Science and
useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to
their respective Writings and Discoveries.” (emphasis added)).
    81. See, e.g., LANDES & POSNER, supra note 79, at 112–13.
    82. See Trotter Hardy, Property (and Copyright) in Cyberspace, 1996 U. CHI. LEGAL F.
217, 221–28.
    83. See Niva Elkin-Koren, Making Room for Consumers Under the DMCA, 22
BERKELEY TECH. L.J. 1119, 1119–20 (2007).
    84. See Demsetz, supra note 49, at 300–03 (arguing that the resulting equilibrium
“allocates resources efficiently to the production of the public good”).
    85. There are, however, other important ways in which they differ. Because these
nonrival goods have high fixed (or first-copy) costs but very low marginal costs, there’s an
enormous competitive advantage to being the bigger competitor in a market. Your average
costs will be lower than your competitors', helping you undercut their prices and seize the
whole of the market. This gives these markets—one kind of “network industry”—distinctive
economics and creates special managerial and regulatory challenges. See generally CARL
SHAPIRO & HAL R. VARIAN, INFORMATION RULES: A STRATEGIC GUIDE TO THE NETWORK
ECONOMY (1999); OZ SHY, THE ECONOMICS OF NETWORK INDUSTRIES (2001). The
semicommons analysis developed in this essay may have implications for these industries.
    86. It also has other unfortunate side effects: it cripples otherwise useful devices and
smothers innovation. See Wendy Seltzer, The Imperfect Is the Enemy of the Good:
Anticircumvention Versus Open Development, 25 BERKELEY TECH. L.J. (forthcoming 2010).

have been willing to pay;87 the good ends up being used less than would
have been socially efficient.88 Thus, the conventional economic narrative
of intellectual property law is of a dialectic pitting her ex ante incentives to
create the information good against the ex post value of broad access to it.89
Whatever balance we choose is likely to impose costs on both sides.
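The deadweight loss from pricing a nonrival good can be made concrete with a toy demand schedule (the numbers here are invented for illustration): since a copy costs nothing to provide, efficiency calls for serving every buyer, but the profit-maximizing price excludes the low-value ones.

```python
# Stylized deadweight loss from monopoly pricing of a nonrival good.
# Ten would-be buyers value a copy at 10, 9, ..., 1; the marginal cost of
# a copy is zero, so the efficient outcome is that all ten get copies.
values = list(range(10, 0, -1))

def profit(price: int) -> int:
    """Revenue at a given price: everyone valuing the good at least that much buys."""
    return price * sum(1 for v in values if v >= price)

best_price = max(range(1, 11), key=profit)
served = sum(1 for v in values if v >= best_price)

# The profit-maximizing price excludes some willing buyers...
assert best_price > 1 and served < len(values)
# ...forgoing surplus that free (marginal-cost) distribution would capture:
deadweight_loss = sum(v for v in values if v < best_price)
assert deadweight_loss > 0
```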
   Scholars have therefore looked for ways to avoid the difficulties of
finding market incentives to create public goods. One approach is right
there in the name: the government could directly invest in these “public”
goods. That’s the conventional way of paying for physical public goods,
like lighthouses.90 At times, governmental investment has also been used to
pay for information goods, such as the NEA’s grants to artists and prizes for
scientific discoveries.91 Once government funding succeeds in bringing
these goods into existence, they can be given away freely. Voilà: no costs
from imposing excludability.92
   Commons theory takes this idea—maximal circulation of information
goods at no cost—and runs with it. The key move is the recognition that
solving the ex post distribution problem can also, paradoxically, help solve
the ex ante production problem.93 Making information more widely
available doesn’t just benefit passive couch-potato information consumers;



    87. In a world in which she cannot price discriminate perfectly and costlessly, that is. If
she could, perfect price discrimination would also in theory lead to an efficient outcome, one
in which she appropriates all the value of the good, rather than other users. See James Boyle,
Cruel, Mean, or Lavish? Economic Analysis, Price Discrimination and Digital Intellectual
Property, 53 VAND. L. REV. 2007, 2021–35 (2000) (discussing “Econo-World” view of price
discrimination). But that world isn’t our world, and, in ours, price discrimination is costly
and imperfect, leaving us to argue over second bests. See, e.g., Yochai Benkler, An
Unhurried View of Private Ordering in Information Transactions, 53 VAND. L. REV. 2063,
2072 (2000).
    88. See Niva Elkin-Koren, Copyright Policy and the Limits of Freedom of Contract, 12
BERKELEY TECH. L.J. 93, 99–100 (1997).
    89. See Shyamkrishna Balganesh, Foreseeability and Copyright Incentives, 122 HARV. L.
REV. 1569, 1577–79 (2009).
    90. See R.H. Coase, The Lighthouse in Economics, 17 J.L. & ECON. 357, 360–62 (1974)
(discussing British lighthouse system, maintained by Trinity House, a governmental body).
But see id. at 363–72 (discussing history of private lighthouses in Britain).
    91. See, e.g., MASHA GESSEN, PERFECT RIGOR: A GENIUS AND THE MATHEMATICAL
BREAKTHROUGH OF THE CENTURY, at vii–xi (2009) (predicting that the $1,000,000 prize for
proof of Poincaré Conjecture is likely to be refused by mathematician who proved it); DAVA
SOBEL, LONGITUDE: THE TRUE STORY OF A LONE GENIUS WHO SOLVED THE GREATEST
SCIENTIFIC PROBLEM OF HIS TIME 16 (2006) (describing the £20,000 prize offered in 1714 for
discovery of an accurate method of determining longitude while at sea).
    92. On the other hand, having the government pay for it doesn’t solve the problem of
deciding how much to pay for it. Here, it’s even more difficult to decide how much the
government should spend for the sheep photograph. Since the photograph will ultimately be
given away for free, the government will find it well-nigh impossible to learn how much
each individual would have been willing to pay for it. See Paul A. Samuelson, The Pure
Theory of Public Expenditure, 36 REV. ECON. & STAT. 387, 388–89 (1954).
    93. See Frischmann, supra note 78, at 946–59.

it also helps other information producers.94 Information goods are critical
inputs into the production of other information goods, so increasing their
circulation gives creators more to work with.
   Information is the oxygen of the mind; lowering the cost of air lets minds
breathe more freely.95 All creativity is influenced and inspired by what has
come before; all innovation incrementally builds on past inventions. The
public domain is not simply a negative space of the unprotected, but a
positive resource of immense richness available to all.96 On this account,
reducing the excludability of nonrival information goods will often lead to
more information production, not less, because the reduced incentives for
creators will be more than outweighed by the increased access to raw
materials.97
   In a further twist, scholars of the information commons have argued that
often we don’t need any external incentives for the production of
information goods.98 In these cases, we can dispense with excludability
completely. Some people take photographs of sheep because they want the
pictures for themselves; others want to express a vision of pastoral serenity;
still others want to hone their skills with a camera, or to show off those
skills to potential employers. This diversity of motivations means that even
though the vast majority of photographers in the world are unpaid, they’re
still enthusiastically snapping pictures. Steven Weber’s studies of open
source software and Yochai Benkler’s theory of peer production emphasize
that personal expression, generosity, reciprocity, desire to show off, and
other purely social motivations can be just as strong as economic ones.99



    94. Indeed, one of the other virtues of commons theory is its willingness to recognize
that “consumers” and “producers” are often the exact same people, that individuals move
between these roles seamlessly in their cultural, social, and intellectual lives. See, e.g., Jack
M. Balkin, Digital Speech and Democratic Culture: A Theory of Freedom of Expression for
the Information Society, 79 N.Y.U. L. REV. 1, 4 (2004) (“Freedom of speech . . . is
interactive because speech is about speakers and listeners, who in turn become speakers
themselves. . . . [I]ndividual speech acts are part of a larger, continuous circulation.”). The
idea, however, has led to some unfortunate portmanteaus. See, e.g., DON TAPSCOTT &
ANTHONY D. WILLIAMS, WIKINOMICS: HOW MASS COLLABORATION CHANGES EVERYTHING
124–27 (2008) (“prosumer”); Erez Reuveni, Authorship in the Age of the Conducer, 54 J.
COPYRIGHT SOC’Y U.S. 285, 286–87 (2007) (“conducer”).
    95. Cf. Int’l News Serv. v. Associated Press, 248 U.S. 215, 250 (1918) (Brandeis, J.,
dissenting) (“The general rule of law is, that the noblest of human productions—knowledge,
truths ascertained, conceptions, and ideas—become, after voluntary communication to
others, free as the air to common use.”).
    96. See generally JAMES BOYLE, THE PUBLIC DOMAIN: ENCLOSING THE COMMONS OF THE
MIND (2008).
    97. See Frischmann, supra note 78, at 990–1003 (arguing that this claim is likely to hold
for certain kinds of infrastructural information).
    98. See, e.g., Yochai Benkler, Coase’s Penguin, or, Linux and The Nature of the Firm,
112 YALE L.J. 369, 423–36 (2002).
    99. See BENKLER, supra note 46, at 59–90; STEVEN WEBER, THE SUCCESS OF OPEN
SOURCE (2004).

   When they think about the ideal scale of an information commons, these
thinkers generally say, “the more the merrier.”100 There are network effects
from increased participation; the more people who are sharing with you, the
greater the riches available for you to draw on. They don’t cost you
anything; indeed they may actively help out your own creative processes,
for example by pointing out bugs in your open-source software.101 If the
community is engaged in cooking up a batch of informational stone soup,
the larger the community grows, the richer the soup becomes, and the less
of a burden the cooking places on any individual member.102 Moreover, as
Benkler argues, increased community scale leads to more opportunities for
productive collaboration, so that sharing catalyzes creativity and vice versa,
accelerating the virtuous circle.103
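The "more the merrier" logic of the stone soup can be reduced to two stylized functions (the functional forms and numbers here are invented for illustration): each member's benefit grows with the size of the pool, while the burden of upkeep, spread across the whole community, shrinks.

```python
# Toy model of the "stone soup" information commons (stylized forms).
# Each member contributes one ingredient to a pot everyone can draw on,
# and a fixed amount of upkeep is shared across all members.

def per_member_benefit(n: int, value_per_contribution: float = 1.0) -> float:
    """Nonrival sharing: each member enjoys all n contributions."""
    return n * value_per_contribution

def per_member_burden(n: int, total_upkeep: float = 100.0) -> float:
    """Fixed maintenance work, divided across the whole community."""
    return total_upkeep / n

# Bigger communities mean richer soup and a lighter load on each cook:
small, large = 10, 1000
assert per_member_benefit(large) > per_member_benefit(small)
assert per_member_burden(large) < per_member_burden(small)
```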
   Let’s call this strain of commons theory the “Comedic” story.104 It
applies to nonrival resources, and particularly to information goods. To
review, its basic argument is that repudiating excludability is often better
than embracing it. Since the resources are nonrival, free riding poses no
threat of waste. Instead, a commons ensures the maximum possible use of
valuable information, avoiding the waste associated with exclusive rights.
The incentives to produce and share come from the internal and social
motivations of participants, motivations that under the right circumstances
may even be supplied by the commons itself.
   Like its doppelgänger, the Comedic story has also frequently been told
about the Internet.105 The blogosphere, built on an ethos of sharing one’s
own thoughts and linking to others’, is numerically dominated by
noncommercial blogs written for personal reasons; even bloggers who make
money selling ads still give the actual words away.106 The last half-decade
on the Web has been the great era of user-generated content (UGC) sites like YouTube, Flickr,
Facebook, and Twitter—all of which offer users access to content uploaded,
for unpaid sharing, by other users. Sharing makes the Web go round.107

   100. The phrase comes from Carol Rose, The Comedy of the Commons: Custom,
Commerce, and Inherently Public Property, 53 U. CHI. L. REV. 711, 768 (1986).
   101. See ERIC S. RAYMOND, THE CATHEDRAL AND THE BAZAAR: MUSINGS ON LINUX AND
OPEN SOURCE BY AN ACCIDENTAL REVOLUTIONARY 33–36 (2001).
   102. Eric Raymond gives the metaphor of a “magic cauldron” that produces soup ex
nihilo, then argues that open source software is that cauldron made real. See id. at 115.
   103. BENKLER, supra note 46; Benkler, supra note 98, at 415 (“[T]here are increasing
returns to the scale of the pool of individuals, resources, and projects to which they can be
applied.”).
   104. The inspiration for this term comes from Carol Rose’s remarkable Comedy of the
Commons, supra note 100.
   105. See, e.g., Benkler, supra note 98, at 404–05.
   106. See SCOTT ROSENBERG, SAY EVERYTHING: HOW BLOGGING BEGAN, WHAT IT’S
BECOMING, AND WHY IT MATTERS 163–97 (2009).
   107. Cf. Eben Moglen, Anarchism Triumphant: Free Software and the Death of
Copyright, FIRST MONDAY, Aug. 2, 1999, http://firstmonday.org/htbin/cgiwrap/bin/ojs/
index.php/fm/article/view/684/594 (“So Moglen’s Metaphorical Corollary to Faraday’s Law
says that if you wrap the Internet around every person on the planet and spin the planet,
software flows in the network. It’s an emergent property of connected human minds that

There’s also a strong argument that many of these sharing-based sites are
successfully outcompeting their more restricted competitors. Wikipedia’s
outrageous success, as compared with Nupedia, Citizendium, Knol,
Encarta, and every other would-be online encyclopedia, could reasonably
be attributed to its extraordinary openness to unfiltered contributions from
anyone.108
   And on a deeper level, one noted by both Post109 and Zittrain,110 the
Internet is itself largely built with nonproprietary technical standards, free
for anyone to reuse and implement for themselves.111 Even when private
companies develop and commercialize services, they’ve thrived best when
the companies have released well-documented public interfaces, free for
anyone to use and build upon in making new mashup applications.112 Even
much of the software on which the Internet itself runs is the freely shared
product of open-source development, carried out collaboratively
worldwide . . . on the Internet.113

                               III. THE SEMICOMMONS
   It should by now be clear that the Tragic and Comedic stories point in
diametrically opposite directions. The Tragic story embraces exclusion; it
tells us that the only way to make a commons work is to make it small and
jealously keep outsiders out.114 The Comedic story rejects exclusion; it
tells us that the best way to make a commons thrive is to make it large and
invite in as many participants as possible.115 As applied to the immensity
of the Internet, the Comedic story predicts utopia and the Tragic story
predicts utter devastation.
   The Internet’s success at scale suggests that there must be something to
the Comedic story’s optimism, but so far, we have no good theoretical

they create things for one another’s pleasure and to conquer their uneasy sense of being too
alone.”).
   108. See, e.g., ZITTRAIN, supra note 2, at 133 (closure of Nupedia); Randall Stross,
Encyclopedic Knowledge, Then vs. Now, N.Y. TIMES, May 3, 2009, at BU3 (end of Encarta);
On Wikipedia, On Citizendium, http://onwikipedia.blogspot.com/2010/01/on-
citizendium.html (Jan. 18, 2010, 16:06) (“stagnat[ion]” of Citizendium); Erick Schonfeld,
Poor Google Knol Has Gone from a Wikipedia Killer to a Craigslist Wannabe,
TECHCRUNCH, Aug. 11, 2009, http://techcrunch.com/2009/08/11/poor-google-knol-has-
gone-from-a-wikipedia-killer-to-a-craigslist-wannabe/ (decline of Knol).
   109. POST, supra note 1, at 133–41.
   110. ZITTRAIN, supra note 2, at 141.
   111. These same standards were in many cases developed in open participatory processes,
where all-important decisions were made on a consensus basis. See MILTON L. MUELLER,
RULING THE ROOT: INTERNET GOVERNANCE AND THE TAMING OF CYBERSPACE 89–94 (2002).
   112. See Tim O’Reilly, What Is Web 2.0, O’REILLY, Sept. 30, 2005, http://oreilly.com/
web2/archive/what-is-web-20.html; cf. ZITTRAIN, supra note 2, at 181–85 (discussing “API
neutrality”).
   113. See generally GLYN MOODY, REBEL CODE: THE INSIDE STORY OF LINUX AND THE
OPEN SOURCE REVOLUTION (2001).
   114. See supra Part II.B.
   115. See supra Part II.C.

reason to pick one or the other. The Tragic story seems perfectly plausible
as a description of the sad fate awaiting all of the shared and all-too-
exhaustible aspects of the Internet: bandwidth, server space, processor
cycles, and human attention. The Comedic story seems equally plausible as
a description of the great achievements that result from instant, inexpensive,
worldwide sharing of inexhaustible information. Our task for this part will
be to reconcile the two.

                         A. The Open-Field Semicommons
  I’d like to suggest that the right abstraction is Henry Smith’s theory of a
semicommons:
     In a semicommons, a resource is owned and used in common for one
     major purpose, but, with respect to some other major purpose, individual
     economic units—individuals, families, or firms—have property rights to
     separate pieces of the commons. Most property mixes elements of
     common and private ownership, but one or the other dominates. . . . In
     what I am calling a semicommons, both common and private uses are
     important and impact significantly on each other.116
On Smith’s account, a resource must satisfy two conditions to be a good
candidate for semicommons ownership. There must be multiple possible
uses of the resource that are efficient at different scales, so that one use is
naturally private, and one is naturally common. These uses must also have
significant positive interactions with each other, so that there will be a
benefit from allowing both rather than choosing one. The combination of
scale mismatch and positive interactions offers rewards for mixing private
and common.117
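Smith's two conditions can be illustrated with a toy surplus comparison (all payoffs invented for illustration): grazing is efficient only at village scale, farming only at family scale, and grazing fertilizes farming. Under these assumptions, the mixed regime beats either pure regime.

```python
# Stylized comparison of ownership regimes (all payoffs invented).
# Grazing is efficient at village scale; farming at family scale; and
# grazing fertilizes the fields -- the positive interaction.
FARM_VALUE_PER_PLOT = 10    # private farming, per family plot
GRAZE_VALUE_VILLAGE = 40    # one large flock over the whole village's land
MANURE_BONUS_PER_PLOT = 3   # fertilizer boost when sheep graze fallow plots
PLOTS = 10

all_private = PLOTS * FARM_VALUE_PER_PLOT    # farming only: 100
all_common  = GRAZE_VALUE_VILLAGE            # grazing only: 40
semicommons = (PLOTS * (FARM_VALUE_PER_PLOT + MANURE_BONUS_PER_PLOT)
               + GRAZE_VALUE_VILLAGE)        # both uses plus interaction: 170

# With scale mismatch and positive interactions, mixing dominates:
assert semicommons > max(all_private, all_common)
```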
   Smith’s “archetypal example of a semicommons is the open-field system
of medieval [Europe].”118 Sheep could be grazed freely across the fields of
a village during fallow seasons, but during growing seasons, individual
farmers had exclusive rights to their strips of land. The same fields were
held in common for grazing and privately for farming: a semicommons.
As he shows, the open-field system displayed both scale mismatch and
positive interactions.119
   First, the two valuable uses of its land—grazing sheep and raising
crops—were efficient at different scales. Medieval grazing had scale
economies: one shepherd could watch a large flock on a correspondingly
large pasture.120 Medieval farming was a labor-intensive, small-scale
affair. Each farmer could plow, seed, tend, and harvest only a limited


  116. Smith, supra note 6, at 131–32.
  117. See id. at 168.
  118. Id. at 132. Yes, sheep and pastures again. Hardin’s tragic commons and Ostrom’s
potentially sustainable one are the same as Smith’s semicommons, just theorized differently.
  119. Id.
  120. Id. at 135–36.

quantity of land. Nor was there much benefit in teaming up; two men
couldn’t plow the same furrow, and combining holdings would have
tempted each farmer to shoulder less than his share of the work.121
   Thus, grazing made sense as a commons in which each villager was
entitled to contribute sheep to a large flock grazed across large tracts of
land. This is a natural governance regime: the extent of each villager’s use
could easily be monitored by counting his sheep. On the other hand,
farming made sense as private property in which each villager farmed his
own small plot of land. This is a natural exclusion regime: it’s easy to tell
who’s harvesting from which piece of land.122
   As for positive interactions, the same land could profitably be used for
both farming and grazing. Land needs to sit fallow between growing
seasons;123 a village might as well graze sheep during the off-season.124
Better still, the best source of fertilizer for the fields was the manure left
behind by the sheep.125 Thus, the private plots of land worked better for
their private purpose because they were also open to the common use of
grazing sheep.
   In addition to explaining why a semicommons might come into being,
Smith’s theory also explains some of the distinctive threats it will face from
strategic behavior.126 I’d like to emphasize three. First, in a semicommons,
users will be tempted not just to overuse the common resource, but to
strategically dump the costs onto others’ private portions, bearing none of
the costs themselves.127 On a rainy day, when trampling hooves will do the
most damage and create the most mud, a shepherd might be tempted to
direct the herd onto someone else’s plot of land and well away from his
own.128 Second, users may be tempted to take expensive and socially
wasteful precautions to guard against others’ strategic uses—say, sitting
outside all day in the rain to watch the shepherd.129 And third, private users
will be tempted to disregard the commons use in pursuit of their private
gain;130 imagine a farmer who decides to plant a profitable crop that’s
poisonous to sheep. The semicommons only makes sense if the benefits
from combining the private and common uses outweigh the costs from
these kinds of strategic behavior.131
   Semicommons also have important strategies for dealing with these
challenges. One is sharing rules, in which some of the private portions of

  121.   See id. at 136–38.
  122.   See Robert C. Ellickson, Property in Land, 102 YALE L.J. 1315, 1327–30 (1993).
  123.   And sometimes during them, as required by crop rotation.
  124.   Smith, supra note 6, at 132, 135.
  125.   Id. at 136.
  126.   Id. at 138–41.
  127.   Id. at 138–39.
  128.   Id. at 149.
  129.   Id. at 140–41.
  130.   Id. at 141.
  131.   Id. at 141–42.

the resource are collected and divided among the various users.132 Smith
gives the example of general average in admiralty, in which all those with
an interest in a ship or its cargo must contribute proportionately to
reimburse anyone whose property is damaged in the course of avoiding a
common peril.133 That eliminates the captain’s temptation to throw other
people’s cargo overboard first. In our hypothetical village, pooling some of
the crops after each season would be a sharing rule protecting victims of
excessive trampling.
   Another characteristic semicommons device is boundary-setting.134
Smith’s example here is scattering; each villager’s landholdings were
divided into multiple small strips in different fields, rather than one larger
plot.135 Scattering was costly; farmers sometimes got confused about
which strip was theirs.136 But it also made abusive herding less attractive.
With many thin strips, it’s harder for the shepherd to park the sheep over his
own plot while they poop, and over someone else’s plot while they stomp.
Getting the property boundaries right thus helps prevent strategic behavior.

                           B. The Internet Semicommons
   Smith’s semicommons model accurately describes the Internet. We’ll
see every element of it online: private and common uses of the same
resource, efficient at wildly different scales, but productively intertwined;
strategic behavior that also causes these uses to undermine each other;137
sharing rules and boundary-setting to keep the whole thing functioning.138
The productive but fraught interplay between private and common uses in a
semicommons elegantly captures the tension between the Tragic and
Comedic stories on the Internet.
   Let’s start with the private and common uses. On the one hand, the
computers and network cables that actually make up the Internet are private
personal property, managed via exclusion. My laptop is mine and mine
alone. If I decide to laser-etch its case, or to wipe it clean and reinstall the
operating system, no one can stop me. Nor can anyone else decide what
outlet it’s plugged into; if you try to treat it as common property and take it
home with you, you’ll be brought up on charges. The same goes for
Rackspace’s servers139 and Level3’s fiber-optic network:140 private
property, all of it.

   132. Id. at 142 & n.27.
   133. Id.
   134. Id. at 161–67.
   135. Id. at 146–54.
   136. Id. at 147–48.
   137. See infra Part IV.A.
   138. See infra Part IV.C.
   139. See Rackspace, Definitions and Technical Jargon of the Hosting Industry,
http://www.rackspace.com/information/hosting101/definitions.php (last visited Apr. 10,
2010) (“In a managed hosting environment, the provider owns the data centers, the network,
the server and other devices, and is responsible for deploying, maintaining and monitoring

   On the other hand, as a communications platform,141 the Internet is
remarkably close to a commons, managed via governance in the form of
technical standards and protocols.142 Fill out an IP datagram with the 32-bit
IP address of a computer you’d like to send a message to, and dozens of
routers will cheerfully cooperate to get it there.143 The destination
computer has also likely been configured to collaborate with you. Send it
an HTTP GET message and you’ll get back a Web page;144 send it an
SMTP HELO and it will get ready to accept an e-mail from you.145 With
this kind of support—yours for the asking—you and your friends can set up
a new website, a new online application, a new protocol, a new peer-to-peer
network, a new whatever you want. That’s common use of a fairly
profound sort—precisely as Post and Zittrain describe.146
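The cooperation just described happens through short, human-readable protocol messages. As a minimal sketch (the hostnames are placeholders, not real servers), these helpers build the literal bytes a client would put on the wire for an HTTP GET and an SMTP HELO:

```python
def http_get_request(host: str, path: str = "/") -> bytes:
    # The text an HTTP/1.1 client sends to ask a Web server for a page.
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n").encode("ascii")

def smtp_helo(client_hostname: str) -> bytes:
    # The greeting an SMTP client sends to begin an e-mail transaction (RFC 821).
    return f"HELO {client_hostname}\r\n".encode("ascii")
```

Piped over a TCP connection to port 80 or port 25 respectively, these few lines are all it takes to ask a stranger's server for a Web page or to begin handing it an e-mail.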
   Next, these private and common uses are efficient at different scales.
Private property makes sense for individual computers and cables; the




them. . . . Dedicated Hosting . . . allows customers to lease pre-configured, dedicated
equipment and connectivity from the provider.”).
  140. See Level3, Our Network, http://www.level3.com/index.cfm?pageID=242 (last
visited Apr. 10, 2010) (“The Level 3 Communications® Network today operates as one of
the largest IP transit networks in North America and Europe.”).
   141. To be clear, I’m focusing more on the tangible communications platform as a common
use than on the intangible information exchanged over it. The nonrivalry of information
and the Comedic story explain why the Internet is so valuable as a communications platform;
they don’t actually make the Internet into information or eliminate the challenges of rivalry.
We’ve faced the problem of exchanging nonrival information over rival communications
infrastructure for a long time; the Internet is just better at the task than its predecessors. See
generally Frischmann, supra note 78, at 1005–22 (describing the Internet as infrastructure).
Robert Heverly has written an important and illuminating article on how intellectual property
law makes information a semicommons of positively interacting private and common uses.
Robert A. Heverly, The Information Semicommons, 18 BERKELEY TECH. L.J. 1127 (2003);
see also Lydia Pallas Loren, Building a Reliable Semicommons of Creative Works:
Enforcement of Creative Commons Licenses and Limited Abandonment of Copyright, 14
GEO. MASON L. REV. 271 (2007); Robert P. Merges, A New Dynamism in the Public Domain,
71 U. CHI. L. REV. 183 (2004) (describing Creative Commons as a “semicommons” in
Heverly’s sense).
  142. See generally LAURA DENARDIS, PROTOCOL POLITICS: THE GLOBALIZATION OF
INTERNET GOVERNANCE (2009) (discussing governance role of Internet standards).
  143. See 1 DOUGLAS COMER, INTERNETWORKING WITH TCP/IP: PRINCIPLES, PROTOCOLS,
AND ARCHITECTURE (5th ed. 2006) (describing IP datagram format and routing); INFO. SCIS.
INST., RFC 791, INTERNET PROTOCOL: DARPA INTERNET PROGRAM PROTOCOL
SPECIFICATION § 2.3 (1981), http://tools.ietf.org/html/rfc791 (“The function or purpose of
Internet Protocol is to move datagrams through an interconnected set of networks. This is
done by passing the datagrams from one internet module to another until the destination is
reached.”).
  144. See generally DAVID GOURLEY & BRIAN TOTTY, HTTP: THE DEFINITIVE GUIDE
(2002) (describing HTTP standard).
  145. See JOHN RHOTON, PROGRAMMER’S GUIDE TO INTERNET MAIL: SMTP, POP, IMAP,
AND LDAP (2000); JONATHAN B. POSTEL, RFC 821, SIMPLE MAIL TRANSFER PROTOCOL
(1982), http://tools.ietf.org/html/rfc821 [hereinafter RFC 821] (detailing SMTP standard).
  146. See POST, supra note 1, at 101–03; ZITTRAIN, supra note 2, at 26–35.

Tragic story warns us to restrict access.147 Computers are rival: they can
be stolen, hijacked, or crashed. Hardware remains expensive enough that
people will try to get their hands on it (physically or virtually), and when
they succeed, it creates real costs for others.148 Private property empowers
individual owners to protect against laptop thieves, virus writers, and botnet
wranglers.149
   Moreover, private use of computers aligns incentives well. Computers
require their owners to invest time, effort, and money: buying the
hardware, setting up the software, keeping the power and bandwidth
flowing. An exclusion regime both allows and encourages computer
owners to select the configuration that’s value-maximizing for them. I use
my laptop to write papers, check e-mail, and surf the Web wirelessly; you
use your broadband-connected desktop to run regressions and play World
of Warcraft. Our ideal computers are profoundly different; asking us to
play sysadmin for each other would only pile up the agency costs. All of
this pushes towards small-scale private ownership.
   On the other hand, commons use makes sense when we look at the
Internet as a communications platform. The Comedic story tells us that
where information exchange is concerned, we should design for the widest
participation possible. A communications network’s value plummets if it’s
fragmented.150 If you have something to say to even one other person, you
may as well post it publicly, so that anyone else can take advantage of it.
When you do, better to share with the world than with any smaller group.151
   Further, this large-scale communications platform wouldn’t work
efficiently if it were private. Scholars have noted the immense transaction



   147. Cf. Henry E. Smith, Governing the Tele-Semicommons, 22 YALE J. ON REG. 289
(2005) (applying semicommons theory to argue against the use of common property
treatment of individual physical network elements).
   148. See Joris Evers, Computer Crime Costs $67 Billion, FBI Says, CNET NEWS, Jan. 19,
2006, http://news.cnet.com/Computer-crime-costs-67-billion,-FBI-says/2100-7349_3-
6028946.html.
   149. See, e.g., ZITTRAIN, supra note 2, at 159 (advocating “a simple dashboard that lets
the users of PCs make quick judgments about the nature and quality of the code they are
about to run”). Such sentiments assume that users have the sort of autonomy over their PCs
that a private property owner would, a principle Zittrain strongly endorses. See id. at 108–09.
   150. See Andrew Odlyzko & Benjamin Tilly, A Refutation of Metcalfe’s Law and a
Better Estimate for the Value of Networks and Network Interconnections 4 (Mar. 2, 2005)
(unpublished manuscript), available at http://www.dtc.umn.edu/~odlyzko/doc/metcalfe.pdf
(arguing that the value of an n-user network grows as n log n); see also Bob Briscoe et al.,
Metcalfe’s Law Is Wrong, IEEE SPECTRUM, July 2006, at 35 (later version of Odlyzko &
Tilly article).
   151. Lauren Gelman observes that users often post sensitive information in publicly
accessible ways online. Her point is that public accessibility allows you to reach others who
share your interests, even when you couldn’t identify them at the time of the posting. The
value of reaching them can outweigh even significant privacy risks of being noticed by
outsiders. See Lauren Gelman, Privacy, Free Speech, and “Blurry-Edged” Social Networks,
50 B.C. L. REV. 1315, 1334–35 (2009).

costs of negotiating individual permission from every system owner,152
along with the potential anticommons holdup problems.153 It’s almost a
mantra at this point that if you want to create a successful online service,
you need to make it free and freely available.154 And even as ISPs threaten
to introduce stringent usage caps, they can barely manage to tell customers
how much bandwidth they’re actually using.155 The commons has a
powerful hold on the Internet as communications platform.
   In other words, the Tragic story is right, for the individual computers and
wires that constitute the Internet. It recommends private property, which is
indeed how these physical resources are held. And the Comedic story is
also right, for those computers and wires considered as the communications
platform that is the Internet. It recommends an unmanaged commons,
which is indeed how this virtual resource is held.
   Not only are both stories right, they’re right at wildly different scales.
Remember how there are over a billion users and over two hundred million
websites on the Internet?156 That’s a difference of eight or nine orders of
magnitude between the scale of the individual computers on which private
owners operate and the scale of the worldwide common platform. That
divergence isn’t a one-time anomaly; it’s what the Internet does, every
millisecond of every day, everywhere in the world.
   Satisfying Smith’s other condition, these two uses “impact . . . on each
other”157 profoundly and positively. You couldn’t build a common global
network without the private infrastructure to run it on, but most of the value
of that infrastructure comes from the common global network dancing atop
it. To see why it’s the interaction that adds the value, remember that the
private owners could disconnect their computers from the semicommons at
any moment—and choose not to. Would you buy a computer incapable of
being connected to the Internet? Neither would I. This is a semicommons
that works.       Indeed, the Internet is probably the greatest, purest
semicommons in history.




   152. Paul Ohm has pointed out that in many cases, just trying to meter or monitor these
information flows—a necessary step in privatizing them—would be ruinously costly. See
Paul Ohm, The Rise and Fall of Invasive ISP Surveillance, 2009 U. ILL.
L. REV. 1417.
   153. See, e.g., Dan Hunter, Cyberspace as Place and the Tragedy of the Digital
Anticommons, 91 CAL. L. REV. 439, 500–14 (2003).
   154. See Chris Anderson, Free! Why $0.00 Is the Future of Business, WIRED, Mar. 2008,
at 140.
   155. See, e.g., Jesse Kirdahy-Scalia, One Year After Capping Bandwidth, Comcast Still
Offers No Meter, OPEN MEDIA BOSTON, Aug. 21, 2009, http://www.openmediaboston.org/
node/860.
   156. See supra notes 22–23 and accompanying text.
   157. Smith, supra note 6, at 132.

                          C. The Generative Semicommons
   In a moment, we’ll complete this portrait of the Internet semicommons
by looking at its characteristic forms of strategic behavior. But first, I’d
like to point out how semicommons theory offers a fresh perspective on
generativity. The Future of the Internet describes the generativity of both
the personal computer (PC)158 and the Internet,159 which map neatly onto
the private and common aspects of the Internet.
   A generative PC is one that its owner fully controls. Not a tethered
appliance that’s physically yours but practically under someone else’s
governance.160 Not a cloud service that you might be excluded from
tomorrow.161 And not an insecure box actually under the control of a
shadowy Elbonian hacker syndicate. No, the generative PC is your private
property, yours to do with exactly as you choose.
   The generative Internet, on the other hand, is defined by its
connectedness and commonality. Once you have a great new hack, the best
thing to do is to send it out to others, so they can replicate its benefits for
themselves and build their own improvements on it. That means the
network ought to be as common as possible; you should be able to share
your innovations with anyone, not subject to any third party’s veto.
Although Zittrain rejects a “categorical” end-to-end principle,162 he
treasures the way that the Internet’s lack of control permits a “flexible,
robust platform for innovation.”163
   The two stages of generativity—creating and sharing—thus map onto
private and common, respectively. The cycle works best when the two are
not just available, but conjoined. Generativity is the story of the Internet as
innovating semicommons.

                                    IV. CHALLENGES
   As a semicommons, the Internet is valuable because it combines private
and commons uses. Merely saying that it does, however, gives little
guidance on how to make this coexistence work. Indeed, one of Smith’s
central points is that problems of strategic behavior are actually worse in a
semicommons than in a pure commons.164 His account of the open-field
system doesn’t just explain why it might make sense to treat the village
fields as a semicommons; it also describes how villagers were able to solve
these strategic behavior problems.165 In so doing, he also provides a


  158.   ZITTRAIN, supra note 2, at 11–18.
  159.   Id. at 19–35.
  160.   Id. at 101–04.
  161.   Jonathan Zittrain, Op-Ed., Lost in the Cloud, N.Y. TIMES, July 20, 2009, at A19.
  162.   ZITTRAIN, supra note 2, at 165.
  163.   Zittrain, supra note 33, at 1990.
  164.   Smith, supra note 6, at 138–39.
  165.   See supra Part II.A.

potentially more satisfying explanation of one of the characteristic features
of the open fields: scattering.166
   This part will do the same for the Internet: link the theoretical question
of the mitigation of strategic behavior in a semicommons to well-known
descriptive characteristics of the Internet. I’ll argue that the success of
many Internet technical features and online institutions can be cleanly
explained in terms of their ability to overcome semicommons dilemmas.
Part IV.A discusses “layering,” one of the Internet’s basic technical
characteristics, in which one protocol provides a clean abstraction atop
which another can run, and so on repeatedly. Layering helps mediate the
private/common interface and helps prevent these uses from interfering
with each other. Part IV.B takes up UGC sites, which exhibit a consistent
correlation between a community and a particular piece of infrastructure.
This linkage enables them to use governance rather than exclusion to detect
and prevent misuse of their little corner of the Internet. And Part IV.C
looks at some problems of boundary-setting, in particular how the Usenet
system of distributed bulletin boards has failed to cope with strategic
behavior. Its experience tells us much about the challenges of institution
design on the Internet.

                                      A. Layering
  The term “layering” comes from computer science, where it describes the
division of a system into components, each of which only interacts with the
ones immediately “above” or “below” it.167 Programmers and system
designers use layered architectures because they enable modularity: each
component can be designed without needing to know how the others work,
which reduces the complexity of the programming task and simplifies
debugging by reducing interactions between components.168
  On the Internet, layering is prevalent. When one user writes another an
e-mail, the actual exchange typically involves six or more layers. The e-
mail (“content” layer) is sent from one e-mail program to another
(“application” layer) using the Simple Mail Transfer Protocol (another
application, but one operating at the service of e-mail programs), which
uses the Transmission Control Protocol to open a stable connection from
one computer to the other (“transport” layer). That connection, in turn, is

  166. Smith, supra note 6, at 146–54 (arguing that border-setting semicommons
explanation of scattering is superior to other economic explanations).
  167. See 1 W. RICHARD STEVENS, TCP/IP ILLUSTRATED: THE PROTOCOLS 1–6 (1994);
ANDREW S. TANENBAUM, COMPUTER NETWORKS 26 (4th ed. 2003); TELECOMM.
STANDARDIZATION SECTOR, INT’L TELECOMM. UNION, ITU–T RECOMMENDATION X.200,
INFORMATION TECHNOLOGY—OPEN SYSTEMS INTERCONNECTION—BASIC REFERENCE
MODEL: THE BASIC MODEL § 5.2, at 6–8 (1994), http://www.itu.int/rec/dologin_pub.asp?
lang=e&id=T-REC-X.200-199407-I!!PDF-E&type=items.
  168. See STEVE MCCONNELL, CODE COMPLETE: A PRACTICAL HANDBOOK OF SOFTWARE
CONSTRUCTION 94–108 (2d ed. 2004); HERBERT A. SIMON, THE SCIENCES OF THE ARTIFICIAL
195–200 (3d ed. 1996).

made up of a sequence of discrete IP datagrams (“network” layer), which in
turn are moved from one computer to the other using a network protocol
such as Ethernet (“link” layer) that is tailored to run well on specific
hardware like a Category 5e twisted-pair copper cable (“physical” layer).
The e-mail program doesn’t need to know how Ethernet works, and vice-
versa.169
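The handoff just described can be sketched as successive encapsulation: each layer treats everything above it as an opaque payload and wraps it in a header of its own. In this toy model (the bracketed "headers" are invented placeholders, not real protocol formats), the sender wraps downward through the stack and the receiver unwraps upward:

```python
# Toy protocol stack, listed from the application layer down to the link layer.
LAYERS = ["SMTP", "TCP", "IP", "Ethernet"]

def encapsulate(message: str) -> str:
    # Each lower layer wraps the layer above it as an opaque payload.
    packet = message
    for layer in LAYERS:
        packet = f"[{layer}|{packet}]"
    return packet

def decapsulate(packet: str) -> str:
    # The receiver strips headers in the opposite order, outermost first.
    for layer in reversed(LAYERS):
        prefix = f"[{layer}|"
        if not (packet.startswith(prefix) and packet.endswith("]")):
            raise ValueError(f"malformed {layer} frame")
        packet = packet[len(prefix):-1]
    return packet
```

Encapsulating "Hello" yields `[Ethernet|[IP|[TCP|[SMTP|Hello]]]]`, and decapsulation recovers the original message. Because each layer sees only its own header and an opaque payload, the e-mail program and Ethernet never need to know how the other works.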
   For our purposes, one layer is more equal than the others: the network
layer, where IP is universal.170 Lower layers have a diversity of networking
protocols and hardware; higher layers have a diversity of transports and
applications. But there is only one network-layer protocol worthy of note
on the Internet: IP.171 Everyone uses it. The Internet itself can be defined
as the global network of computers using IP to communicate with each
other.172
   As legal scholars have recognized, layering has policy implications.173
In particular, it permits different resource allocation regimes at different
layers. The same network may be fully private at the physical layer (only
the company IT manager can enter the room with the server), a limited-
access common-pool resource at the link layer (only employees in the
building can connect to it, but they can do so freely), a governed open-to-
the-world common-pool resource at the application layer (the company
allows outside e-mail connections but filters them for spam), and mixed
commons and private at the content layer (outside users send the employees
both proprietary company documents and freely shared jokes). These
different regimes coexist: the same physical network is simultaneously
participating in all of them.174 Any given electrical signal is meaningful at
multiple layers.


  169. See TANENBAUM, supra note 167, at 37–71.
  170. To be more precise, this claim would also include the ancillary routing protocols,
such as the Border Gateway Protocol (BGP) and the Routing Information Protocol (RIP),
that tell IP-implementing systems which other computers they should forward IP traffic
through. See COMER, supra note 143, at 249–93 (describing BGP and RIP).
  171. The current version of IP in broad use is version 4; there is a worldwide effort
underway to upgrade to version 6. But IP’s universality makes this upgrade both technically
challenging and politically contentious: any widely adopted change to IP will change the
nature of the Internet itself. See DENARDIS, supra note 142, at 4; LAWRENCE LESSIG, CODE:
AND OTHER LAWS OF CYBERSPACE 207–09 (1999) (“[W]e again will have to decide whether
this architecture of regulability is creating the cyberspace we want. A choice. A need to
make a choice.”).
  172. See James Grimmelmann & Paul Ohm, Dr. Generative—Or How I Learned To Stop
Worrying and Love the iPhone, 69 MD. L. REV. (forthcoming 2010) (book review).
  173. See, e.g., LAWRENCE LESSIG, THE FUTURE OF IDEAS: THE FATE OF THE COMMONS IN A
CONNECTED WORLD 23–25 (2001); Lawrence B. Solum & Minn Chung, The Layers
Principle: Internet Architecture and the Law, 79 NOTRE DAME L. REV. 815, 818–20 (2004);
Kevin Werbach, A Layered Model for Internet Policy, 1 J. ON TELECOMM. & HIGH TECH. L.
37, 57–60 (2002); Timothy Wu, Application-Centered Internet Analysis, 85 VA. L. REV.
1163, 1188–93 (1999).
  174. See Solum & Chung, supra note 173, at 838–44 (discussing coexistence of multiple
layers).

   IP plays a critical role in this system of overlapping regimes; it is the
layer on which the Internet is most fully a commons. Beneath are privately
owned hardware, managed networks, and (sometimes) proprietary
protocols. Above it are specialized protocols that form direct connections
between individual computers, tightly controlled applications, and
copyrighted content. But IP itself is wide open: specify the IP address of a
system you’d like your datagram to be delivered to, and dozens of
computers along the way will voluntarily help get it there. Most of the
time, no one asks for payment; no one inspects the contents; no one asks to
see your signed authorization. This makes IP not just a universal layer
sandwiched between more fragmented ones,175 but also a commons layer
sandwiched between more private ones.
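That simplicity shows in the datagram format itself: twenty bytes of fixed header fields are all IP needs to ask the network to move a payload between two addresses. The following sketch (the addresses are documentation examples; the field layout follows RFC 791) builds a valid IPv4 header, including its checksum:

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    # One's-complement sum of the header's 16-bit words (RFC 791 / RFC 1071).
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_ipv4_header(src: str, dst: str, payload_len: int) -> bytes:
    header = struct.pack(
        "!BBHHHBBH4s4s",
        (4 << 4) | 5,          # version 4, header length 5 x 32-bit words
        0,                     # type of service
        20 + payload_len,      # total datagram length
        0, 0,                  # identification, flags/fragment offset
        64,                    # time to live
        17,                    # protocol (17 = UDP)
        0,                     # checksum placeholder, filled in below
        bytes(int(o) for o in src.split(".")),
        bytes(int(o) for o in dst.split(".")),
    )
    return header[:10] + struct.pack("!H", ipv4_checksum(header)) + header[12:]
```

A receiver verifies the header by summing all ten words, checksum included; a valid header folds to zero.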
   It was an inspired design decision; IP has a lot of nice desiderata for a
commons. A Comedic commons should be as large as possible, as easy as
possible to join, as minimally demanding as possible on its participants, and
as flexible as possible in the uses to which it can be put. IP checks every
one of those boxes. Not only is it universal in the sense of being widely
used, it’s also universal in the sense of being able to run on any kind of
network.176 It’s also a remarkably simple protocol; all it does is move
datagrams from point A to point B.177 That makes it easier to implement
and reduces the need for explicit coordination, both factors that make it
easier to participate in the IP commons.178 Because IP’s simplicity also
precludes specialization for particular uses, it’s suitable for almost every
use.179 It can be a jack of all trades by being a master of none.180 The IP
Internet is thus both easy for private infrastructure owners to join and easy
for them to use profitably once they’ve joined.
   This leaves, however, the problem of strategic behavior identified by
Smith. That the Internet has a billion users is a sign of success, but that it
has hundreds of millions of computers is a sign of challenge—that’s a
massive amount of resources to be devoting to the Internet semicommons.
Every use of the IP commons imposes very real costs on the private
infrastructure beneath it, and the natural question is why those costs don’t

   175. See ZITTRAIN, supra note 2, at 67–71 (discussing “hourglass architecture” of IP).
   176. See D. WAITZMAN, RFC 1149, A STANDARD FOR THE TRANSMISSION OF IP
DATAGRAMS ON AVIAN CARRIERS (1990), http://tools.ietf.org/html/rfc1149 (explaining how
to implement IP using carrier pigeons).
   177. See POST, supra note 1, at 82–89 (discussing the simplicity of IP).
   178. See M. MITCHELL WALDROP, THE DREAM MACHINE: J.C.R. LICKLIDER AND THE
REVOLUTION THAT MADE COMPUTING PERSONAL 379–80 (2001) (discussing how IP’s
simplicity made it easier to connect networks and aided its adoption).
   179. See David Isenberg, Rise of the Stupid Network, J. HYPERLINKED ORG., June 1997,
http://www.hyperorg.com/misc/stupidnet.html (“The Stupid Network would let you send
mixed data types at will . . . . You would not have to ask your Stupid Network provider for
any special network modifications . . . . A rudimentary form of the Stupid Network—the
Internet—is here today.”).
   180. See J.H. Saltzer et al., End-to-End Arguments in System Design, 2 ACM
TRANSACTIONS ON COMPUTER SYS. 277, 278 (1984).

overwhelm it. Routers can become overwhelmed with heavy traffic and
drop packets; network links can become saturated; servers can crash.
   This isn’t merely a theoretical concern. Strategic behavior is everywhere
in the Internet semicommons. Network backbone operators routinely route
packets in ways designed to dump as much of the cost as possible on each
other.181 Virus, worm, and malware authors use the commons to deliver
their malicious software, hijack users’ private computers, and send out
spam—which in turn puts costs on private mail servers and actually
degrades the content commons by polluting it with unwanted, distracting
messages.182 We also see the wasteful precautionary costs identified by
Smith: spam filters,183 CAPTCHAs,184 firewalls,185 and so on are the
Internet equivalents of farmers sitting out in the rain watching to see where
the sheep are driven.
   This is all rather discouraging, but, as Galileo apocryphally said, eppur si
muove.186 The Internet does work, fortunes are made on it, and millions of
afternoons are enjoyably frittered away reading Twilight fan fiction. The
benefits from combining private and common uses online must outweigh
the costs, notwithstanding all of these abuses, or the Internet would not
exist.
   Layering is one reason that the costs of strategic behavior and fencing are
manageable.        IP’s simplicity creates a kind of forced sharing of

   181. For example, network A will hand off packets destined for network B’s users as
soon as possible, so that network B does the bulk of the work to deliver them. See JONATHAN
E. NUECHTERLEIN & PHILIP J. WEISER, DIGITAL CROSSROADS: AMERICAN
TELECOMMUNICATIONS POLICY IN THE INTERNET AGE 42–44 (2005).
   182. See ZITTRAIN, supra note 2, at 36–51. Of course, we might classify these content-
level costs as burdens on the private resources of users’ attention, but saying that this is a
cost imposed on the commons, even if less descriptively precise, is clearer in terms of
pinpointing the problem.
   183. See David Pogue, On the Job, a Spam Fighter Is Learning, N.Y. TIMES, Mar. 30,
2006, at C1 (describing the Spam Cube, a home device to filter spam). The Spam Cube, like
other spam-fighting technologies, is costly in two different ways. It costs $150, and along
with the spam it catches, it also blocks legitimate emails. Id.
   184. See ReCAPTCHA, What Is a CAPTCHA?, http://recaptcha.net/captcha.html (last
visited Apr. 10, 2010) (“A CAPTCHA is a program that can generate and grade tests that
humans can pass but current computer programs cannot.”). But CAPTCHAs are costly, too.
See BaltTech, Towson U., National Federation of the Blind Re-Invent CAPTCHA,
http://weblogs.baltimoresun.com/news/technology/2009/11/towson_u_national_federation_o.html
(Nov. 18, 2009, 8:18 EST) (quoting computer science professor Jonathan Lazar as saying,
“[b]asically, computer viruses are twice as successful as blind people on the old captchas.”).
   185. See WILLIAM R. CHESWICK ET AL., FIREWALLS AND INTERNET SECURITY: REPELLING
THE WILY HACKER, at xviii (2d ed. 2003) (defining “firewall gateway” as a dedicated
computer that is the only one on a network to communicate with the outside world); id. at
173–96 (describing how firewalls can provide specialized security). It’s costly to add a
dedicated computer to a network for no other purpose than security—to say nothing of the
expense of configuring and monitoring it, or buying books like Firewalls and Internet
Security.
   186. “And yet it moves.” STEPHEN HAWKING, ON THE SHOULDERS OF GIANTS: THE GREAT
WORKS OF PHYSICS AND ASTRONOMY 393 (2002) (quoting Galileo Galilei). “[M]ost
historians regard the story as a myth.” Id.

infrastructure for users. A former housemate used to saturate our Internet
connection running a peer-to-peer Gnutella node. It brought my Web
surfing to a crawl—but it also brought his Web surfing to a crawl. There
was no way for him to reach down further in the protocol stack and
prioritize his physical computer over mine. It wasn’t full internalization of
the costs he was creating for the rest of us—but it was enough to convince
him to moderate his file sharing once he figured out the connection.
   Layering also has salutary boundary-setting effects.187 Because the IP
layer hides those beneath it, it functions like scattering in preventing
commons users from strategically targeting specific private pieces of
infrastructure. It’s impossible to know with certainty the path that a packet
will follow, and if one link fails under overload, the packets will flow along
another route. You can graze your traffic across particular networks, but
you can’t easily park all of it in one spot. Conversely, John Gilmore’s quip
that “[t]he Internet interprets censorship as damage and routes around it”188
captures the point that it’s difficult to impinge on commons uses by
targeting specific pieces of the private infrastructure. Take out one node
and the Internet’s overall flows will be largely unaffected.
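The resilience described above can be illustrated with a deliberately toy sketch (my own illustration in Python; real Internet routing protocols such as OSPF and BGP are far more elaborate): when a node or link is taken out, traffic simply follows whatever working path remains.

```python
from collections import deque

def find_path(links, src, dst):
    """Breadth-first search for any working route from src to dst."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for neighbor in links.get(path[-1], []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no route at all

# A tiny network with two independent routes from A to D.
links = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(find_path(links, "A", "D"))  # ['A', 'B', 'D']

# "Take out one node": remove B entirely; traffic still gets through via C.
links = {"A": ["C"], "C": ["D"], "D": []}
print(find_path(links, "A", "D"))  # ['A', 'C', 'D']
```

The point of the sketch is only that no single private link is indispensable to the commons use: the overall flow survives the loss of any one piece of infrastructure that has an alternative around it.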
   Looking upwards in the protocol stack rather than down, as long as my
ISP and other infrastructure providers implement IP without violating its
layering abstractions—that is, as long as they don’t “look inside” the IP
packets—they have almost no choice but to provide a “neutral” network.189
Layering becomes an architectural constraint that prevents them from
selectively choosing to block, alter, or slow my communications. That
limits the power of these private owners to engage in self-interested
bargaining with commons users; they can’t go to Google and demand a
premium for allowing its traffic to pass, or slow down all video content, or
otherwise start tinkering with commons uses.190
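A stylized sketch may make the layering constraint concrete (again my own illustration, not real router code): a device that honors the IP abstraction forwards on header fields alone, so the payload—and thus whose traffic it is, or what application it serves—remains opaque bytes that the forwarder never inspects.

```python
def forward(packet, routing_table):
    """Forward a packet using only its IP-layer header.

    The payload is deliberately never examined: at this layer it is an
    uninterpreted blob, so the forwarder cannot tell Google's traffic
    from anyone else's, or video from text.
    """
    header, payload = packet  # payload stays an opaque bytes blob
    next_hop = routing_table[header["dst"]]
    return next_hop, (header, payload)

routing_table = {"203.0.113.9": "link-2"}
packet = ({"src": "198.51.100.4", "dst": "203.0.113.9", "ttl": 64},
          b"\x17\x03...")  # could be video, email, anything
hop, pkt = forward(packet, routing_table)
print(hop)  # link-2
```

Discriminating among commons uses would require reaching into the payload—precisely the step that respecting the layering abstraction rules out.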

                          B. User-Generated Content Sites
   Next, consider a large and important category of Internet activity: UGC
sites. All of these sites face the Internet’s semicommons problem in
miniature. They typically run on private infrastructure supplied by a single
entity, but anyone in the world can view and contribute to them.191 Some

   187. Solum & Chung, supra note 173, discuss at length the policy virtues of respecting
boundaries between layers.
   188. Peter H. Lewis, Limiting a Medium Without Boundaries, N.Y. TIMES, Jan. 15, 1996,
at D4.
   189. Solum & Chung, supra note 173, at 829–31, 936–42.
   190. This is not to enter the debates on network neutrality as a matter of policy. My point
is merely that layering as a form of boundary-setting limits certain forms of self-interested
behavior by private owners; evaluating whether this is a good thing or a bad thing would
require more analysis than can fit in this margin.
   191. This is another layered system, but note that the private/common division is different
than the one discussed in the previous section. Here, the system is held in common at the
content layer, but is essentially private at every lower layer.
2828                         FORDHAM LAW REVIEW                                     [Vol. 78

are tiny, like my blog—laboratorium.net—which runs on a server operated
by a group of my friends and has a few comments per day.192 Others are
gigantic, like YouTube, which runs on a massive Google server farm and
serves up over a billion video views daily.193
   Once more, we wouldn’t observe the semicommons form unless it were
worthwhile. If the juxtaposition of private and common uses weren’t
creating value, then either the private owners of the servers would turn them
off or the common users would stop participating.194 In view of Smith’s
analysis of semicommons incentives, we should therefore ask how these
sites produce value and how they keep costs under control.
   The first answer is that on the Web, normal commons use can often be a
source of value for server owners rather than a net cost. Consider my blog,
for which I pay about $250 a year. Each pageview and comment costs me
something, true, but it gives me more in return. By allowing readers to
access my blog, I spread my ideas to a larger audience. By allowing
comments, I learn from them. Even server owners motivated only by
money can reap value from free commons use. The secret, as the first
commercial bloggers discovered, is advertising.195 More users mean more
ad revenue, so that users are like pooping sheep: well worth the bother.196
   A site that is free to its users can take advantage of powerful Comedic
effects. Any nonzero price requires some form of signup, login, and billing;
in addition to being costly to implement, these exclusion mechanisms are

   192. The Laboratorium, http://laboratorium.net/ (last visited Apr. 10, 2010).
   193. See Posting of Chad Hurley to YouTube Blog, http://youtube-global.blogspot.com/
2009/10/y000000000utube.html (Oct. 9, 2009).
   194. In this respect, these particular Internet semicommons are more susceptible to
Demsetzian explanations than many offline property systems and legal regimes, where the
problem of collective action looms larger. See Harold Demsetz, Toward a Theory of
Property Rights, 57 AM. ECON. REV. PROC. 347 (1967) (describing evolution of property
rights as efficient response when value from more intensive use increases). Scholars,
however, have raised difficult questions about the mechanism by which this evolution would
take place. See, e.g., Saul Levmore, Two Stories About the Evolution of Property Rights, 31
J. LEGAL STUD. S421, S425–33 (2002) (noting ambiguity between optimistic Demsetzian
story of the evolution of efficient property regimes and pessimistic story about selfish
interest groups capturing value for themselves). A UGC site, however, as a resource, doesn’t
preexist the semicommons form (so that its users can hardly be accused of appropriating a
commons for their exclusive use), and its users make individual voluntary decisions to take
part when the rewards outweigh the costs (thus providing a straightforward mechanism for
the collective decision to use a particular governance regime). This isn’t to say that UGC
sites are free of interest-group dynamics, or that they don’t face collective action dilemmas,
only that their initial development of a property system may pose less of a puzzle than the
development of property systems in purely tangible online resources.
   195. See ROSENBERG, supra note 106, at 178–85.
   196. Sometimes, as with Twitter, the ad revenue isn’t there yet (and may not ever be, if
the skeptics are to be believed), but the prospect of monetizing the eyeballs justifies the up-
front expenses of building the community. See A & M Records, Inc. v. Napster, Inc., 114 F.
Supp. 2d 896, 902, 921–22 (N.D. Cal. 2000) (basing a holding of vicarious copyright
infringement on the argument that although Napster had no present revenue, a larger user
base would give it greater future revenue potential), aff’d in part, rev’d in part, 239 F.3d
1004 (9th Cir. 2001).

surprisingly strong psychological impediments to participation.197 That
means a huge, discontinuous jump in usage when access is truly open. On a
site where users interact with each other in creative ways, this spike in
usage has powerful feedback effects. My commenters don’t just respond to
me; they also riff on each other’s thoughts. It is, in short, the Comedic
story all over again: creating a community of sharers produces value for
everyone involved.
   The second piece of the puzzle is that the server owner doesn’t disappear
from the picture entirely. She retains residual exclusionary power.198 She
may not exercise it ex ante, at least in the first instance. But she can and
regularly does exercise it ex post, to target specific instances of abuse. If
YouTube users flag an inappropriate video, YouTube will yank it.199 If I
see a spam comment on my blog, I delete it. If the abuses are flagrant
enough, the user is likely to be kicked off the site entirely—and eventually,
to be blocked at the IP address level.200
   In other words, YouTube and I are paying the monitoring costs to watch
what commons users do on our private pieces of the Web. We’re using
governance to control behavior and self-help exclusion to enforce our
decisions. The openness of our sites creates a classically Tragic scenario;
we use our platform power to deter misuse.201 We’re willing to pay these
costs because the overall benefits to us of common usage are larger still.


   197. Cf. Anderson, supra note 154, at 146.
   198. Lest this seem unremarkable, keep in mind that it would be nearly inconceivable for
an Internet backbone provider to decide sua sponte that it needed to block traffic from a
particular IP address, and that when Comcast started blocking particular traffic, it drew an
FCC investigation and injunctive relief. Formal Complaint of Free Press & Pub. Knowledge
Against Comcast Corp. for Secretly Degrading Peer-to-Peer Applications, 23 F.C.C.R.
13,028 (2008).
   199. See Google Help, YouTube Glossary: Flag as Inappropriate,
http://www.google.com/support/youtube/bin/answer.py?hl=en&answer=95403 (last visited
Apr. 10, 2010).
   200. See, e.g., Wikipedia: Blocking IP Addresses, http://en.wikipedia.org/wiki/
Wikipedia:Blocking_IP_addresses (last visited Apr. 10, 2010).
   201. See Jonathan Zittrain, The Rise and Fall of Sysopdom, 10 HARV. J.L. & TECH. 495,
501–06 (1997) (discussing role played by amateur “sysops” of online forums, newsgroups,
and bulletin boards in fostering community). Zittrain’s message in 1997 was pessimistic; he
saw the sysop as a dying breed presiding over fragile communities. The Future of the
Internet is far more optimistic about the potential of bottom-up collaboration and altruistic
community creation in creating a healthy online society. One possible difference between
then and now, I would submit, is that the benefits of linking these communities together on
the Internet—putting the “commons” in “semicommons”—are much clearer today.
Zittrain’s invocation of the “sysop” also leads us off into the world of bulletin-board systems
(or “BBSes”). Time and space constraints don’t permit me to discuss them in detail as an
additional example of an online organizational form. Their basic model, however—privately
owned servers, connected to the telephone network, accessible to anyone who wished to dial
in via modem—fits the basic semicommons pattern described in this essay, and their history
also illustrates the applicability of Smith’s model. For more on BBSes, see generally BBS:
THE DOCUMENTARY (Bovine Ignition Systems 2005); Textfiles.com, History,
http://www.textfiles.com/history/ (last visited Apr. 10, 2010).

   The fact that we have this residual platform-based power and are willing
to use it creates its own countervailing dangers of strategic behavior. I
could delete comments from people who disagree with me. YouTube could
make it impossible for users to get their videos off the site. These are
private uses that impose costs on the common uses, and they’re well studied
in the literature (if not usually in these terms).202 Abating these costs is
itself a question in institution design. But note that within the
semicommons framework we can still describe the problem as overall cost
minimization across a number of different forms of strategic behavior and
strategic behavior prevention.
   A third critical point about these sites is that their success depends on
their users. A coherent and motivated user base can collectively exercise
governance to prevent abuse. Some users guard the private infrastructure
from commons overuse. Craigslist knows which posts are spam because its
users tell it.203 Other users defend the commons from its own enemies.
Wikipedia’s first line of defense against lies and propaganda is eagle-eyed
users who look for self-interested or bad-faith edits and undo them.204 A
strong user community can even defend the commons from the private
infrastructure owner, as Facebook discovered when it tried to introduce
privacy-invading advertising technologies.205
   Scale matters in this story. The Tragic story reminds us that smaller
groups will be better at self-monitoring and enforcement. And that’s
exactly what we see on the Internet, after a fashion. “The Internet” as a
whole doesn’t have a generic “flag this content for removal” button.
Instead, the pruning and weeding that help UGC sites flourish take place on
those sites. The division of the Web into distinct “sites” makes it easier for
close-knit communities to form.
   These local communities, in turn, are coupled to each other in important
ways.206 Individual blogs are connected to each other in a network of linking,


   202. See, e.g., Jack M. Balkin, Digital Speech and Democratic Culture: A Theory of
Freedom of Expression for the Information Society, 79 N.Y.U. L. REV. 1 (2004) (discussing
censorship powers of platform owners); James Grimmelmann, Saving Facebook, 94 IOWA L.
REV. 1137, 1192–95 (2009) (discussing platform lock-in). There’s also the larger question
of the proper division of value between private owner and commons users. One could argue
that the platform owner who becomes rich off of user contributions is engaged in a form of
digital exploitation. See, e.g., Søren Mørk Petersen, Loser Generated Content: From
Participation to Exploitation, FIRST MONDAY, Mar. 6, 2008, http://www.uic.edu/htbin/
cgiwrap/bin/ojs/index.php/fm/article/view/2141/1948. As this is a normative question, not
an analytic one, I put it aside for the time being.
   203. Craigslist, Flags and Community Moderation, http://www.craigslist.org/about/help/
flags_and_community_moderation (last visited Apr. 10, 2010).
   204. See ZITTRAIN, supra note 2, at 131–42 (discussing Wikipedia).
   205. Louise Story & Brad Stone, Facebook Retreats on Online Tracking, N.Y. TIMES,
Nov. 30, 2007, at C1.
   206. See generally DAVID WEINBERGER, SMALL PIECES LOOSELY JOINED: HOW THE WEB
SHOWS US WHO WE REALLY ARE (2002) (describing the Internet’s success in terms of this
loose coupling of small components in multiple domains).

quotation, and conversation.207 YouTube took off because it offered users
the trivially easy capacity to embed videos in other Web pages—that is, to
bridge the YouTube community and others.208 This is a modular structure:
tight community coupling within a site and looser coupling across sites.209
This arrangement makes each site’s Tragic internal governance problem
more manageable while also facilitating Comedic conversations across
sites.210
   The same is also true within large sites. Wikipedia has many
“WikiProjects”: groups of pages on a similar topic.211 They tend to have
common groups of editors who focus on them. Bingo: a small and close-
knit community, loosely coupled to others within Wikipedia. Similarly,
social network sites divide the world into small networks centered around
every user, forming overlapping communities. These Internet institutions
bridge the optimal scales for private and common uses.

                          C. Usenet and Boundary-Setting
   Semicommons theory explains the Internet’s failures as well as its
successes. Only those online institutions that can cost-effectively deter
strategic behavior at the interface between private and common will prosper
at the planetary scale of the Internet. Those that can’t will stagnate rather
than grow—or even collapse entirely under the strain of a worldwide
semicommons.
   As an example of a failed online semicommons, consider Usenet, a
distributed set of message boards.212 Its use of interconnected servers to

   207. See ROSENBERG, supra note 106, at 205–06; Posting of James Grimmelmann to
LawMeme, http://lawmeme.research.yale.edu/modules.php?name=News&file=print&sid=1155
(June 18, 2003, 4:03 EDT).
   208. See Posting of Deepak Thomas and Vineet Buch to Startup Review,
http://www.startup-review.com/blog/youtube-case-study-widget-marketing-comes-of-age.php
(Mar. 18, 2007).
   209. See SIMON, supra note 168, at 197–205 (describing common pattern of tightly
coupled modules themselves loosely coupled to each other); Mark S. Granovetter, The
Strength of Weak Ties, 78 AM. J. SOC. 1360 (1973) (describing power of loose links to bridge
different social groups).
   210. This point may have implications for Zittrain’s goal of stopping malware through
“suasion” and “experimentation.” ZITTRAIN, supra note 2, at 173. Zittrain’s discussion of
the challenges and goals of the StopBadware project clearly recognizes the dangers of both
too much and not enough private control, at multiple scales. Id. at 168–73. Sites
experimenting with security policies in an informed way are private and Tragic. Internet-
wide monitoring and information-sharing are common and Comedic. See id.
   211. See Wikipedia: WikiProject, http://en.wikipedia.org/wiki/WikiProject (last visited
Apr. 10, 2010) (“A WikiProject is a collection of pages devoted to the management of a
specific topic or family of topics within Wikipedia; and, simultaneously, a group of editors
who use those pages to collaborate on encyclopedic work.”).
   212. On the technology and operation of Usenet (sometimes also written as “USENET”),
see generally JENNY A. FRISTRUP, USENET: NETNEWS FOR EVERYONE (1994); MARK
HARRISON, THE USENET HANDBOOK: A USER’S GUIDE TO NETNEWS (1995); ED KROL, THE
WHOLE INTERNET: USER’S GUIDE & CATALOG (2d ed. 1994); TIM O’REILLY & GRACE
TODINO, MANAGING UUCP AND USENET (10th ed. 1992); and BRYAN PFAFFENBERGER, THE

create shared worldwide “newsgroups” made it a thriving semicommons
through the 1980s.213 But this same structure couldn’t cope with the
abuses caused by the Internet’s exponential takeoff in the 1990s.214 A
comparison of Usenet with e-mail and UGC sites shows that they did a
better job of boundary-setting.215 Semicommons theory explains how
different design choices can help one online institution succeed where
another fails.
                                1. How Usenet Works
  Individual Usenet users post and read messages on a local server on
which they have an account; these servers then exchange new messages



USENET BOOK: FINDING, USING, AND SURVIVING NEWSGROUPS ON THE INTERNET (1994).
For discussion of its culture and sociology, see generally MICHAEL HAUBEN & RHONDA
HAUBEN, NETIZENS: ON THE HISTORY AND IMPACT OF USENET AND THE INTERNET (1997);
HOWARD RHEINGOLD, THE VIRTUAL COMMUNITY: HOMESTEADING ON THE ELECTRONIC
FRONTIER 117–31 (1993); and CLAY SHIRKY, VOICES FROM THE NET 80–89 (1995). Within
the law review literature, see generally A. Michael Froomkin, Habermas@Discourse.net:
Toward a Critical Theory of Cyberspace, 116 HARV. L. REV. 749, 821–31 (2003); Paul K.
Ohm, On Regulating the Internet: Usenet, a Case Study, 46 UCLA L. REV. 1941 (1999);
David G. Post, Pooling Intellectual Capital: Thoughts on Anonymity, Pseudonymity, and
Limited Liability in Cyberspace, 1996 U. CHI. LEGAL F. 139, 163 n.54; and Charles D. Siegal,
Rule Formation in Non-hierarchical Systems, 16 TEMP. ENVTL. L. & TECH. J. 173, 181–83,
190–99 (1997). See also Eric Schlachter, War of the Cancelbots!,
http://eric_goldman.tripod.com/articles/cancelbotarticle.htm (last visited Apr. 10, 2010).
Usenet has made sporadic appearances in the case reports. Highlights with significant
factual discussion of Usenet include Arista Records LLC v. USENET.com, Inc., 633 F.
Supp. 2d 124, 129–31 (S.D.N.Y. 2009); Ellison v. Robertson, 189 F. Supp. 2d 1051, 1053–
54 (C.D. Cal. 2002), rev’d, 357 F.3d 1072 (9th Cir. 2004); ACLU v. Reno, 929 F. Supp. 824,
834–35 (E.D. Pa. 1996), aff’d, 521 U.S. 844 (1997); and Religious Tech. Ctr. v. Netcom On-
Line Commc’n Servs., Inc., 907 F. Supp. 1361, 1366 n.4, 1367–68 (N.D. Cal. 1995). Purists
may insist that “Usenet” refers only to one particular set of newsgroups, and that “Network
News” is the correct umbrella term that also includes local newsgroups and even a few
alternative hierarchies. See, e.g., KROL, supra, at 151–57. In practice, though, users often
also referred to these other hierarchies as “Usenet.” See id. at 452. Similarly, one could
technically distinguish between the higher-level protocol governing Usenet’s messages and
newsgroups, see M. HORTON & R. ADAMS, RFC 1036, STANDARD FOR INTERCHANGE OF
USENET MESSAGES (1987), http://tools.ietf.org/html/rfc1036 [hereinafter RFC 1036], and
the lower-level protocols governing how those messages are transferred from one computer
to another. See MARK R. HORTON, RFC 976, UUCP MAIL INTERCHANGE FORMAT STANDARD
(1986), http://tools.ietf.org/html/rfc976; BRIAN KANTOR & PHIL LAPSLEY, RFC 977,
NETWORK NEWS TRANSFER PROTOCOL (1986), http://tools.ietf.org/html/rfc977. But in
practice, the same social conventions causing users and administrators to standardize on the
one protocol also led them to standardize on the other. As Paul Ohm puts it, “Just as you can
get from downtown to Westwood without a car, you can communicate via Usenet without
NNTP [Network News Transfer Protocol]. But most people would not take this trip without
a car, just as most people do not use USENET except over NNTP.” Ohm, supra, at 1949–50
n.28 (citing RFC 1036, supra, § 4).
   213. See infra Part IV.C.1.
   214. See infra Part IV.C.2.
   215. See infra Part IV.C.3.

with each other via a peer-to-peer protocol.216 Each server talks only to a
few others, but most Usenet servers are linked together so that any given
message will eventually be propagated to all servers in the network.217
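The flooding scheme can be sketched in a few lines (a drastic simplification of the RFC 1036 algorithm; the names here are my own): each server offers every new article to its peers, and duplicate copies are recognized by message ID and dropped, so an article reaches the whole network while being stored only once per server.

```python
class Server:
    def __init__(self, name):
        self.name = name
        self.peers = []      # each server talks only to a few others
        self.articles = {}   # message ID -> article body

    def receive(self, msg_id, body):
        if msg_id in self.articles:
            return           # already seen: stop the flood here
        self.articles[msg_id] = body
        for peer in self.peers:
            peer.receive(msg_id, body)  # pass the new article along

# A triangle of servers with a redundant link: the article propagates
# to every server exactly once, whichever path delivers it first.
a, b, c = Server("A"), Server("B"), Server("C")
a.peers, b.peers, c.peers = [b, c], [a, c], [a, b]
a.receive("<post1@example>", "hello, sci.math")
print(all("<post1@example>" in s.articles for s in (a, b, c)))  # True
```

The redundancy is the same property John Gilmore celebrated: delete the article on one server and the other feeds will deliver it anyway.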
   This technical structure coexists with Usenet’s semantic structure: a
hierarchy of topical “newsgroups.”218 That hierarchy, established in 1987
in a coordinated event known as the “Great Renaming,” divides the Usenet
universe into “newsgroups” such as soc.culture.welsh (on Welsh culture)
and sci.math (on mathematics).219 Individual messages are typically posted
to a single newsgroup, but replicated across the whole network of
servers.220    The administrators of individual servers decide which
newsgroups to carry on their servers.221
   Usenet also sported a higher level of governance (of the sort detailed by
Ostrom)222 in its institutions for collective decision making about which
newsgroups to support.223 A centralized board coordinated the process: a
proposed new newsgroup (or proposed deletion of an old one) would be
publicly announced, discussed, and put to a vote. The certified results of
this process were generally accepted as legitimate by server operators: a
proposed newsgroup that won its vote would typically be added by enough
servers that it would achieve critical density in the network and connect
those users who wanted to join it.224
   Once again, the semicommons structure is evident. Each server is a
private use; each newsgroup is a common use. The two are inextricably
intertwined. The worldwide network of servers gives each newsgroup a


  216. This exchange originally took place through direct telephone-line connections
between Usenet servers; as the Internet became more widely available, the exchanges
gradually switched over to using the Internet for their transport. See HAUBEN & HAUBEN,
supra note 212, at 39–40, 44; RHEINGOLD, supra note 212, at 120–21.
  217. See RFC 1036, supra note 212, § 5 (describing algorithm for propagation of Usenet
messages through network).
  218. See KROL, supra note 212, at 153–54.
  219. See Henry Edward Hardy, The History of the Net (Sept. 20, 1993) (unpublished
master’s thesis, Grand Valley State University), available at http://w2.eff.org/Net_culture/
net.history.txt. On the topical nature of particular newsgroups, see generally the archive of
Usenet Frequently Asked Question (FAQ) files at www.faqs.org, which are organized by
newsgroup.
  220. See Ohm, supra note 212, at 1945–47 (describing posting to newsgroups); id. at
1949–50 (describing replication). On server operation, see generally O’REILLY & TODINO,
supra note 212.
  221. See KROL, supra note 212, at 157 (“Last, we must deal with (how can I write this
delicately?) censorship. Some administrators decide that some groups (especially in the alt
category) are not for consumption by the server’s clientele. So they choose not to carry
them.”). Since, by 2002, Usenet carried 1000 gigabytes of data per day, see Froomkin,
supra note 212, at 822, some prioritization of which groups to carry was a technical
necessity. See KROL, supra note 212, at 156 (“A server administrator may choose not to
accept a certain group because it is very active and eats up too much disk space.”).
  222. OSTROM, supra note 62, at 101–02 (discussing importance of “nested enterprises”).
  223. See Froomkin, supra note 212, at 823–24.
  224. Id. at 824–25.

global reach,225 but each server remains locally owned and operated. By
2002, Usenet was carrying over 1000 gigabytes of messages a day.226
That’s both a remarkable volume of shared content and a tremendous
technical burden on each server.
   Through the 1980s and into the mid-1990s, Usenet was a highly
successful semicommons, reaching more than 2.5 million people by
1992.227 Both boundary-setting devices—between servers and between
newsgroups—played a role. Dividing Usenet across servers (rather than
centralizing it) made it technically feasible and enabled it to grow by
accretion as individual server operators connected and joined the
semicommons.228 Meanwhile, dividing Usenet into newsgroups supported
the formation of smaller communities capable of exercising good internal
governance. Strong social norms of netiquette discouraged off-topic posts,
for example: a post about football in sci.math would draw a scolding.229
The norms of sci.math and the norms of soc.culture.welsh could be
different, making both stronger.
                                2. How Usenet Failed
  But past performance is no guarantee of future results, and Usenet didn’t
deal well with the Internet’s massive surge in popularity during the 1990s.
As the number of new Internet users increased exponentially year after year,
so did the technical and social strains on the Usenet semicommons.230 The
most visible form of abuse was spam: messages (usually, but not
exclusively, commercial) posted to thousands of newsgroups at once. It
placed enormous technical burdens on the private infrastructure and
substantially degraded the readability of the common newsgroups.231 Both



   225. See RHEINGOLD, supra note 212, at 130 (“Usenet is a place for conversation or
publication, like a giant coffeehouse with a thousand rooms; it is also a worldwide digital
version of the Speaker’s Corner in London’s Hyde Park, an unedited collection of letters to
the editor, a floating flea market, a huge vanity publisher, and a coalition of every odd
special-interest group in the world.”).
   226. Froomkin, supra note 212, at 822.
   227. See RHEINGOLD, supra note 212, at 120. The years of publication of the books on
Usenet cited in footnote 212 are telling: 1992, 1993, 1994, 1994, 1994, 1995, 1995, and
1997. See supra note 212.
   228. See RHEINGOLD, supra note 212, at 119 (“All you had to do to join Usenet was to
obtain the free software, find a site to feed you News and take your postings, and you were
in action.”).
   229. See S. HAMBRIDGE, RFC 1855, NETIQUETTE GUIDELINES (1995), http://tools.ietf.org/
html/rfc1855.
   230. In Internet folklore, at least, the tide turned in 1993, as commercial services like
Delphi and AOL began offering their millions of subscribers access to Usenet newsgroups,
an event known as the “Eternal September”—a never-ending stream of new users as
unfamiliar with Usenet’s norms as the annual crop of college first-years had been. See
WENDY M. GROSSMAN, NET.WARS 9–11 (1997).
   231. See Froomkin, supra note 212, at 825–29.

Usenet’s property boundaries and its institutions proved incapable of
dealing with this influx.
   Architecturally, Usenet got the property boundaries wrong. Each
meaningful community—a newsgroup—was split across many pieces of
private infrastructure—servers—and vice versa. This form of scattering
inhibited opportunism by censorious private server operators: other servers
could exchange a message even if one server deleted it,232 and its own users
could switch Usenet providers.233 But it also meant that neither private
server operators nor commons newsgroup communities were in a position
to deal effectively with spam. The private infrastructure owners couldn’t
individually take effective action against heavily cross-posted spam and
garbage; they each had to monitor all of Usenet, and a message deleted on
one server would still crop up on the others.
   Meanwhile, the community of readers of a particular newsgroup being
overrun also had no good tools to stop the flood. Social norms collapsed
under the first sustained assault from outsiders. In 1994, a pair of
immigration lawyers advertised their services on over 6000 newsgroups.234
The outcry was remarkable: not just online condemnation, but also self-
help denial-of-service e-mail attacks, threats to the lawyers’ ISP, and “huge
numbers of magazine[] subscriptions in [the lawyers’] names.”235 Legal
scholars have discussed the remarkable vehemence of this response,236 but
it’s more a sign of weakness than of strength. Effective social norms don’t
require such extensive enforcement, precisely because they’re effective.
The immigration lawyers were outsiders to the newsgroup communities
they spammed, afraid of no threats the newsgroups could wield.237
   In any event, later events established that social norms were essentially
ineffective against spam. Other spammers soon followed, in large
numbers.238 The green-card lottery spam was the proof of concept—the
lawyers behind it even published a book of advice for other would-be


  232. See Giganews, Usenet Interview with John Gilmore, http://www.giganews.com/
usenet-history/gilmore.html (last visited Apr. 10, 2010) (“For example, the quote I seem to
be most famous for, ‘The net treats censorship as damage and routes around it’, came
directly out of my Usenet experience. I was actually talking about the Usenet when I first
said it. And that’s how the Usenet works—if you have three news feeds coming in, and one
of those feeds censors the material it handles, the censored info automatically comes in from
the other two.”).
  233. See KROL, supra note 212, at 132 (“If you are offended [by your server
administrator’s refusal to carry a newsgroup], you have two choices: find another server or
beat up on your administrator.”).
  234. Siegal, supra note 212, at 192.
  235. Id.
  236. Froomkin, supra note 212, at 827–28 (“[M]ass self-defense . . . .”); Siegal, supra
note 212, at 192–93 (“[E]xtreme self-help . . . .”).
  237. See Siegal, supra note 212, at 193 (“Moreover, Canter and Seigel, unbowed by their
role as outcasts on the Internet, published a book telling other would be cyber-entrepreneurs
how to profit by following their example . . . .”).
  238. See id. (“[S]pamming is common . . . .”).

Usenet spammers, claiming they’d made over $100,000 at it.239 Efforts at
educating new users also proved futile.240 Without clear boundaries to
exclude outsiders and effective enforcement mechanisms, norm-based self-
governance runs into exactly the barriers described by Ostrom.
   In the late 1990s, as these failures were becoming obvious, Usenet
citizens (and a few legal scholars241) celebrated instead the possibility of
technical self-defense. One older technique was moderation: as on a
moderated listserv, messages would be sent to a newsgroup administrator,
and only posted after he or she approved them.242 While effective in
dealing with spam, moderation imposes substantial costs on the Comedic
potential of a group: it slows down messages as they wait for the
moderator’s approval, inhibiting conversation;243 it depends on the
volunteer labor of a moderator willing to perform this round-the-clock
job;244 and it allows the moderator to behave opportunistically, shaping or
censoring the flow of dialogue.245 Most Usenet groups were
unmoderated,246 and it’s not hard to think of a reason. Moderation doesn’t
scale.
   Newer techniques of technological self-defense fared little better.
Consider the killfile: a personal list of users whose messages you don’t
want to see.247 Your personal newsreading program hides those messages;
they remain on the server for others to read.248 The killfile sounds like a


   239. Id.
   240. Id. (“But these guidelines obviously do not deter those who seek either financial gain
or perverse pleasure from spamming.”).
   241. See Ohm, supra note 212, at 1941 (“[S]till too early for a legislature to
intervene . . . .”); Siegal, supra note 212, at 193–95.
   242. See FRISTUP, supra note 212, at 25; Ohm, supra note 212, at 1976–77.
   243. See Internet FAQ Archives, Moderated Newsgroups FAQ, http://www.faqs.org/
faqs/usenet/moderated-ng-faq/ (last visited Apr. 10, 2010) (“In general, hand-moderated
newsgroups often have some unavoidable delay . . . .”).
   244. See id. (“[A] typical setup for doing moderation would include . . . several hours of
spare time per week for at least a year . . . .”).
   245. See id. (“The discussion of differences between moderation and censorship has been
erupting several times a year in news.groups for about 15 years.”).
   246. See id. (noting a total of about 300 moderated Usenet newsgroups); cf. Froomkin,
supra note 212, at 822 (counting “thousands” of newsgroups overall).
   247. PFAFFENBERGER, supra note 212, at 193 (“The (Unfortunately Necessary) Art of the
Kill File . . . . The proper remedy for all the above-mentioned forms of net.abuse is the kill
file.”). Items could also be killed based on other criteria, such as the use of a particular
phrase. See KROL, supra note 212, at 166–69 (explaining use of keyword-based tagging and
killing to help newsgroup readers quickly browse topics). But the term “kill file” is most
colorfully used to describe user-based filtering. See ERIC S. RAYMOND, THE NEW HACKER’S
DICTIONARY 269 (3d ed. 1996) (“Thus to add a person (or subject) to one’s kill file is to
arrange for that person to be ignored by one’s newsreader in future. By extension, it may be
used for a decision to ignore the person or subject in other media.”).
   248. See Internet FAQ Archives, rn KILL file FAQ § 1, http://www.faqs.org/faqs/
killfile-faq/ (last visited Apr. 10, 2010) (noting in passing that there is a killfile “for each
user”). The killfile works a bit like a personal filter that sends all messages from your crazy
cousin straight to an archive folder. Compare the nearly identical interfaces for “Rules”
2010]                THE INTERNET IS A SEMICOMMONS                                       2837

perfectly speech-friendly system: the killfiler’s freedom not to read is
respected, and so is the killfilee’s freedom to speak and other users’
freedom to hear from her.249 And it does work well enough in small
communities for dealing with specific annoying users: a kind of virtual
silent treatment.250 But it fails in a larger commons. Most of the
spammers and trolls are new users you’ve never heard of (and will never
hear from again).251 Nor does killfiling reduce the technical burdens felt by
server operators—the servers still need to carry the messages, even though
users ignore them.
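   The mechanics are simple enough to sketch in a few lines of Python. This is an illustrative reconstruction for exposition, not any actual newsreader's code; the addresses and message fields are invented:

```python
# Illustrative sketch of client-side killfile filtering (a reconstruction
# for exposition, not any real newsreader's code). The messages all stay
# on the server; the reader's own software merely hides the matches.

KILLFILE = {"spammer@example.com", "troll@example.net"}  # senders to ignore

def visible(messages, killfile=KILLFILE):
    """Return only the messages whose sender is not in the killfile."""
    return [m for m in messages if m["from"] not in killfile]

group = [
    {"from": "friend@example.org", "subject": "Re: aquaria"},
    {"from": "spammer@example.com", "subject": "GREEN CARD LOTTERY"},
]
readable = visible(group)  # hides the spam -- for this reader alone
```

   The sketch makes the structural weakness plain: the filter is purely local, so every reader must compile a killfile of his or her own, and a spammer posting under a fresh address sails past all of them.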
   Cancelbots failed, too. They take their name from the fact that a Usenet
post author can send a follow-up message to “cancel” her original post,
thereby deleting it.252 These messages are easily forged, leading to a self-
help mechanism for dealing with spam and abuse: just forge a cancel
message for the offending post.253 As the spam problem grew, vigilante
Usenet users started automating the cancels, using programs called
“cancelbots.”254 In practice, though, cancelbots didn’t so much end the
Usenet spam wars as escalate them.255 Spammers vied to send out ads
faster than the vigilantes could cancel them; this competition was a wasteful
arms race.256 Worse, spammers themselves could use cancels against their


(including killing, marking as read, and sorting messages) in Unison (a newsreader) and
Mail.app (an e-mail client) for Mac OS X.
   249. See SHIRKY, supra note 212, at 82 (“Kill files perfectly illustrate the burden placed
on the reader on Usenet where freedom of speech is as absolute as its gets anywhere. For all
intents and purposes, anyone can say anything to anyone. If a certain kind of speech causes
upset, it is usually up to the reader not to read posts about those subjects or mail from those
people.”).
   250. This observation is based on the personal experience of the author. I would rather
not, for reasons that should be obvious, name the specific newsgroups and mailing lists on
which I have resorted to using a killfile.
   251. See, e.g., James “Kibo” Parry, Killfiles and You, http://www.kibo.com/kibokill/ (last
visited Apr. 10, 2010) (providing detailed suggestions for efficient use of a killfile). Note
the assumption that filtering out specific unwanted users will not suffice to make a
newsgroup readable; more detailed filtering is required.
   252. See Siegal, supra note 212, at 194; RFC 1036, supra note 212, § 3.1; Internet FAQ
Archives, Cancel Messages: Frequently Asked Questions, Part 2/4 (v1.75),
http://www.faqs.org/faqs/usenet/cancel-faq/part2/ (last visited Apr. 10, 2010) [hereinafter
Cancel FAQ, Part 2/4].
   253. See Siegal, supra note 212, at 194; Internet FAQ Archives, Cancel Messages:
Frequently Asked Questions, Part 1/4 (v1.75), at I–II, http://www.faqs.org/faqs/usenet/
cancel-faq/part1/ (last visited Apr. 10, 2010) (discussing “third-party” cancels, including
“forged” cancels).
   254. Cancel FAQ, Part 2/4, supra note 252, at IV.C (“A cancelbot is a program that
searches for messages matching a certain pattern and sends out cancels for them; it’s
basically an automated cancel program, run by a human operator.”).
   255. See GROSSMAN, supra note 230, at 75–78 (discussing technical back-and-forth
between message posters and message cancelers).
   256. See Froomkin, supra note 212, at 829–31 (discussing “Usenet Death Penalty” in
which a site considered to be too lax in stopping spam “has every single Usenet post
originating from it immediately canceled or at least not forwarded. Thus, every person using
that ISP loses the ability to post to Usenet regardless of his or her guilt or, in most cases,

enemies257 (leading to the use of pseudonymous anti-spam entities like the
Cancelmoose[tm]).258 Worse still, griefers259 could use cancels against
completely innocent Usenet posters260—so that many server administrators
simply ignored cancels entirely.261
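   The vulnerability is easy to see in the message format itself. The following Python sketch builds a minimal cancel article in the general shape that RFC 1036 describes (headers simplified for illustration); note that nothing in it authenticates the sender:

```python
# Sketch of a Usenet "cancel" control message in the general shape
# described by RFC 1036 (simplified for illustration). The protocol
# never verifies that the From: line matches the original author,
# which is what made forged cancels -- and cancelbots -- possible.

def cancel_message(sender, target_message_id):
    """Build a minimal cancel article for the given message ID."""
    return (
        f"From: {sender}\n"
        f"Newsgroups: news.admin.net-abuse.misc\n"
        f"Subject: cmsg cancel {target_message_id}\n"
        f"Control: cancel {target_message_id}\n"
        "\n"
        "Cancelled as spam.\n"
    )

# Anyone can put anyone's address in From:, so a third-party "forged"
# cancel is indistinguishable from one issued by the original author.
forged = cancel_message("vigilante@example.org", "<spam123@example.com>")
```

   Because a server honoring cancels has no way to tell a vigilante, a griefer, and the original author apart, the mechanism was only as trustworthy as the least trustworthy person willing to use it.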
   Perhaps some larger institution could have developed coherent
cancelation policies and consistently applied those policies across Usenet’s
mishmash of servers and newsgroups.262 But Usenet’s existing institutions
were too weak and too distrusted to develop and enforce such policies on
the diverse and dispersed Usenet community.263 Different newsgroups had
different norms; different server operators had different appetites for
ongoing governance work.264 It was a classic collective action problem:

innocence”); Cancel FAQ, Part 2/4, supra note 252, at IV.E (“Giving out a cancelbot is like
handing out loaded guns with no safeties.”).
  257. See Cancel FAQ, Part 2/4, supra note 252, at V.D (discussing spammers who used
cancels as an offensive weapon, such as “Krazy Kevin,” who “cancelled many posts on
news.admin.net-abuse.misc concerning his spams” and “Crusader,” who tried to prevent
investigation of a neo-Nazi mass email by cancelling Usenet messages discussing it).
  258. See Post, supra note 212, at 163 n.54, 166 (discussing pseudonymity of
Cancelmoose[tm]); Internet FAQ Archives, Net Abuse FAQ, § 2.9, http://www.faqs.org/
faqs/net-abuse-faq/part1/ (last visited Apr. 10, 2010) (“Cancelmoose[tm] is, to misquote
some wise poster, ‘the greatest public servant the net has seen in quite some time.’ Once
upon a time, the ‘Moose would send out spam-cancels and then post notice anonymously to
news.admin.policy, news.admin.misc, and alt.current-events.net-abuse. The ‘Moose stepped
to the fore on its own initiative, at a time (mid 1994) when spam-cancels were irregular and
disorganized, and behaved altogether admirably—fair, even-handed, and quick to respond to
comments and criticism, all without self-aggrandizement or martyrdom. . . . Nobody knows
who Cancelmoose[tm] really is, and there aren’t even any good rumors.”).
  259. See Julian Dibbell, Griefer Madness, WIRED, Feb. 2008, at 90, 92 (defining “griefer”
as “an online version of the spoilsport—someone who takes pleasure in shattering the world
of play itself”).
  260. See Cancel FAQ, Part 2/4, supra note 252, at V.D (listing “rogue cancellers of
various skill, competence, and intelligence”). Notable examples include Ellisd, who tried on
moral grounds to cancel all messages posted to alt.sex, and the so-called CancelBunny,
which tried to cancel posts containing the scriptures of Scientology. Id.
  261. Froomkin, supra note 212, at 829.
  262. See id. at 828–31 (discussing attempt by “Internet vigilantes” to coordinate their
efforts through the news.admin.net-abuse newsgroup and impose collective punishments on
servers deemed to be excessively spam-friendly and discussing debates over legitimacy and
existence of consensus to act against particular spammers).
  263. See id. at 823–25 (discussing difficulty of coordinating process of selecting which
newsgroups to carry); Hardy, supra note 219 (discussing how dissatisfaction with decisions
by administrators of “backbone cabal” systems not to carry newsgroups discussing sex or
drugs led to creation of an alternative hierarchy for distribution of news and abdication of
previous coordinators of newsgroup-creation process); Lee S. Bumgarner, The Great
Renaming FAQ: Part 4, http://www.linux.it/~md/usenet/gr4.htm (last visited Apr. 10, 2010)
(discussing near “constitutional crisis” on Usenet, including forged votes, over whether to
create a newsgroup devoted to discussion of aquaria); Giganews, 1987: The Great
Renaming—Page 2, http://www.giganews.com/usenet-history/renaming-2.html (last visited
Apr. 10, 2010) (discussing controversy over Great Renaming and suspicion of the
administrators who pushed it through).
  264. See Froomkin, supra note 212, at 823 (noting that “backbone cabal” systems carried
disproportionate share of Usenet traffic); id. at 824–25 (observing that “a large number,
perhaps a majority, of sites had effectively delegated administration of the newsgroup

redesigning Usenet’s technical protocols would have required widespread
user and server-owner agreement—but that would also have meant giving
up some of the private control and commons freedom that these groups
prized about Usenet.
   In the end, Usenet’s distributed openness left it vulnerable to exactly the
pressures Smith identifies: griefers used the commons to strategically
target private users, and spammers used the commons without heeding the
effects on private infrastructure.265 Usenet itself is not dead. One can still
go to Google Groups or Giganews and participate in ongoing conversations
in groups with strong norms that allowed them to weather the storm. But it
has nowhere near the relative importance that it once did to the life of the
Internet. ISPs are gradually dropping their support for Usenet
newsgroups,266 and it seems unlikely that the system will ever meaningfully
rise again.267
                3. Why E-mail Succeeded Where Usenet Failed
   This diagnosis—bad boundary-setting—is specific to Usenet. The entire
Internet suffers from spam268—any sufficiently advanced technology is
indistinguishable from a spam vector.269 Spam is the most characteristic
form of strategic behavior in the Internet semicommons: a commons use
that imposes serious costs both on commons and private use.270 But other
applications have managed to cope with the spam problem—because their
boundaries are drawn in ways that permit more effective monitoring and
enforcement.
   Contrast Usenet, for example, with the UGC sites described in the
previous section. As detailed above, these sites align commons community
with private server infrastructure, giving them governance and exclusion

creation process to one person” who was willing to “take the time to figure out what is a
legitimate group . . . and what is a practical joke”).
   265. See, e.g., Sascha Segan, R.I.P. Usenet, PC MAG.COM, July 31, 2008,
http://www.pcmag.com/article2/0,2817,2326849,00.asp (“[S]ervice providers sensibly
started to wonder why they should be reserving big chunks of their own disk space for
pirated movies and repetitive porn.”).
   266. See, e.g., Declan McCullagh, N.Y. Attorney General Forces ISPs To Curb Usenet
Access, CNET NEWS, June 10, 2008, http://news.cnet.com/8301-13578_3-9964895-38.html.
   267. See Posting of Kevin Poulsen to Epicenter, http://www.wired.com/epicenter/
2009/10/usenet/ (Oct. 7, 2009, 12:34 PST).
   268. Clay Shirky has written that “[s]ocial software is stuff that gets spammed.” Posting
of Clay Shirky to Many2Many, http://many.corante.com/archives/2005/02/01/
tags_run_amok.php (Feb. 1, 2005).
   269. As of this writing, Wikipedia discusses “e-mail spam . . . instant messaging spam,
Usenet newsgroup spam, Web search engine spam, spam in blogs, wiki spam, online
classified ads spam, mobile phone messaging spam, Internet forum spam, junk fax
transmissions, social networking spam, and file sharing network spam.” Spam (electronic),
Wikipedia, http://en.wikipedia.org/wiki/Spamming (last visited Apr. 10, 2010). Just about
anything worth using online is spammed.
   270. See Chris Kanich et al., Spamalytics: An Empirical Analysis of Spam Marketing
Conversion, 52 COMM. ACM 99 (2009).

advantages that newsgroups lacked.271 But a UGC site configured as a
discussion board behaves—from a user perspective—almost exactly like a
newsgroup.272 Together with blogs and other similar social software, these
Web-based discussion boards have taken over many of the community-
forum roles that Usenet newsgroups previously played. Now that dispersed
servers are no longer technically necessary—as they were in the days before
the Internet273—the dedicated website is a superior institutional form from a
semicommons perspective.
   Or contrast Usenet with e-mail. While e-mail spam is certainly a serious
and costly problem, e-mail has nonetheless been one of the Internet’s great
success stories. This success is all the more remarkable, given that a decade
and a half ago, e-mail and Usenet looked very similar.274 They were started
within a few years of each other, and they’re both text-based, Internet-wide
communications systems that allow users to communicate with each other
through a peer-to-peer process of message exchange.275 And yet the e-mail
of 2010 is essential to the Internet as we know it and is used by almost
everyone; the Usenet of 2010 is an archaic survival used by small groups of
enthusiasts.
   What happened? E-mail got its property boundaries right. Usenet was
created with the expectation that users throughout the network would share
the same newsgroups; its design works to coordinate everyone’s
experiences.276 By contrast, e-mail cares only about delivery: getting a
particular message to a particular recipient.277 There are no e-mail
equivalents to newsgroups—coordinated entities that all users see in a
substantially identical form. Each e-mail server is a dedicated piece of
infrastructure designed to enable incoming and outgoing e-mail for its own



   271. See supra Part IV.B.
   272. See Usenet, Wikipedia, http://en.wikipedia.org/wiki/Usenet (last visited Apr. 10,
2010) (“Usenet . . . is the precursor to the various Internet forums that are widely used
today . . . .”).
   273. See Giganews, Usenet Newsgroups History, http://www.giganews.com/usenet-
history/index.html (last visited Apr. 10, 2010) (describing switch from UUCP to NNTP as
being designed to take advantage of “cutting-edge networking concepts” including the
“always-on” Internet). Without the always-on Internet, unless most users are willing to pay
long-distance phone charges to connect, they need to have servers located near them.
   274. Krol, writing in 1994, thought that e-mail and Usenet each deserved a chapter.
Indeed, he gave the Web roughly the same amount of space he gave to Usenet. KROL, supra
note 212, at 101–48 (e-mail); id. at 151–87 (Usenet); id. at 287–322 (Web).
   275. Usenet was born in 1979, WALDROP, supra note 178, at 427–28, modern SMTP-
based e-mail in 1983, id. at 465.
   276. See, e.g., MARK R. HORTON, RFC 850, STANDARD FOR INTERCHANGE OF USENET
MESSAGES § 3.4 (1983), http://tools.ietf.org/html/rfc850 (“This message removes a
newsgroup with the given name. . . . [T]he newsgroup is removed from every site on the
network . . . .”).
   277. See, e.g., RFC 821, supra note 145, § 3.2 (discussing forwarding of message by
intermediate relays, with no expectation that they will retain copies for themselves or
transmit to other, unspecified recipients).

users.278 This difference means that e-mail servers have defensible borders.
I can install a spam filter without disrupting any e-mail except that to and
from users on my piece of the network.279 That lets me experiment with
local anti-spam policies without needing anyone else’s permission or
cooperation.280 Users, in turn, can choose e-mail providers based on the
quality of their spam filtering.281
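   The Bayesian approach cited in note 280 illustrates just how local this experimentation can be. The sketch below scores a message from per-token spam probabilities; the numbers here are invented for illustration, where a real filter would train them from the server's own mail:

```python
# Toy naive-Bayes-style spam scoring in the spirit of the proposal cited
# in note 280. The per-token probabilities are invented for illustration;
# a real filter trains them from the operator's own mail corpus.
import math

SPAMMINESS = {"viagra": 0.99, "lottery": 0.95, "meeting": 0.10}

def spam_probability(tokens):
    """Combine per-token P(spam) estimates via summed log-odds."""
    log_odds = 0.0
    for t in tokens:
        p = SPAMMINESS.get(t.lower(), 0.4)  # mildly "ham" default for unknowns
        log_odds += math.log(p / (1 - p))
    return 1 / (1 + math.exp(-log_odds))

# The decisive point for the semicommons: this policy runs on one server
# and touches only that server's users. No network-wide agreement is
# needed to deploy, tune, or abandon it.
flagged = spam_probability(["viagra", "lottery"]) > 0.9  # True with these toy numbers
```

   Contrast Usenet, where an equally aggressive filtering policy could be imposed only by canceling messages out from under every other server on the network.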
   In a very important sense, e-mail is less ambitious than Usenet. E-mail
may be a common protocol open to everyone, and most e-mail servers may
be “common” in the sense that anyone can send mail to users on them, but
e-mail itself is deeply nonpublic. A great e-mail message can only be
widely shared through successive forwarding. Some people who might
have benefited from it won’t ever be on the cc: lists. That’s a loss to the
commons, and it’s a reason that e-mail coexists with all sorts of systems
designed to offer more community, like mailing lists and the discussion
boards we’ve already met. But the price we pay for needing to turn
elsewhere for a fuller commons is that e-mail actually works.

                                    V. CONCLUSION
   In the scholarly debates over the significance of the Internet, the private-
versus-common dichotomy looms large. Triumphalists proclaim that the
Internet creates new forms of collaboration and that the commons is the
way of the future. Skeptics respond that the stability and sustainability of
the Internet depend on private ownership. These are the Comedic and
Tragic stories, and they animate scholarly controversies in
telecommunications, intellectual property, privacy, intermediary regulation,
virtual worlds, and almost every other corner of Internet law.
   In addition to its analytical virtues in explaining why some Internet
systems thrive and others fail, semicommons theory also speaks to these
debates. It reminds us not to take the seeming schism between “private”
and “common” too seriously. The greatest commons the world has ever
seen is built out of private property; the highest, best, and most profitable


   278. See generally BRYAN COSTALES & ERIC ALLMAN, SENDMAIL (3d ed. 2002)
(describing configuration of e-mail servers).
   279. See Froomkin, supra note 212, at 833 (“[W]e are exercising our right to refuse traffic
from anyone we choose. We choose not to accept any traffic at all from networks who are
friendly in any way to spammers. This is our right as it would be within anyone’s rights to
make the same choice (or a different one, so long as only their own resources were affected
by their choice).” (emphasis added) (quoting Paul Vixie, What Is an Open Relay?,
http://www.mail-archive.com/imail_forum@list.ipswitch.com/msg11527.html (last visited
Apr. 10, 2010))).
   280. See, e.g., PAUL GRAHAM, HACKERS AND PAINTERS: BIG IDEAS FROM THE COMPUTER
AGE 121–29 (2004) (proposing Bayesian filtering and explaining that the author had already
implemented such a system on his own).
   281. See Gmail Uses Google’s Innovative Technology To Keep Spam Out of Your Inbox,
http://mail.google.com/mail/help/fightspam/spamexplained.html (last visited Apr. 10, 2010)
(using high-quality spam filtering as advertisement for Gmail).

use of that property is to create a commons. Private and common need each
other, and we need them both on the Internet. Our task is not to choose
between them but to find ways to make them work well together.282




  282. Cf. LAWRENCE LESSIG, REMIX: MAKING ART AND COMMERCE THRIVE IN THE HYBRID
ECONOMY (2008) (arguing for coexistence of and collaboration between private and common
forms in the law of intellectual property); Heverly, supra note 141, at 1184–85.