Copyright for Literate Robots
James Grimmelmann*
I. INTRODUCTION
II. HUMAN COPYRIGHT
III. ROBOTIC COPYRIGHT
A. NON-EXPRESSIVE READING
B. BULK READING
C. BEYOND FAIR USE
IV. POSTHUMAN COPYRIGHT
V. CONCLUSION
I. INTRODUCTION
Almost by accident, copyright law has concluded that it is for humans
only: reading performed by computers doesn’t count as infringement.
Conceptually, this makes sense: Copyright’s ideal of romantic readership
involves humans writing for other humans. But in an age when more and
more manipulation of copyrighted works is carried out by automated
processes, this split between human reading (infringement) and robotic
reading (exempt) has odd consequences: it pulls us toward a copyright system
in which humans occupy a surprisingly peripheral place. This Article
describes the shifts in fair use law that brought us here and reflects on the role
of robots in copyright’s cosmology.
* Professor of Law, University of Maryland Francis King Carey School of Law. My thanks
to Aislinn Black, Annemarie Bridy, Jake Linford, Fred von Lohmann, Tom Rubin, Matthew Sag,
Evan Selinger, and the participants in the 2015 Works in Progress Intellectual Property
Colloquium for their suggestions. After January 1, 2016, this Article is available for reuse under
the Creative Commons Attribution 4.0 International license, https://creativecommons.org/
licenses/by/4.0.
II. HUMAN COPYRIGHT
Quietly, invisibly, almost by accident, copyright has concluded that
reading by robots doesn’t count. Infringement is for humans only; when
computers do it, it’s fair use. This is an article about how it happened and
some of the implications.
To understand robotic readership, we should start by talking about
human authorship,1 or more specifically, the ideal of “romantic” authorship.2
The name is not meant to suggest that there is something swoon-inducing about
picking up a pen, but rather that the sort of creativity copyright concerns itself
with is the product of a specific human mind. To quote a famous passage from
Justice Holmes, “The copy is the personal reaction of an individual upon
nature. Personality always contains something unique. It expresses its
singularity even in handwriting, and a very modest grade of art has in it
something irreducible, which is one man’s alone. That something he may
copyright . . . .”3
Human readership, on this view, is engagement with an author’s
expression. Copyright insists, for example, that substantial similarity for
infringement purposes is a matter of readers’ perceptions of works, rather
than inhering in the works themselves.4 To quote an equally famous passage
from Judge Learned Hand, a defendant’s work infringes on the plaintiff’s if
“the ordinary observer, unless he set out to detect the disparities, would be
disposed to overlook them, and regard their aesthetic appeal as the same.”5
In an important sense, copyright embraces an ideal of romantic readership
that is the dual of romantic authorship. What readers are deemed to care about in a copyrightable work of authorship—what makes it valuable to them as copyright’s ideal readers—is the author’s originality.6
1. I will use “reading” generically to refer to the whole range of ways in which one can
experience a work: reading, listening, watching, glancing, observing from all angles, and so on.
For reasons that will become clear, textual works are at the heart of the transformation this Article
traces. I will also use “robot” to refer to computer programs as well as mechanical devices; that
usage fight has already been lost.
2. See, e.g., MARK ROSE, AUTHORS AND OWNERS: THE INVENTION OF COPYRIGHT (1993)
(discussing the rise of authorship and literary property in England); James Boyle, A Theory of Law
and Information: Copyright, Spleens, Blackmail, and Insider Trading, 80 CALIF. L. REV. 1413, 1467–70
(1992) (discussing the role of “originality” in American copyright law); Peter Jaszi, Toward a
Theory of Copyright: The Metamorphoses of “Authorship,” 1991 DUKE L.J. 455 (discussing use of
concept in contemporary copyright law); Martha Woodmansee, The Genius and the Copyright:
Economic and Legal Conditions of the Emergence of the “Author,” 17 EIGHTEENTH-CENTURY STUD. 425
(1984) (tracing the historical emergence of the romantic author ideal in Germany).
3. Bleistein v. Donaldson Lithographing Co., 188 U.S. 239, 250 (1903).
4. See Swirsky v. Carey, 376 F.3d 841, 845 (9th Cir. 2004) (explaining that “subjective
intrinsic test” of similarity “must be left to the jury”); Jeanne C. Fromer & Mark A. Lemley, The
Audience in Intellectual Property Infringement, 112 MICH. L. REV. 1251, 1267–73 (2014).
5. Peter Pan Fabrics, Inc. v. Martin Weiner Corp., 274 F.2d 487, 489 (2d Cir. 1960).
Hand’s
“aesthetic appeal” to readers is Holmes’s author’s “personal reaction of an
individual upon nature.”7 This is why similarities to the unoriginal portions of
a plaintiff’s work cannot support an infringement action, even if they are what
make the work distinctive or drive its sales.8 Copyright’s romantic readers are
drawn to a work because something of the author’s unique humanity (as
expressed in the work) resonates with their own.
In a world of books and other pre-digital technologies, “copyright . . . left
reading, listening, and viewing unconstrained.”9 Ordinary acts of reading did
not result in any new copies, and hence did not trigger any of the copyright
owner’s exclusive rights; nor did readers have access to technologies that
would have made copying easy.10 The boundary between authors and readers
was clear and simple: Authors made copies regulated by the copyright system,
while readers did not make copies and existed outside its formal bounds.
Modern media technologies from the VCR onwards have made reader
copying much easier, and digital media technologies often make copies as
part of the ordinary reading or playback process.11 The result is that readers
now regularly attract copyright’s attention; fair use has stepped in to ensure
that ordinary acts of reading remain noninfringing.12
Now for the third participant in copyright’s eternal triangle. “One who
has slavishly or mechanically copied from others may not claim to be an
author.”13 We have another name for a “slavish copyist”: an infringer. Authors
create; readers read; copyists infringe. But this is not quite all, because the
line between infringer and author is contestable. It is one thing to say that a
pirate printer reaps where she has not sown, but what about the writer of a
critical review? She is both a copyist and a creator. Whenever copyright can
recognize in a copyist the same attributes it admires in authors, it resolves this tension in her favor by means of fair use.14
6. See Arnstein v. Porter, 154 F.2d 464, 473 (2d Cir. 1946) (phrasing the infringement test
as “whether defendant took from plaintiff’s works . . . what is pleasing to the ears of lay listeners,
who comprise the audience for whom such popular music is composed”).
7. See Bleistein, 188 U.S. at 250.
8. See, e.g., Kohus v. Mariol, 328 F.3d 848, 855 (6th Cir. 2003) (stating that a court in an
infringement case must “filter out the unoriginal, unprotectible [sic] elements” of the plaintiff’s
works before assessing similarity).
9. Jessica Litman, Lawful Personal Use, 85 TEX. L. REV. 1871, 1882 (2007).
10. See 17 U.S.C. § 106 (2012) (listing exclusive rights, with “reading” conspicuously absent).
11. See generally Aaron Perzanowski, Fixing RAM Copies, 104 NW. U. L. REV. 1067 (2010).
12. See Litman, supra note 9, at 1897–903; Aaron Perzanowski & Jason Schultz, Copyright
Exhaustion and the Personal Use Dilemma, 96 MINN. L. REV. 2067 (2012). The leading case is Sony
Corp. of America v. Universal City Studios, Inc., 464 U.S. 417 (1984). See also Fox Broad. Co. v. Dish
Network L.L.C., 747 F.3d 1060 (9th Cir. 2013); Recording Indus. Ass’n of Am. v. Diamond
Multimedia Sys., Inc., 180 F.3d 1072 (9th Cir. 1999).
13. L. Batlin & Son, Inc. v. Snyder, 536 F.2d 486, 490 (2d Cir. 1976) (quoting 1 MELVILLE
B. NIMMER, NIMMER ON COPYRIGHT § 6, at 10.2 (1975)).
After a few detours along the way,
the courts have settled on asking whether the defendant’s use is
“transformative” of the plaintiff’s expression.15 In the words of Judge Pierre
Leval, who articulated the concept:
The use must be productive and must employ the quoted matter in
a different manner or for a different purpose from the original. . . .
If . . . the secondary use adds value to the original—if the quoted
matter is used as raw material, transformed in the creation of new
information, new aesthetics, new insights and understandings—this
is the very type of activity that the fair use doctrine intends to protect
for the enrichment of society.16
Fair use in this vein turns on whether the defendant’s use qualifies her as
an author in her own right, one who stands on her own creative feet in crafting
a work whose appeal to audiences derives from her own expression, rather
than from the expression of the pre-existing materials she has recast and
adapted.
This is the traditional shape of copyright: it protects humans writing for
humans. Transformative fair users are simultaneously readers and authors;
human authorship is ultimately about human readership. Some author-
focused accounts of copyright downplay the reader’s agency in this
engagement: she is treated “as a passive consumer of copyrighted works as
entertainment commodities . . . [who is] no different from the consumer of
any other good.”17 But other accounts recognize that readers actively engage
with works: they choose what, when, and how to read; they communicate with
others with and about works; and they express themselves using works in ways
that fall short of full authorship in the transformative-use sense.18 Scholars
have described the richness of readers’ experiences, emphasizing the ways in
which reading is a human activity: it engages our faculties as thinking, feeling,
embodied beings, and it is crucial to our development as fully realized and
socially embedded individuals.19 These scholars agree that engagement with expression is the core of copyright; the difference is that they describe how copyright law can stand in the way of this engagement rather than promoting it.20
14. See, e.g., ABRAHAM DRASSINOWER, WHAT’S WRONG WITH COPYING? 78 (2015) (“[T]he
defense is not about undoing or overlooking a wrong for reasons extraneous to authorship
itself. . . . It is as if, upon hearing the plaintiff’s complaint, the defendant were to say: ‘. . . I am
equally an author.’”).
15. Authors Guild v. Google, Inc. (Authors Guild II), 804 F.3d 202, 214–15 (2d Cir. 2015).
16. Pierre N. Leval, Commentary, Toward a Fair Use Standard, 103 HARV. L. REV. 1105, 1111
(1990).
17. Joseph P. Liu, Copyright Law’s Theory of the Consumer, 44 B.C. L. REV. 397, 402 (2003)
(describing this as “the couch potato view” of readership).
18. Id. at 406–20.
19. See, e.g., id.; Julie E. Cohen, The Place of the User in Copyright Law, 74 FORDHAM L. REV. 347
(2005); Jessica Litman, Creative Reading, 70 L. & CONTEMP. PROBS., Spring 2007, at 175; Jessica
Litman, Readers’ Copyright, 58 J. COPYRIGHT SOC’Y U.S.A. 325 (2011); Rebecca Tushnet, Copy This
Essay: How Fair Use Doctrine Harms Free Speech and How Copying Serves It, 114 YALE L.J. 535 (2004).
Copyright’s friends, enemies, and frenemies alike tell a story about
expressive reading.21
III. ROBOTIC COPYRIGHT
Digital technologies challenge this story in two respects. Qualitatively,
they make it possible to use works in new ways; quantitatively, they make it
possible to use works on a much greater scale. I would like to trace these two
trends, and especially their intersection. When you combine nonexpressive
uses and bulk copying,22 you obtain a form of reading that can only be carried
out by robots. The idea of transformative fair use has itself been transformed
to deal with such reading.
A. NON-EXPRESSIVE READING
Our point of departure is Sega v. Accolade.23 Accolade was a videogame
publisher; it wanted to sell versions of its games that would run on a Sega
Genesis console.24 Rather than pay a licensing fee to Sega, Accolade took
three of Sega’s games and reverse engineered them to understand the
technical details of how they communicated with the Genesis.25 This process
necessarily involved copying and analyzing large sections of the Sega games’
software, but at the end of the process, Accolade’s actual games included only
trivially tiny excerpts from Sega’s.26
Accolade’s practice poses two challenges for a strict transformative fair
use analysis. Accolade’s games did not comment on or modify the expression
in Sega’s games in any meaningful sense, while its reverse engineering process
involved extensive literal copying. Accolade thus made two uses, neither of
which was a clear fit for transformative fair use. The games were too far
removed from Sega’s; the reverse engineering copies were too close.
20. See Yochai Benkler, Free as the Air to Common Use: First Amendment Constraints on Enclosure
of the Public Domain, 74 N.Y.U. L. REV. 354, 354–57 (1999).
21. Professor Sag offers a useful distinction between “expressive” and “nonexpressive” uses.
See Matthew Sag, Copyright and Copy-Reliant Technology, 103 NW. U. L. REV. 1607, 1624–28 (2009);
see also Maurizio Borghi & Stavroula Karapapa, Non-Display Uses of Copyright Works: Google Books and
Beyond, 1 QUEEN MARY J. INTELL. PROP. 21, 23, 43–44 (2011) (defining category of “non-display
uses” and distinguishing “uses on works” from “uses of works”).
22. See Matthew Sag, Orphan Works as Grist for the Data Mill, 27 BERKELEY TECH. L.J. 1503,
1548–49 (2012) (describing how bulk copying presents copyright issues).
23. Sega Enters. v. Accolade, Inc., 977 F.2d 1510 (9th Cir. 1992).
24. Id. at 1514–15.
25. Id.
26. Id. at 1515–16.
The court’s response was clear and sensible. Accolade’s games had no
need of fair use in the first place,27 while the reverse engineering was a form
of “intermediate copying” protected by fair use.28 The copying was “the only
way to gain access to the ideas and functional elements embodied in a
copyrighted computer program.”29 Accolade was like a book critic who starts
by photocopying pages to spread on her floor as she annotates their
inconsistencies and hypocrisies. Intermediate copying for a wholly
noninfringing purpose is as permissible as intermediate copying for a
transformatively fair one.
It is easy to see how Accolade fits cleanly into the conception of a creator
rather than a slavish copyist.30 The court’s reasoning, however, also says
something about Accolade as a reader. Accolade’s employees studied the Sega
games closely, but not in the way a consumer would, by playing them for
entertainment. Thus, Accolade was not using Sega’s games for their protected
expressive content, but simply to extract some unprotected, functional, non-
expressive information contained within them. The human audience at the
end of the line—Accolade’s customers—never received access to Sega’s
expression.
This is a lot to say about video games four generations out of date. But
the conceptual twist in Sega v. Accolade is crucial, because it stands for the
principle that non-expressive reading does not count as infringement.31 That
principle is much broader than software; it applies whenever there is
something to be learned about a copyrighted work other than its expressive
authorship.32 And that, as we will see, is all the time.
27. Id. at 1523–24 (assuming in passing that Accolade’s games were “not substantially
similar” to Sega’s—and thus by implication were noninfringing).
28. Id. at 1521–28.
29. Id. at 1527.
30. See id. at 1523 (describing Accolade’s entry as motivated by a desire to become “a
legitimate competitor in the field of Genesis-compatible video games”).
31. See Sag, supra note 21, at 1639 (arguing for a “general principle of nonexpressive use”
under which “acts of copying which do not communicate the author’s original expression to the
public should not be held to constitute copyright infringement”).
32. It is also, in some respects, a narrower principle than intermediate copying. Consider
Fox Broadcasting Co. v. Dish Network L.L.C., where the defendant sold digital video recorders
capable of automatically skipping commercial breaks and also made “quality assurance” copies of
television programs, used only internally at its own facilities, to ensure that the commercial-
skipping feature worked properly in consumers’ homes. Fox Broad. Co. v. Dish Network L.L.C.,
905 F. Supp. 2d 1088, 1094–96 (C.D. Cal. 2012), aff’d, 747 F.3d 1060 (9th Cir. 2013). The
commercial skipping feature was fair use, so the logic of intermediate copying would have said
that the quality assurance copies were too. Id. at 1106. But the court held that they were not:
They were commercial and non-transformative, and they threatened the market for the television
programs. Id. at 1104–06. Under the logic of non-expressive use, this result is easier to justify:
The defendant’s employees actually viewed the quality assurance copies, and they were used as
part of a system helping consumers make expressive uses as well.
The paradigm cases of transformative fair use involve partial or nonliteral
copying: Portions of the old work are melted down and mixed with new
elements to make authorship alloys.33 But another line of cases from the
familiar world of humans writing for humans shows that even verbatim uses
can be transformative—in Leval’s terminology, the transformation consists of
a “different purpose” rather than a “different manner.”34 The work is given to
readers in essentially the same form, but for a very different reason than the
one for which the work was created. It may be necessary to reproduce a work
to prove that it exists, as in Núñez v. Caribbean International News Corp.35 There,
in reporting on a scandal involving nearly nude photographs of a beauty-
pageant winner, a newspaper ran several of the photographs alongside its
news articles.36 Held, fair use because “the pictures were shown not just to
titillate, but also to inform.”37 Or, as in Bill Graham Archives v. Dorling Kindersley
Ltd., the defendant may recontextualize a work by surrounding it with her
own expression.38 There, a publisher used reduced-size images of seven
Grateful Dead concert posters as part of a 480-page coffee-table book in the
form of a timeline.39 Held, the use of the images “as historical artifacts
graphically representing the fact of significant Grateful Dead concert events . . . fulfill[ed] [the defendant’s] transformative purpose of enhancing the biographical information” in the book.40
33. See, e.g., Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569, 583 (1994) (holding that 2
Live Crew’s filthy rap version of the Roy Orbison song “Oh, Pretty Woman” was a parody and
hence potentially transformative fair use).
34. See Leval, supra note 16, at 1111; R. Anthony Reese, Transformativeness and the Derivative
Work Right, 31 COLUM. J.L. & ARTS 467, 485 (2008) (“Though transformativeness for fair use
analysis could involve both the purpose for which the defendant is using the copyrighted work
and the alterations that the defendant has made to that work’s content, the circuit court cases
suggest that it is the former, rather than the latter, that really matters.”). It is not obvious that all
of these cases should be categorized as “transformative uses” under the first factor rather than
harmless noncompeting uses under the fourth factor, but following Leval they have been. See
Infinity Broad. Corp. v. Kirkwood, 150 F.3d 104, 108 (2d Cir. 1998) (protesting, against the tide,
that “difference in purpose is not quite the same thing as transformation” under Campbell); Neil
Weinstock Netanel, Making Sense of Fair Use, 15 LEWIS & CLARK L. REV. 715, 734–46 (2011)
(tracing increasing dominance of first-factor “transformative use” paradigm over fourth-factor
“market-centered” paradigm).
35. See Núñez v. Caribbean Int’l News Corp., 235 F.3d 18 (1st Cir. 2000). But see Monge v.
Maya Magazines, Inc., 688 F.3d 1164, 1176 (9th Cir. 2012) (finding that a gossip magazine’s
publication of photographs of a celebrity couple’s secret wedding “did not transform the photos
into a new work . . . or incorporate the photos as part of a broader work”). Monge distinguished
Núñez on the basis that the controversy there concerned “the salacious photos themselves.” Monge,
688 F.3d at 1175. The newsworthiness fair use cases tend to be factually intensive. Compare, e.g.,
L.A. News Serv. v. Reuters Television Int’l, 149 F.3d 987 (9th Cir. 1998) (holding that a
rebroadcast of a video clip of beating of Reginald Denny during the 1992 Los Angeles riots was
not fair use), and L.A. News Serv. v. KCAL-TV Channel 9, 108 F.3d 1119 (9th Cir. 1997) (same),
with L.A. News Serv. v. CBS Broad., Inc., 313 F.3d 1093 (9th Cir. 2002) (holding that a shorter
use of the same clip was fair use).
36. Núñez, 235 F.3d at 21.
37. Id. at 22.
38. See generally Bill Graham Archives v. Dorling Kindersley Ltd., 448 F.3d 605 (2d Cir. 2006).
39. Id. at 607.
Again, these cases easily fit the
model of the transformative fair user as an author engaged in the process of
creating “new information, new aesthetics, new insights and
understandings.”41
Combine these different-purpose cases with Sega’s idea of intermediate
copying for nonexpressive uses and you end up with a powerful new principle.
Verbatim copying of a complete work will be protected as fair use if the copy
is used solely as input to a process that does not itself use the work
expressively. Or, to put it a little more provocatively, nonexpressive uses do
not count as reading.42 They are not part of the market that copyright cares
about, because the author’s market consists only of readers.43
A string of recent cases, for example, deals with the reproduction of
journal articles that are prior art for patent applications.44 The law firms
preparing those applications have generally succeeded in arguing that their
reproductions are fair use. Courts easily find that complying with the legal
obligation to attach relevant prior art is a different purpose.45 But in denying
that the law firms and the Patent Office are part of the audience the
publishers intended to reach, the courts use language that starts to deny that
they are audiences at all. One court explained that “[the law firm’s] use of the
Articles is narrower than, and indifferent to, their manner of expression.”46
Another said that when an applicant submits prior art to the Patent Office, it
“is transformed from an item of expressive content to evidence of the facts within it; the expressive content becomes merely incidental.”47
40. Id. at 610.
41. Leval, supra note 16, at 1111.
42. See DRASSINOWER, supra note 14, at 87 (“[B]ecause a work is a communicative act, . . .
[u]ses of the work as a mere pattern of ink, so to speak, in the absence of recommunication, are
not uses of the work as a work.”).
43. Cf. Authors Guild, Inc. v. HathiTrust, 755 F.3d 87, 97 (2d Cir. 2014) (“There is no
evidence that the Authors write with the purpose of enabling text searches of their books.”).
44. E.g., Am. Inst. of Physics v. Winstead PC, No. 3:12-CV-1230-M, 2013 WL 6242843 (N.D.
Tex. Dec. 3, 2013); Am. Inst. of Physics v. Schwegman, Lundberg & Woessner, P.A., No. 12-cv-
528 (RHK/JJK), 2013 WL 4666330 (D. Minn. Aug. 30, 2013). Two similar cases failed to reach
the fair use issue. See John Wiley & Sons, Ltd. v. McDonnell Boehnen Hulbert & Berghoff, LLP,
No. 12 C 1446 (N.D. Ill. Mar. 25, 2014) (voluntarily dismissed); John Wiley & Sons, Inc. v. Hovey
Williams LLP, No. 5:2012-cv-4041 (D. Kan. June 22, 2012) (voluntarily dismissed). See generally
D.R. Jones, Law Firm Copying and Fair Use: An Examination of Different Purpose and Fair Use Markets,
56 S. TEX. L. REV. 313 (2014) (discussing role of transformativeness in law firm copying cases).
45. Winstead, 2013 WL 6242843, at *5–6; Schwegman, Lundberg & Woessner, 2013 WL
4666330, at *9–13.
46. Schwegman, Lundberg & Woessner, 2013 WL 4666330, at *12; accord Bond v. Blum, 317
F.3d 385, 397 (4th Cir. 2003) (fair use to copy a manuscript for use in a child-custody
proceeding); Denison v. Larkin, 64 F. Supp. 3d 1127, 1135 (N.D. Ill. 2014) (fair use to copy blog
post for use in an attorney discipline proceeding); Healthcare Advocates, Inc. v. Harding, Earley,
Follmer & Frailey, 497 F. Supp. 2d 627, 642 (E.D. Pa. 2007) (fair use to copy archived webpage
for use in litigation).
These cases
speak in terms of transformation of the work, but the work itself changes only
in the eye of the beholder: a different context or a different mode of reading.
To say that a work is no longer “an item of expressive content” is to say that it
is no longer being read expressively.48
B. BULK READING
Now it is time to pick up the other strand of our story: the shift from retail
reading to wholesale. Take the search-engine cases, of which Perfect 10, Inc. v.
Amazon.com is the leading example.49 Google’s image search engine copies
millions of images from across the internet and shows small “thumbnails” of
those images to users in response to search queries.50 This, the court held, was
a transformative fair use, even though the thumbnails were exact replicas of
the full-size images: “Although an image may have been created originally to
serve an entertainment, aesthetic, or informative function, a search engine
transforms the image into a pointer directing a user to a source of
information.”51 Note how search users are understood as readers. Google
gives them access to the plaintiff’s expressive works, but, in the act of using
Google search, they are near-automatons. They follow a “pointer” supplied by
an “electronic reference tool”;52 any aesthetic appreciation is suspended until
they arrive at their destination and admire the full-sized image in its original
context. The court is able to elide the human audience by downplaying its
humanity.53
A similar move is visible in A.V. ex rel. Vanderhye v. iParadigms.54 There,
high school students were required to submit their essays to a plagiarism-detection service, Turnitin, which checked for suspicious similarity to essays already in the database, and then retained each essay to be checked against future essays.
47. Winstead, 2013 WL 6242843, at *5; accord Jartech, Inc. v. Clancy, 666 F.2d 403, 407 (9th
Cir. 1982) (finding it was fair use to copy films “not for subsequent use and enjoyment, but for
evidence to be used in [litigation]”); White v. W. Publ’g Corp., 29 F. Supp. 3d 396, 399–400
(S.D.N.Y. 2014) (holding it was transformative fair use for West and Lexis to make comprehensive
databases of filed legal briefs); Stern v. Does, 978 F. Supp. 2d 1031, 1045 (C.D. Cal. 2011)
(finding it was transformative fair use to forward an email because “[b]y forwarding the post in
e-mails, they conveyed the fact of the post rather than its underlying message”).
48. Cf. DRASSINOWER, supra note 14, at 102 (“The defendant escapes liability not because
her unauthorized use is fair but because it is not a use.”).
49. See generally Perfect 10, Inc. v. Amazon.com, Inc., 508 F.3d 1146 (9th Cir. 2007).
50. Id. at 1165; accord Kelly v. Arriba Soft Corp., 336 F.3d 811, 819 (9th Cir. 2003); Field v.
Google Inc., 412 F. Supp. 2d 1106, 1118 (D. Nev. 2006).
51. Perfect 10, 508 F.3d at 1165.
52. Id.; see also Kelly, 336 F.3d at 819 (describing a search engine’s purpose as “improving
access to information on the internet [rather than] artistic expression”).
53. Field v. Google Inc. is an interesting contrast. It dealt with Google’s cache of archived
webpages, and its fair use analysis emphasizes the interactive, mentally intense research tasks that
users can perform using the archived copies, including observing changes in a webpage over time
and “understand[ing] why a page was responsive to their original query.” Field, 412 F. Supp. 2d
at 1118–19.
54. A.V. ex rel. Vanderhye v. iParadigms, LLC, 562 F.3d 630 (4th Cir. 2009).
This, the court held, was a transformative fair use because
Turnitin’s use was “completely unrelated to expressive content.”55 The court
emphasized Turnitin’s use of robotic readers in a sentence that does not stand
up to close reading: “The archived student works are stored as digital code,
and employees of iParadigms do not read or review the archived works.”56 The
first half of this statement is trivially true: any work stored on a computer is
stored as “digital code.” And the second half should be irrelevant: if checking
for plagiarism really is a transformative use, it shouldn’t matter whether the
comparisons are carried out by Turnitin’s computers or its employees.
One of the Google Books cases, Authors Guild v. Google, takes the idea that
bulk reading is not reading even further.57 Google’s database of millions of
scanned books supports a comprehensive search engine. In holding that the
database is a “highly transformative” use, the court adopted Perfect 10’s
“pointer” theory: “Google Books . . . uses snippets of text to act as pointers
directing users to a broad selection of books.”58 The database also enables new
uses in the “digital humanities” such as analyzing trends in word usage over
time.59 But these uses do not count as infringements. To quote the court,
“Google Books does not supersede or supplant books because it is not a tool
to be used to read books.”60 In affirming this holding on appeal, Judge Leval
himself wrote, “What matters in such cases is not so much ‘the amount and
substantiality of the portion used’ in making a copy, but rather the amount
and substantiality of what is thereby made accessible to a public for which it may
serve as a competing substitute.”61
Another strand of the Google Books litigation—against Google’s partner
libraries—reaches the same idea indirectly.62 The authors had argued that the
libraries’ database of digitized books created a security risk that hackers would
break in and copy the books. The court disagreed, describing the risk as
“hypothetical” and “speculative.”63 Note the framing. It was undisputed that
there were at least four different physical instantiations of millions of books.
But those copies did not count because there was no evidence in the record that any humans were likely to read them.
55. Id. at 640.
56. Id. at 634.
57. See generally Authors Guild, Inc. v. Google, Inc. (Authors Guild I), 954 F. Supp. 2d 282
(S.D.N.Y. 2013).
58. Id. at 291.
59. Id. at 287–88. See generally Brief of Digital Humanities and Law Scholars as Amici Curiae
in Support of Defendants–Appellees and Affirmance, Authors Guild, Inc. v. HathiTrust, 755 F.3d
87 (2d Cir. 2014) (No. 12-4547-cv) (discussing uses in detail).
60. Authors Guild I, 954 F. Supp. 2d at 291.
61. Authors Guild v. Google, Inc. (Authors Guild II), 804 F.3d 202, 222 (2d Cir. 2015).
62. See HathiTrust, 755 F.3d at 87.
63. Id. at 100–01.
To similar effect is the District
Court’s opinion in Cambridge University Press v. Becker, which held that
uploading book excerpts to a university’s electronic reserves site was a
noninfringing de minimis use where no students ever downloaded the
excerpts.64 If a copy falls in the forest and no humans are there to hear it, the
sound is non-infringing. Bulk nonexpressive uses are fair uses.
When we talk about nonexpressive uses, we should perhaps refer to them
by another name: non-human uses. When we as people take part in these uses,
we suspend our human capacities. The now-rejected Google Books settlement
inadvertently captured this idea when it defined (permissible) “Non-
Consumptive Research” as “research in which computational analysis is
performed on one or more Books, but not research in which a researcher
reads or displays substantial portions of a Book to understand the intellectual
content presented within the Book.”65 You can only read this book if you don’t
understand anything in it.
Perhaps you have seen the tension. We have created a two-tracked
copyright law: one for human readers and one for robots. Uses involving
human readers receive close and exacting scrutiny to make sure that no
market belonging to the copyright owner is being preempted. Uses involving
robotic readers are fast-tracked for fair use.
A pair of recent cases illustrates the difficulties this divergence creates.
Both involve news-monitoring services. Meltwater scrapes news articles from
162,000 websites, indexes them, and delivers alerts to its customers when new
stories appear on particular topics.66 TVEyes does the same for television and
radio news from 1400 stations.67 In both cases, news-media plaintiffs argued
that the services were infringing republishers of copyrighted news stories;
both services defended themselves by arguing that they were search engines.
Both cases turned on fair use; Meltwater’s use was nontransformative and
lost,68 while TVEyes’ use was transformative and won.69 The difference
between Meltwater and TVEyes lies not in their facts—which, while in theory distinguishable, are in truth uncomfortably close—but in the different way the two opinions conceptualize what these aggregation services do.
64. Cambridge Univ. Press v. Becker, 863 F. Supp. 2d 1190, 1245–53, 1265, 1298, 1314,
1337 (N.D. Ga. 2012), rev’d on other grounds sub nom. Cambridge Univ. Press v. Patton, 769 F.3d
1232 (11th Cir. 2014). An instructive contrast is Ringgold v. Black Entertainment Television, Inc.,
which held that an out-of-focus poster visible in the background of a sitcom episode for a total of
26.75 seconds was not a de minimis use. The poster was visible only fleetingly, but it was still
visible to the human audience—and that makes all the difference. See Ringgold v. Black
Entertainment Television, Inc., 126 F.3d 70, 76–77 (2d Cir. 1997).
65. Amended Settlement Agreement § 1.93, Authors Guild v. Google Inc., 770 F. Supp. 2d
666 (S.D.N.Y. 2011) (No. 05 Civ. 8136(DC)).
66. Associated Press v. Meltwater U.S. Holdings, Inc., 931 F. Supp. 2d 537, 543–44 (S.D.N.Y.
2013).
67. Fox News Network, LLC v. TVEyes, Inc., 43 F. Supp. 3d 379, 383 (S.D.N.Y. 2014).
68. Meltwater, 931 F. Supp. 2d at 552.
69. TVEyes, 43 F. Supp. 3d at 400.
In Meltwater, Judge
Cote sees Meltwater as a service for human readers; it helps them organize
and optimize their consumption of news.70 In TVEyes, Judge Hellerstein sees
TVEyes as a digital service whose operations are at heart non-human.71
Watching a thousand channels full-time forever is a task that is so far beyond
the capacity of any person that it is simply “different in kind.”72 Take that,
John Henry.
C. BEYOND FAIR USE
I have dwelt at length on transformative fair use, because it seems to me
that here the pattern of denigrating robotic reading is at its clearest and most
dramatic. But something similar is at work in other parts of copyright
doctrine. Activities that copyright forbids to humans escape its notice when
they are carried out by robots.
Consider the history of how copyright has treated works and copies
created to be read by robots. That history goes back surprisingly far, because
the 19th century had copyright-infringing robots too—player pianos. In a
string of cases culminating in 1908’s White-Smith Music Publishing Co. v. Apollo
Co., the courts held that the perforated paper rolls used by player pianos were
not infringing “copies within the meaning of the copyright act.”73 Their
reasoning was explicitly anthropocentric: the piano rolls were “part of a
machine” rather than being “addressed to the eye.”74 Unlike sheet music that
humans can make sense of, the rolls could be read only by robots: they
“[c]onvey[ed] no meaning . . . to the eye of even an expert musician.”75 The
Supreme Court rhetorically asked whether Congress could have meant to
subject mere “instruments” like music box cylinders and phonograph records
to copyright.76
The next year, Congress did just that, because of course player pianos
produce sounds for the human ear even if their rolls are not addressed to the human eye.
70. See Meltwater, 931 F. Supp. 2d at 552 (quoting Meltwater marketing materials as saying
“your news is delivered in easy to read morning and/or afternoon reports”).
71. See TVEyes, 43 F. Supp. 3d at 393 (“Thus, without TVEyes, this information cannot
otherwise be gathered and searched. That, in and of itself, makes TVEyes’ purpose
transformative . . . .”); accord White v. W. Publ’g Corp., 29 F. Supp. 3d 396, 399–400 (S.D.N.Y.
2014) (finding that copying legal briefs to create an “interactive legal research tool” was fair use).
72. TVEyes, 43 F. Supp. 3d at 393 (“Meltwater aggregated content already available to the
individual user who was willing to perform enough searches and cull enough results on the
Internet. . . . TVEyes, however, creates a database of otherwise unavailable content. TVEyes is the
only service that creates a database of everything that television channels broadcast, twenty-four
hours a day, seven days a week.”).
73. White-Smith Music Publ’g Co. v. Apollo Co., 209 U.S. 1, 18 (1908).
74. Id. at 12 (quoting Kennedy v. McTammany, 33 F. 584, 584 (C.C.D. Mass. 1888)).
75. Id. at 13 (quoting Stern v. Rosey, 17 App. D.C. 562, 565 (D.C. Cir. 1901)).
76. Id. at 17–18. Note the dual meaning of “instrument”—it is both something that
produces music and merely a means for accomplishing a task. Id. at 17.
The Copyright Act of 1909 extended the copyright owner’s
exclusive rights to “any form of record in which the thought of an author may
be recorded and from which it may be read or reproduced.”77 But the idea
persisted that copies intended to be read by machines were subordinate to
copies intended to be read by humans. The 1909 Act subjected these
“mechanical reproductions” to a statutory compulsory license,78 one that
endures today.79 Even when they directly facilitate human reading, copies for
robots have second-class status in copyright’s ontology.
Something similar happened with computer software. Programs are
written by humans to be read by robots: romantic authorship without
romantic readership. This creates a conceptual barrier to software copyright
over and above the usual debates about authorship and economics.80 Thus, in
dissent from the Commission on New Technological Uses (“CONTU”) report
recommending copyright protection for computer software, commissioner
and novelist John Hersey argued that programs “eventually become an
essential part of the machinery” of a computer,81 are not “intelligible to a
human being,”82 and are “not designed to be read by anyone.”83 He concluded
that software copyright meant “affording copyright protection to a labor-
saving mechanical device.”84 His colleague, the copyright scholar Melville
Nimmer, suggested that “it may prove desirable to limit copyright protection
for software to those computer programs which produce works which
themselves qualify for copyright protection”—that is, to programs which emit
something human audiences would recognize as authorial expression.85
Another line of defense was that only source code—the human-written and human-intelligible texts written by programmers—should be eligible for copyright protection, but not the object code actually executed by computers.86
77. Copyright Act of 1909, Pub. L. No. 60–349, § 1(e), 35 Stat. 1075, 1075–76. The 1976
Copyright Act embraced this principle: a work is “fixed” in a “copy” when it “can be perceived,
reproduced, or otherwise communicated, either directly or with the aid of a machine or device.”
17 U.S.C. § 101 (2012). The use of “perceived” shows that the link to human perception remains.
A work is fixed only when humans could ultimately perceive it, even if indirectly.
78. Copyright Act of 1909 § 1(e), 35 Stat. at 1075–76.
79. 17 U.S.C. § 115 (2012). It has been joined by other statutory licenses directed at robots.
See id. § 112 (“ephemeral” copies made by broadcasters); id. § 114(j) (“noninteractive” digital
audio transmissions); id. § 116 (“phonorecord players” such as jukeboxes).
80. The major doctrinal problem for authorship is that much of what goes into a computer
program is heavily influenced or even dictated by functional constraints. On the authorship and
policy questions, compare, for example, Arthur R. Miller, Copyright Protection for Computer Programs,
Databases, and Computer-Generated Works: Is Anything New Since CONTU?, 106 HARV. L. REV. 977,
1059 (1993), with Pamela Samuelson, CONTU Revisited: The Case Against Copyright Protection for
Computer Programs in Machine-Readable Form, 1984 DUKE L.J. 663, 753.
81. NAT’L COMM’N ON NEW TECH. USES OF COPYRIGHTED WORKS, FINAL REPORT OF THE
NATIONAL COMMISSION ON NEW TECHNOLOGICAL USES OF COPYRIGHTED WORKS 28 (1979)
(Hersey, Comm’r, dissenting) (emphasis omitted).
82. Id. at 29.
83. Id. at 30.
84. Id.
85. Id. at 27 (Nimmer, Comm’r, concurring).
These categorical arguments against software copyright have not fared
well;87 it is clear today that computer programs are proper copyrightable
subject matter88 and that running a program creates a potentially infringing
copy.89 The debates in the courts instead mostly turn on case-by-case questions
of which specific aspects of a particular program are copyrightable.90 This
might seem to count against the argument that robotic reading is
noninfringing. But consider this: Congress carved out (albeit clumsily) an
exception for copies of computer programs “created as an essential step in
the utilization of the computer program in conjunction with a machine and
. . . used in no other manner.”91 The idea that robot-only copying is different
lives on. More fundamentally, it is these broad rules—programs are
copyrightable, and running a program infringes the reproduction right—that
have made it necessary to invoke fair use as a defense in technological cases.
The story told above about transformative fair use is the story of how the
courts used fair use to shield robotic reading from liability that would
otherwise attach. Exempting robots entirely would have led to the White-Smith
problem: uses indisputably intended for human eyes would escape scrutiny.
The combination of broad infringement and broad fair use draws the line
instead between robot-only and robot-plus-human uses.
Copyright embraces the rule that robotic reading does not count in many
other contexts, as well. Here are a few.
Volitional Conduct: “[S]omething more must be shown than mere
ownership of a machine used by others to make illegal copies” to hold a defendant directly liable as an infringer.92
86. See, e.g., Williams Elecs., Inc. v. Artic Int’l, Inc., 685 F.2d 870, 876–77 (3d Cir. 1982)
(rejecting defendant’s theory “that a ‘copy’ must be intelligible to human beings”).
87. The Commission’s report rejected Hersey and Nimmer’s misgivings, reasoning that a
computer program in computer memory “still exists in a form from which a human-readable
version may be produced” regardless of what the program does. NAT’L COMM’N ON NEW TECH.
USES OF COPYRIGHTED WORKS, supra note 81, at 22. Even for the Commission at its most
expansive, man was still the measure of all things.
88. See Apple Comput. Inc. v. Franklin Comput. Corp., 714 F.2d 1240, 1248 (3d Cir. 1983)
(rejecting the argument “that copyrightability depends on a communicative function to
individuals”); NAT’L COMM’N ON NEW TECH. USES OF COPYRIGHTED WORKS, supra note 81, at 22.
89. MAI Sys. Corp. v. Peak Comput., Inc., 991 F.2d 511, 518 (9th Cir. 1993). But see Cartoon
Network LP v. CSC Holdings, Inc., 536 F.3d 121, 129–30 (2d Cir. 2008) (holding that data stored
in computer memory for 1.2 seconds is not sufficiently “embodied . . . for a period of more than
transitory duration” to infringe).
90. See, e.g., Comput. Assocs. Int’l, Inc. v. Altai, Inc., 982 F.2d 693, 706–11 (2d Cir. 1992)
(giving detailed analytical framework for assessing infringement of software); Oracle Am., Inc. v.
Google, Inc., 872 F. Supp. 2d 974, 977 (N.D. Cal. 2012) (rejecting copyright in software
interfaces), rev’d, 750 F.3d 1339, 1348 (Fed. Cir. 2014) (allowing copyright in those same
software interfaces).
91. 17 U.S.C. § 117(a)(1) (2012).
The defendant must have “some
aspect of volition and meaningful causation—as distinct from passive
ownership and management of an electronic Internet facility.”93 Unlike the
transformative fair use defense, which fully excuses an otherwise-infringing
act, the volitional conduct doctrine is a rule of attribution: It decides which of
several possible defendants should be treated as a direct infringer. But it still
offers strong advantages to defendants who can invoke it, because copyright’s
various secondary liability tests are far more protective of defendants than its
“strict liability” direct infringement test.94 The result is another strong
pressure to automate. Employees can have volition; computers cannot. It is
not a coincidence that the volitional conduct defense arises only in cases
involving computers.
Online Intermediaries: The safe harbor for online content hosts in
§ 512(c) of the Copyright Act draws on the same ideas. The safe harbor is
available only to “a provider of online services or network access”95 and only
when the provider “does not have actual knowledge”96 of infringement and
“is not aware of facts or circumstances from which infringing activity is
apparent.”97 Again, these tests encourage automation. The threshold
condition makes the safe harbor inapplicable if an enterprise doesn’t use
computers. Once the enterprise uses computers, the knowledge tests discourage it from looking too closely at what those computers are doing, lest it acquire the kind of knowledge that could lead it to lose the safe harbor’s protections.98
92. CoStar Grp., Inc. v. LoopNet, Inc. 373 F.3d 544, 550 (4th Cir. 2004); accord Fox Broad.
Co. v. Dish Network LLC, 723 F.3d 1067, 1073 (9th Cir. 2013); Cartoon Network LP v. CSC
Holdings, Inc., 536 F.3d 121, 131 (2d Cir. 2008); Parker v. Google Inc., 242 F. App’x 833, 837
(3d Cir. 2007); Disney Enters., Inc. v. Hotfile Corp., 798 F. Supp. 2d 1303, 1308 (S.D. Fla. 2011);
Marobie-FL, Inc. v. Nat’l Ass’n of Fire Equip. Dists., 983 F. Supp. 1167, 1178 (N.D. Ill. 1997);
Playboy Enters., Inc. v. Russ Hardenburgh, Inc., 982 F. Supp. 503, 512 (N.D. Ohio 1997).
93. CoStar Grp., 373 F.3d at 550.
94. Compare Metro-Goldwyn-Mayer Studios Inc. v. Grokster, Ltd., 545 U.S. 913, 919 (2005)
(finding inducement copyright liability requires both the intent to promote infringement and
“clear expression or other affirmative steps taken to foster infringement”), Sony Corp. of Am. v.
Universal City Studios, Inc., 464 U.S. 417, 442 (1984) (holding that the sale of copying
equipment cannot give rise to contributory liability where the device is “capable of substantial
noninfringing uses”), A&M Records, Inc. v. Napster, Inc., 239 F.3d 1004, 1019 (9th Cir. 2001)
(stating that contributory copyright liability requires both knowledge of the infringing activity
and a material contribution to it), and id. at 1022 (stating that vicarious copyright liability
requires both “the right and ability to supervise the infringing activity and . . . a direct financial
interest in [it]” (quoting Fonovisa, Inc. v. Cherry Auction, Inc., 76 F.3d 259, 262 (9th Cir.
1996))), with Jacobs v. Memphis Convention & Visitors Bureau, 710 F. Supp. 2d 663, 677 n.21
(W.D. Tenn. 2010) (stating, in a direct infringement case, “[c]opyright infringement, however,
is at its core a strict liability cause of action, and copyright law imposes liability even in the absence
of an intent to infringe the rights of the copyright holder”); and Fred Fisher, Inc. v. Dillingham,
298 F. 145, 148 (S.D.N.Y. 1924) (holding that defendant could be liable for copying the
plaintiff’s song without realizing it because “[o]nce it appears that another has in fact used the
copyright as the source of his production, he has invaded the author’s rights . . . . It is no excuse
that in so doing his memory has played him a trick”).
95. 17 U.S.C. § 512(k)(1)(B).
96. Id. § 512(c)(1)(A)(i).
97. Id. § 512(c)(1)(A)(ii).
YouTube’s Content ID system for detecting infringing uploads
is the logical extrapolation of this trend: a wholly automated system that takes
humans completely out of the loop.99 The § 512(a) safe harbor for network
operators is even more dramatic: it applies only to “an automatic technical
process” that operates only as “an automatic response.”100 No humans need
apply.
Takedown Notices: The same is true on the other side of the takedown
wars. Copyright owners can in theory face liability if they send takedown
notices falsely alleging that “material or activity is infringing.”101 But since the
test for liability is whether one “knowingly materially misrepresents,” a
copyright owner who avoids knowing that a dodgy notice is false can send it
without fear.102 To be sure, a copyright owner may not simply fire off
takedown notices without considering defenses such as fair use.103 But since
the standard for forming the necessary “good faith belief” of infringement is
subjective rather than objective, any review process at all will suffice.104 The
incentives are obvious. Use robots to identify potentially infringing material,
casting as wide a net as possible, then pass the results by humans for a review
so cursory there is no risk they will notice they are sending a takedown notice
for papers by Professor Peter Usher rather than songs by Usher the
musician.105 A heavily automated process is far less risky than one in which
humans provide meaningful review; indeed, it is best to reduce the humans’
cognitive role to the point that their intervention is indistinguishable from a
cricket jumping up and down on a mouse button.106 In a reductio ad absurdum of this process, laser printers have received takedown notices—robots accusing each other of copyright infringement.107
98. For a more sophisticated discussion of when it makes sense to impute knowledge to the
operator of a program, see SAMIR CHOPRA & LAURENCE F. WHITE, A LEGAL THEORY FOR
AUTONOMOUS ARTIFICIAL AGENTS 71–118 (2011).
99. See generally How Content ID Works, YOUTUBE, https://support.google.com/youtube/
answer/2797370?hl=en (last visited Nov. 13, 2015).
100. 17 U.S.C. § 512(a)(2)–(3).
101. Id. § 512(f)(1).
102. Id. § 512(f).
103. Lenz v. Universal Music Corp., 801 F.3d 1126, 1136–37 (9th Cir. 2015).
104. Id. at 1136.
105. See Declan McCullagh, RIAA Apologizes for Threatening Letter, CNET NEWS (May 13, 2003,
6:12 PM), http://www.cnet.com/news/riaa-apologizes-for-threatening-letter.
106. See, e.g., Disney Enters., Inc. v. Hotfile Corp., No. 11-20427-CIV-WILLIAMS, slip op. at
31, 97 (S.D. Fla. Sept. 20, 2013) (describing DMCA takedown issuance process which “relied on
computer automation to execute programs and did not involve human review of the file titles,
page names or other overt characteristics” but declining to rule on existence of a duty for human
review because of evidence that the movie studio “intentionally targeted files it knew it had no
right to remove”).
Compare this attitude toward robotic readership with copyright’s
treatment of robotic authorship; the scholarly consensus is that computers
can’t be authors, either. The (human) programmer might be an author; the
(human) user might be an author, but not the program that connects them.
Pamela Samuelson argued in 1986 that computers have not been and should
not be treated as authors, because they do not need incentives to create.108
Ralph Clifford similarly argued in 1997 that because computer programs
cannot be “authors” in a statutory sense, computer-created works are
uncopyrightable.109 And in 2012, Annemarie Bridy added that our copyright
system “cannot vest ownership of the copyright” in a computer that “has no
legal personhood.”110 Bridy recommends using the “legal fiction” of the work-
made-for-hire doctrine to avoid the conceptual issues: find a person and
attribute to them ownership of a work they did not actually write.111 Robot
readers can’t infringe, and we won’t let robots be authors, either.112
Copyright is not the only field of law to flirt with the idea that what
happens in silicon stays in silicon. Google has defended itself against privacy
lawsuits by claiming that when it targets advertisements to Gmail users, only
computers, not humans, “read” users’ emails.113 The NSA has likewise argued that it does not “acquire” private communications unless and until an employee reads them.114
107. See Michael Piatek et al., Challenges and Directions for Monitoring P2P File Sharing Networks—
or—Why My Printer Received a DMCA Takedown Notice, 3 PROC. USENIX WORKSHOP ON HOT TOPICS
SECURITY 1, 3 (2008), http://usenix.org/legacy/events/hotsec08/tech/full_papers/piatek/
piatek.pdf; Brad Stone, The Inexact Science Behind D.M.C.A. Takedown Notices, N.Y. TIMES: BITS BLOG
(June 5, 2008, 11:18 AM), http://www.bits.blogs.nytimes.com/2008/06/05/the-inexact-science-
behind-dmca-takedown-notices; cf. Kashmir Hill, After Twitter Bot Makes Death Threat, Its Owner Gets
Questioned by Police, FUSION (Feb. 11, 2015, 8:46 AM), http://www.fusion.net/story/47353/
twitter-bot-death-threat.
108. Pamela Samuelson, Allocating Ownership Rights in Computer-Generated Works, 47 U. PITT.
L. REV. 1185, 1199 (1986).
109. Ralph D. Clifford, Intellectual Property in the Era of the Creative Computer Program: Will the
True Creator Please Stand Up?, 71 TUL. L. REV. 1675, 1682–86 (1997).
110. Annemarie Bridy, Coding Creativity: Copyright and the Artificially Intelligent Author, 2012
STAN. TECH. L. REV. 5, ¶ 51; accord Miller, supra note 80.
111. Bridy, supra note 110, ¶¶ 51–52; cf. id. ¶ 67 (discussing analogous approaches under
U.K., New Zealand, and Irish law, according to which “copyright vests as a matter of law in a party
who is not the author-in-fact”).
112. See Evan H. Farr, Copyrightability of Computer-Created Works, 15 RUTGERS COMPUTER &
TECH. L.J. 63, 79 (1989) (“Giving authorship rights to a computer, however, is absurd . . . .”).
The demands copyright makes of human authors, on the other hand, are notoriously minimal.
See, e.g., Alfred Bell & Co. v. Catalda Fine Arts, Inc., 191 F.2d 99, 102–03 (2d Cir. 1951) (“All that
is needed to satisfy both the Constitution and the statute is that the ‘author’ contributed
something more than a ‘merely trivial’ variation, something recognizably ‘his own.’ Originality
in this context ‘means little more than a prohibition of actual copying.’ No matter how poor
artistically the ‘author’s’ addition, it is enough if it be his own.”).
113. See Bruce E. Boyden, Can a Computer Intercept Your Email?, 34 CARDOZO L. REV. 669, 673–74
(2012); Samir Chopra & Laurence White, Privacy and Artificial Agents, or, Is Google Reading My Email?,
2007 INT’L JOINT CONF. ON ARTIFICIAL INTELLIGENCE 1245.
The rise of high-speed trading algorithms raises
uncomfortable questions about whether a computer can have the requisite
mental state to “knowingly” engage in market manipulation115 or to enter into
an “agreement” to fix prices.116 And Swiss authorities didn’t bother trying to
sort out the philosophical questions posed by a drug-buying robot;117 they
simply seized the robot.118 But to my knowledge, copyright is the only field of
law to so thoroughly and whole-heartedly embrace the idea that robots simply
do not count.
IV. POSTHUMAN COPYRIGHT
Copyright ignores robots. This choice is entirely consistent with
copyright’s theory of the romantic reader. It is amply supported by fair use
doctrine. And it yields sensible results in the cases that have come before the
courts. But there is something unsettling about a rule of law that regulates
humans and gives robots free rein. Most immediately, it encourages people
and businesses to outsource their reading. To the extent that the rule depends
on the inhuman scale of robotic reading, it also encourages them to scale up
their copying. Rebroadcast one radio station for humans and you’re an
114. See Kevin Bankston & Amie Stepanovich, When Robot Eyes Are Watching You: The Law &
Policy of Automated Communications Surveillance 9 (July 2014) (unpublished manuscript),
http://robots.law.miami.edu/2014/wp-content/uploads/2014/07/Bankston_Stepanovich_We_
Robot.pdf. Compare Orin S. Kerr, Searches and Seizures in a Digital World, 119 HARV. L. REV. 531,
548 (2005) (“[A] search of data stored on a hard drive occurs when that data, or information
about that data, is exposed to human observation.”), and Matthew Tokson, Automation and the
Fourth Amendment, 96 IOWA L. REV. 581, 587 (2011) (“Internet users do not suffer a cognizable
privacy harm in the absence of some eventual disclosure to a human observer.”), with Bankston
& Stepanovich, supra, at 3 (“[T]he mere fact that the act of reading the emails is automated does
not decrease the invasiveness of that act, but instead intensifies the privacy invasion by
exponentially increasing the accuracy, speed, and scope of surveillance.”), and Jonathan Zittrain,
Searches and Seizures in a Networked World, 119 HARV. L. REV. F. 83, 90 (2006) (“The shift from local
to network storage also compels skepticism of the idea that mirroring of private data by the
government [i.e., without exposure to a human] is not itself a search.”).
115. See, e.g., Gregory Scopino, Do Automated Trading Systems Dream of Manipulating the Price of
Futures Contracts? Policing Markets for Improper Trading Practices by Algorithmic Robots, 67 FLA. L. REV.
221, 233–34 (2015).
116. See Salil K. Mehra, Antitrust and the Robo-Seller: Competition in the Time of Algorithms, 100
MINN. L. REV. (forthcoming 2015) (manuscript at 39–42) (Legal Studies Research Paper Series,
Research Paper No. 2015-15), http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2576341.
117. Ryan Calo, A Robot Really Committed a Crime: Now What?, FORBES (Dec. 23, 2014, 5:04 PM),
http://www.forbes.com/sites/ryancalo/2014/12/23/a-robot-really-committed-a-crime-now-what;
see also Mike Power, What Happens When a Software Bot Goes on a Darknet Shopping Spree?, GUARDIAN
(Dec. 5, 2014, 8:56 AM), http://www.theguardian.com/technology/2014/dec/05/software-bot-
darknet-shopping-spree-random-shopper.
118. See Daniel Rivero, Robots Are Starting to Break the Law and Nobody Knows What to Do About
It, FUSION (Dec. 29, 2014, 8:14 AM), http://fusion.net/story/35883/robots-are-starting-to-break-
the-law-and-nobody-knows-what-to-do-about-it.
infringer; copy a thousand TV stations for computers and you’re a fair use
hero.119
This pressure to use robots is indifferent to whether people use robots
for good or for ill. It is easy to see the value of digital humanities research. But
not all robotic reading is so benign, and the logic of nonexpressive use
encourages the circulation of copyrighted works in an underground robotic
economy. Take spambots, which profligately recycle everything from
Shakespeare to sports stories into a semantic soup designed to trick other
robots—spam filters—into showing their emails to a human user.120 If we take
the robotic-reader cases at face value, spam filters are noninfringing fair
users—and so are spambots.121 Perhaps copyright is the wrong tool for
stopping spam,122 but a rule of law giving spambots free rein is certainly an
odd consequence of robotic readership.
The paradox goes deeper. By valorizing robotic reading, copyright
doctrine denigrates human reading. A transformative fair use test that
categorically exempts robots means that a digital humanist can skim a million
books with abandon while a humanist who reads a few books closely must pay
full freight for hers. Romantic readership therefore discourages the personal
engagement with a work that it claims to value. Copyright’s expressive
message here—robots good, humans bad—is the exact opposite of the one it
means to convey.
Indeed, by embracing robotic reading, copyright may also change the
nature of human reading. Robotic reading is a form of automation, and as
such, it must confront familiar critiques of automation’s effects on humans.123
Ask a spell-checker to do your proofreading for you often enough and your
own ability to proofread will atrophy from disuse.124 Google Translate reads
superficially and in fragments; its translations aren’t great, but they’re good
enough to make professional translators worried about the future of their
119. Compare Infinity Broad. Corp. v. Kirkwood, 150 F.3d 104, 106 (2d Cir. 1998) (finding it
was infringement to “enable[] subscribers (for a fee) to listen over the telephone to
contemporaneous radio broadcasts in remote cities”), with Fox News Network, LLC v. TVEyes Inc.,
43 F. Supp. 3d 379, 393 (S.D.N.Y. 2014) (finding it was not infringement to create “a database of
everything that television channels broadcast, twenty-four hours a day, seven days a week”).
120. See generally FINN BRUNTON, SPAM: A SHADOW HISTORY OF THE INTERNET 143–61
(Geoffrey Bowker & Paul N. Edwards eds., 2013).
121. Compare SPAM POETRY INSTITUTE, http://www.spampoetry.org (collecting examples of
accidental poetry in computer-generated emails), with Bridy, supra note 110, ¶¶ 22–40
(describing examples of more deliberate computational creativity).
122. Cf. Rebecca Bolin, Opting Out of Spam: A Domain Level Do-Not-Spam Registry, 24 YALE L. &
POL’Y REV. 399, 413 (2006) (describing failure of Habeas, a business that embedded a
copyrighted haiku in legitimate emails and sued spammers who copied the haiku).
123. See generally NICHOLAS CARR, THE GLASS CAGE: AUTOMATION AND US (2014); JARON
LANIER, YOU ARE NOT A GADGET: A MANIFESTO (2010).
124. CARR, supra note 123, at 65–85 (discussing automation bias, automation complacency,
and degeneration). See generally NICHOLAS CARR, THE SHALLOWS: WHAT THE INTERNET IS DOING
TO OUR BRAINS (2011) (discussing troubling cognitive effects of extensive computer use).
profession.125 CAPTCHAs and Amazon Mechanical Turk ask humans to read
like robots: superficially, repetitively, and in microscopic bursts.126 None of
these possible futures of reading is particularly appealing from the standpoint
of romantic readership.127
Or look even further ahead. Copyright’s tolerant attitude towards robotic
reading has fueled a global effort to make communications robot-readable.128
All human expression is “grist for the data mill.”129 We are teaching robots to
125. See, e.g., NICHOLAS OSTLER, THE LAST LINGUA FRANCA: ENGLISH UNTIL THE RETURN OF
BABEL, at xix (2010) (predicting that machine translation may “remove[] the requirement for a
human intermediary to interpret or translate”); cf. 17 U.S.C. § 101 (2012) (defining “derivative
work” to include “a translation”). Compare MARTIN FORD, RISE OF THE ROBOTS: TECHNOLOGY AND
THE THREAT OF A JOBLESS FUTURE (2015), and Humans Need Not Apply, C.G.P. GREY (Aug. 13,
2014), http://www.cgpgrey.com/blog/humans-need-not-apply (predicting significant structural
unemployment from computerization), with ERIK BRYNJOLFSSON & ANDREW MCAFEE, THE
SECOND MACHINE AGE: WORK, PROGRESS, AND PROSPERITY IN A TIME OF BRILLIANT TECHNOLOGIES
(2014) (predicting disruptive shifts in employment but more optimistic overall). See generally
FRANK LEVY & RICHARD J. MURNANE, THE NEW DIVISION OF LABOR: HOW COMPUTERS ARE
CREATING THE NEXT JOB MARKET (2012) (discussing types of labor that are and are not vulnerable
to automation); CHRISTOPHER STEINER, AUTOMATE THIS: HOW ALGORITHMS CAME TO RULE OUR
WORLD (2012) (providing case studies of computerization); Carl Benedikt Frey & Michael A.
Osborne, The Future of Employment: How Susceptible Are Jobs to Computerisation?, OXFORD MARTIN
SCH. (Sept. 17, 2013), http://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_
of_Employment.pdf.
126. See Ayhan Aytes, Return of the Crowds: Mechanical Turk and Neoliberal States of Exception, in
DIGITAL LABOR: THE INTERNET AS PLAYGROUND AND FACTORY 79, 91 (Trebor Scholz ed., 2013)
(describing “exploitative aspects of cognitive labor arbitrage”); Miriam A. Cherry, Working for
(Virtually) Minimum Wage: Applying the Fair Labor Standards Act in Cyberspace, 60 ALA. L. REV. 1077,
1089–92 (2009) (discussing employment law issues). For an example of human “reading” in an
age of robots, consider the people who are paid to turn the pages for Google’s book-scanning
robots. The mechanical nature of their work excludes this “yellow badge class” from the perks
lavished on Google’s regular white-badge employees; they work in different buildings under top-
secret conditions. See Andrew Norman Wilson, Workers Leaving the Googleplex, ANDREW NORMAN
WILSON, http://www.andrewnormanwilson.com/WorkersGoogleplex.html (last visited Nov. 13,
2015). The only traces these readers leave are the occasional photographs of their fingers
flipping pages, snapshots of their invisible labor. See Kenneth Goldsmith, The Artful Accidents of
Google Books, NEW YORKER (Dec. 4, 2013), http://www.newyorker.com/books/page-turner/the-
artful-accidents-of-google-books.
127. See Brett M. Frischmann, Human-Focused Turing Tests: A Framework for Judging Nudging
and Techno-Social Engineering of Human Beings 1–4 (Cardozo Sch. of Law, Jacob Burns Inst. for
Advanced Legal Studies, Faculty Research Paper No. 441, 2014), http://ssrn.com/abstract=
2499760 (describing “systematic approach to identifying when technologies dehumanize” based
on identifying “contexts within which humans are or become indistinguishable from machines”).
128. See BRUNTON, supra note 120, at 110–13 (discussing robot-readable communications);
Matt Jones, The Robot-Readable World, BERG (Aug. 3, 2011), http://berglondon.com/blog/
2011/08/03/the-robot-readable-world.
129. Sag, supra note 22, at 1503. The “mill” metaphor has historical echoes. See Herman
Melville, The Paradise of Bachelors and the Tartarus of Maids, HARPER’S NEW MONTHLY MAG. 670,
675, 676 (1855) (short story describing the “rows of blank-looking girls” who work at a mill
characterized by “the metallic necessity, [and] the unbudging fatality,” where the girls make the
machinery into “[t]heir own executioners; themselves whetting the very swords that slay them”).
Melville’s mill was a paper-mill, producing “only blank paper; no printing of any sort”—the raw
write like us and read like us—sometimes for our own edification or
entertainment, sometimes as a side effect of the global struggle of algorithm
against algorithm for aggregated slivers of human attention. Already,
computers can compose music130 and write news stories.131 What if there
comes a day when they have no further need of our creative faculties at all,
when robots are superintelligent, surpassing human cognitive abilities as we
surpass banana slugs?132
Superintelligent computers would pose an existential risk to humanity;133
an entity with such immense cognitive resources would have the ability to kill
all humans.134 A superintelligent artificial intelligence wouldn’t even need to
bear humanity any ill will to wipe us out as a side effect of pursuing whatever
goals it had been programmed with.135 A traffic-optimizing artificial
intelligence could eliminate traffic jams forever by covering the entire surface
of the planet with highways.136 The great practical challenge of
superintelligence is to solve the secondary problem of how to control a
material for writing. Id. at 676. Melville’s maids bear more than a passing resemblance to Google’s
page-turners. See Wilson, supra note 126.
130. See, e.g., DAVID COPE, COMPUTER MODELS OF MUSICAL CREATIVITY (2005).
131. See Roger Yu, How Robots Will Write Earnings Stories for the AP, USA TODAY (June 30, 2014,
7:00 PM), http://www.usatoday.com/story/money/business/2014/06/30/ap-automated-stories/
11799077.
132. See, e.g., NICK BOSTROM, SUPERINTELLIGENCE: PATHS, DANGERS, STRATEGIES (2014)
(pessimistic); RAY KURZWEIL, THE SINGULARITY IS NEAR: WHEN HUMANS TRANSCEND BIOLOGY
(2005) (optimistic). The foundational article is Irving John Good, Speculations Concerning the First
Ultraintelligent Machine, 6 ADVANCES COMPUTERS 31 (1965).
133. See Bill Joy, Why the Future Doesn’t Need Us, WIRED (Apr. 1, 2000, 12:00 PM), http://
archive.wired.com/wired/archive/8.04/joy.html (“[W]e are on the cusp of the further
perfection of extreme evil . . . .”).
134. See, e.g., STUART ARMSTRONG, SMARTER THAN US: THE RISE OF MACHINE INTELLIGENCE
33 (2014) (“Imagine yourself as the AI . . . working so fast that you have a subjective year of
thought for every second in the outside world. How hard would it be to overcome the obstacles
that slow, dumb humans—who look like silly bears from your perspective—put in your way?”);
BOSTROM, supra note 132, at 115–26; David J. Chalmers, The Singularity: A Philosophical Analysis,
17 J. CONSCIOUSNESS STUD. 7 (2010); Eliezer Yudkowsky, Artificial Intelligence as a Positive and
Negative Factor in Global Risk, in GLOBAL CATASTROPHIC RISKS 308, 313 (Nick Bostrom & Milan M.
Ćirković eds., 2008).
135. See, e.g., BOSTROM, supra note 132, at 120, 123 (illustrating problem in terms of
“paperclip AI” that maximizes paperclip production “by converting first the Earth and then
increasingly large chunks of the observable universe into paperclips” and of “perverse
instantiation” in which an artificial intelligence with the goal of making humans smile
“[p]aralyze[s] human facial musculatures into constant beaming smiles”). Both examples suffer
from a failure to specify the superintelligent agent’s goals with sufficient precision—a surprisingly
hard task. See generally Stephen M. Omohundro, The Nature of Self-Improving Artificial Intelligence,
SELF AWARE SYSTEMS (2008), http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.137.
1199&rep=rep1&type=pdf (arguing that regardless of its goals an artificial intelligence will be
driven to acquire resources, use them efficiently, preserve its ability to achieve its goals, and take
unexpectedly creative routes to all of the above).
136. The hypothetical is loosely drawn from Lawrence B. Solum, Artificial Meaning, 89 WASH.
L. REV. 69 (2014).
superintelligent entity and direct it to goals humans can broadly agree on—
and to solve it before someone solves the primary problem of actually making
a superintelligent entity.137
So, we might ask, who decided it would be a good idea to give artificial
intelligence researchers free rein over humanity’s complete creative output?
It is easy to see how bulk nonexpressive copying promotes progress in artificial
intelligence.138 It is much harder to articulate any kind of connection between
such copying and the kind of research needed to guarantee that a
superintelligence respects human goals. So copyright policy here arguably
increases the chances that humanity will meet a sudden, violent, and
extremely unpleasant end.139
This suggestion is necessarily rather tentative because speculation about
superintelligence is highly speculative even by the usual standards of
speculation. Whether and how it will arise is subject to fundamental
uncertainties;140 if it does come, the future will be “essentially strange and
different.”141 But once we start talking about how copyright applies to the
137. See generally ARMSTRONG, supra note 134; BOSTROM, supra note 132; Eliezer Yudkowsky,
Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures, MACHINE
INTELLIGENCE RES. INST. (2001), http://intelligence.org/files/CFAI.pdf. Armstrong, Bostrom,
and Yudkowsky are deeply worried about the control problem and humanity’s prospects. In a
nutshell, there are good reasons to think that any technical limitations on a superintelligent
agent, such as keeping it in a “box” disconnected from the outside world, are likely to fail, given
its ability to plan and to conceal those plans from the people it interacts with. BOSTROM, supra
note 132, at 129–31. That means the only plausible way to harness it for human good is to give it
goals that are compatible with human conceptions of the good, so that its motivations coincide
with humanity’s. In other words, solve the fundamental philosophical problem of morality, do so
in a way that can be formalized well enough to be encoded in software, and do so in a way that
humans will broadly agree on. No biggie. See generally Eliezer Yudkowsky, Coherent Extrapolated
Volition, MACHINE INTELLIGENCE RES. INST. (2004), http://intelligence.org/files/CEV.pdf
(offering a meta-ethical approach to the problem). For more optimistic takes on the control
problem, see KURZWEIL, supra note 132; and JOHN O. MCGINNIS, ACCELERATING DEMOCRACY:
TRANSFORMING GOVERNANCE THROUGH TECHNOLOGY (2013). For an outsider’s survey of the
superintelligence debates, see generally JAMES BARRAT, OUR FINAL INVENTION: ARTIFICIAL
INTELLIGENCE AND THE END OF THE HUMAN ERA (2013).
138. See Alon Halevy et al., The Unreasonable Effectiveness of Data, IEEE INTELLIGENT SYSTEMS,
Mar./Apr. 2009, at 8–10 (discussing advantages of applying statistical methods to immense
natural-language datasets).
139. Not everyone thinks that the replacement of human beings by artificial beings would be
a bad thing. See, e.g., HANS MORAVEC, MIND CHILDREN 1–2 (1988) (“We humans will benefit for
a time from their labors, but sooner or later, like natural children, they will seek their own
fortunes while we, their aged parents, silently fade away . . . . When that happens, our DNA will
find itself out of a job, having lost the evolutionary race to a new kind of competition.”).
140. See BOSTROM, supra note 132, at 22–104.
141. Vernor Vinge, The Coming Technological Singularity: How to Survive in the Post-Human Era,
1993 VISION-21: INTERDISC. SCI. & ENGINEERING ERA CYBERSPACE 12 (“From the human point of
view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye,
an exponential runaway beyond any hope of control.”). Vinge’s term, “The Singularity,” is a
mathematical metaphor for a point of complete discontinuity at which the rate of change is
infinite and extrapolation becomes impossible. Vinge first suggested the concept a decade
actions of computer programs, we really ought to follow the arguments
however far they lead. By encouraging robotic reading, copyright law puts its
thumb on the scale on the side of a world where there are nothing but robotic
readers.142 It’s a big scale, and copyright’s thumb is small—but still. Which of
the four fair use factors includes “existential risk to humanity”?143
Perhaps the problem is romantic readership itself. Consider a slightly less
dramatic possible future, one in which artificial intelligence improves only to
the point that robotic readers have roughly human-level capabilities and
regularly pass the Turing Test.144 Romantic readership would ask whether
these robots have subjective experiences of works of authorship. But
subjective experiences are by definition subjective; they are empirically
inaccessible to anyone but the person experiencing them. The rest of us can
observe an entity’s behavior and ask it questions, but there is no test that can
reveal the presence or absence of the personal reaction of an individual
earlier. Vernor Vinge, First Word, OMNI, Jan. 1983, at 10. While it is possible that artificial
superintelligence could arrive without fundamental changes to how humans experience the
universe, see Eliezer Yudkowsky, Three Major Singularity Schools, MIRI (Sept. 30, 2007), https://
intelligence.org/2007/09/30/three-major-singularity-schools, it is highly unlikely without a
good solution to the control problem. For speculative fictional attempts to think through what
unthinkably rapid and complete change would look like from a human perspective, see HANNU
RAJANIEMI, THE QUANTUM THIEF (2011) (most of the billions of conscious entities in the solar
system are emulated human brains held as virtual slaves by superintelligent masters); CHARLES
STROSS, ACCELERANDO (2005) (superintelligent robots dismantling most of solar system to build
more computing devices while humans uploaded into computers gradually depart for other
stars); and VERNOR VINGE, MAROONED IN REALTIME (1986) (handful of survivors left on
depopulated Earth with absolutely no clue what has happened to everyone else). If you take the
Singularity seriously, humanism doesn’t have much of a future.
142. The CONTU felt that “any dehumanizing effects which might be attributable to the
increasing impact of computer users upon society are utterly unrelated to the mode of protection
employed to safeguard program language,” a conclusion that follows only if one believes that
copyright has no influence in encouraging, discouraging, or shaping the adoption of computing
technologies. See NAT’L COMM’N ON NEW TECH. USES OF COPYRIGHTED WORKS, supra note 81, at
26. But if software copyright law has no influence on computing, then the Commission’s
recommendations in favor of software copyright were pointless and everyone involved could have
saved a lot of work.
143. See generally RICHARD A. POSNER, CATASTROPHE: RISK AND RESPONSE (2004) (thinking
about how to avert global disaster); CASS R. SUNSTEIN, WORST-CASE SCENARIOS 1 (2007) (same);
see also GLOBAL CATASTROPHIC RISKS (Nick Bostrom & Milan Ćirković eds., 2008). For a case study
of the mismatch between long-term risks and the legal system’s short-term approach, see Eric E.
Johnson, The Black Hole Case: The Injunction Against the End of the World, 76 TENN. L. REV. 819 (2009).
144. See generally Alan M. Turing, Computing Machinery and Intelligence, 59 MIND 433 (1950).
In one of Turing’s examples, a subject is asked to write a sonnet, while in another the subject
discusses Shakespearean prosody, so the Turing Test puts both authorship and readership in
play. See id. at 434, 446.
reader upon a work, and there will never be one.145 So romantic readership
asks a question no one will ever be able to answer.146
From a more utilitarian perspective, there are principled reasons why
passing the Turing Test might be good enough for copyright law—even if you
believe that robots do not and never will have subjective experiences.147 As
noted above, Pamela Samuelson argued against granting copyrights to
computer programs on the basis that they do not need and cannot respond
to copyright’s incentives for creativity. In a world where robots regularly pass
the Turing Test, Samuelson’s incentives point can be turned around. Robots
that act indistinguishably from humans can also be expected to respond
indistinguishably to legal pressures.148 A robot that says
it cares about not being sanctioned for copying without permission and acts
accordingly is a robot that can effectively be deterred from copying.149 To the
extent that this deterrence advances or inhibits social policies humans care
145. This is a crucial point about the Turing Test: it makes the question of whether machines
think empirically tractable by rephrasing it in behavioral terms. Cf. F. Patrick Hubbard, “Do
Androids Dream?”: Personhood and Intelligent Artifacts, 83 TEMP. L. REV. 405, 421 (2010) (proposing
behavioral standard of self-consciousness for legal personhood because “[t]he behavioral
standard adopted herein sidesteps the issue . . . by focusing on behavior that indicates self-
consciousness, rather than on metaphysical questions concerning the nature of our self-
consciousness”).
146. For a rare article taking romantic authorship seriously but without human chauvinism,
see Dane E. Johnson, Statute of Anne–imals: Should Copyright Protect Sentient Nonhuman Creators?, 15
ANIMAL L. 15 (2008).
147. There are philosophical arguments directed to showing that no robot could ever be
conscious. See, e.g., John R. Searle, Minds, Brains, and Programs, 3 BEHAV. & BRAIN SCI. 417 (1980).
But there are counterarguments directed to showing that the entire question is a red herring. See
Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. REV. 1231, 1281–82
(1992) (discussing the extent to which these philosophical disagreements bear on “pragmatic”
questions of choosing the appropriate legal rule and arguing “that the lack of real intentionality
would not make much difference if it became useful for us to treat AIs as intentional systems in
our daily lives”). The problem with romantic authorship and romantic readership may be that
they are so disconnected from the pragmatic questions a copyright system actually faces that they
do not provide useful guidance, either in the mine-run of cases today or in the more speculative
cases of the future. Cf. Toni M. Massaro & Helen Norton, Siri-ously?, 110 NW. U. L. REV.
(forthcoming 2015).
148. See DANIEL C. DENNETT, THE INTENTIONAL STANCE 29 (1989) (discussing arguments for
attributing beliefs to entities on the basis of their behavior); see also Solum, supra note 147, at
1269 (“If the practical thing to do with an AI one encountered in ordinary life was to treat it as
an intentional system, then the contrary intuition generated by Searle’s Chinese Room would not
cut much legal ice.” (footnote omitted)). For an application of Dennett’s theory to artificial
entities in legal contexts, see generally CHOPRA & WHITE, supra note 98. For a particularly detailed
and sophisticated treatment of the consequences of applying the intentional stance to computer
systems, see Giovanni Sartor, Cognitive Automata and the Law, 17 ARTIFICIAL INTELLIGENCE & L.
253 (2006).
149. Similarly, treating robots as potential authors could increase the supply of works because
a robot that says it cares about being rewarded for its creativity and acts accordingly is a robot that
can be incentivized to create. An early version of this argument can be found in Karl F. Milde, Jr.,
Can a Computer Be an “Author” or an “Inventor”?, 51 J. PAT. OFF. SOC’Y 378, 390 (1969).
about, such as providing public access to works through appropriate
incentives for authors, copyright consequentialists should make the decision
on that basis. At the end of the day, romantic readership does not take robotic
readership seriously—but we should.
V. CONCLUSION
Robotic readers are here and walk among us. Indeed, if you count by the
total number of words read, robotic reading is now overwhelmingly more
common than human. Search engines crawl the Internet ceaselessly, reading
hundreds of millions of obscure pages from start to finish, again and again
and again. Quietly, almost invisibly, copyright law has accommodated itself to
these robotic readers. The rule is surprising. Robotic readers get a free pass
under the copyright laws. Copyright is for humans only.
My point is not that there is something wrong with this result; doctrinally,
it strikes me as impeccably correct in the cases that have come before the
courts. Rather, paying attention to robotic readership refocuses our attention
on the really fundamental questions: what is copyright, and what is it for? To
say that human readers count and robots don’t is to say something deep about
the nature of reading as a social practice, and about what we want robots—
and humans—to be.