The latest dispute between the New York Times and OpenAI reinforces a distinction in understanding artificial intelligence (AI) that we have previously examined: autonomy versus automatons.
The Gray Lady turned heads late this past year when it filed suit against OpenAI, alleging that the artificial intelligence giant's ChatGPT software infringed its copyrights. Broadly speaking, the Times alleged that the famous chatbot gobbled up enormous portions of the newspaper's text and regurgitated it.
Earlier this month, OpenAI struck back, arguing that the Times’ suit lacked merit and that the Gray Lady wasn’t “telling the full story.” So who’s right?
To help understand the dispute, the autonomy-automaton dichotomy goes a long way. Recall that many AI enthusiasts contend that the new technology has achieved, or is approaching, independent activity, becoming what I previously labeled "a genuinely autonomous entity capable (now or soon) of cognition." Into this school of thought fall many if not most OpenAI programmers and executives, techno-optimists like Marc Andreessen, and inventors and advocates for true AI autonomy like Stephen Thaler.
Arrayed against these AI exponents are the automaton-ers, a doughty bunch of computer scientists, intellectuals, and corporate types who consider artificial intelligence a mere reflection of its creators, or what I’ve called “a representation or avatar of its programmers.”
As we’ve seen, this distinction permeates the legal and policy debates over whether robots can be considered inventors for the purposes of awarding patents, whether they possess enough independence to warrant copyright protection as creators, and what rights and responsibilities should be attributed to them.
The same dichotomy applies to the Times–OpenAI battle. In its complaint, the newspaper alleged that ChatGPT and other generative AI products “were built by copying and using millions of The Times’s copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides, and more.” The complaint also claimed that OpenAI’s software “can generate output that recites Times content verbatim, closely summarizes it, and mimics its expressive style.” In short, the Times contended that ChatGPT and its ilk, far from creating works independently, copy, mimic, and regurgitate content verbatim—like automatons.
Finally, the Gray Lady argued in its complaint that OpenAI cannot shelter behind the fair use doctrine—a defense available to alleged copyright infringers whose use copies only limited portions of a work, does not harm the market for it, or transforms it into something new—because “there is nothing ‘transformative’ about” its use of the Times’s content. Denying that AI can genuinely create something new is a hallmark of the automaton mindset.
In contrast, in strenuously denying the NYT’s allegations, OpenAI expressly embraced autonomous themes. “Just as humans obtain a broad education to learn how to solve new problems,” the company said in its statement, “we want our AI models to observe the range of the world’s information, including from every language, culture, and industry.” Robots, like people, perceive and analyze data in order to resolve novel challenges independently.
In addition, OpenAI contended that “training AI models using publicly available internet materials is fair use, as supported by long-standing and widely accepted precedents.” From this perspective, exposing ChatGPT to a wide variety of publicly available content, far from enabling the chatbot to slavishly copy it, represents a step in training AI so that it can generate something new.
Finally, the AI giant downplayed the role of mimicry and verbatim copying trumpeted by the Times, asserting that “‘regurgitation’ is a rare bug that we are working to drive to zero” and characterizing “memorization [as] a rare failure of the learning process that we are continually making progress on.” In other words, even when acknowledging that, in certain limited circumstances, the Times may be correct, OpenAI reinforced the notion that AIs, like humans, learn and fail along the way. And to wrap it all in a bow, the company emphasized the “transformative potential of AI.”
Resolution of the battle between the automaton perspective exhibited by the Times and the autonomy paradigm exemplified by OpenAI will go a long way toward determining who prevails in the parties’ legal fight.