The letter was organized by the Authors Guild, the largest professional organization for writers in the United States, and addressed to the chief executives of OpenAI, Alphabet, Meta, Stability AI, IBM, and Microsoft. Its demands were minimal: consent, credit, and compensation — the same three terms that have governed the use of creative work in the publishing economy for generations. The letter was the mildest form of collective action available, the form the framework knitters exhausted before turning to direct action. Its signatories were not breaking anything. They were asking, through the most polite available channel, for the recognition of a principle so basic that its contested status reveals the depth of the institutional failure: the principle that the use of a person's labor to enrich a corporation requires the person's consent.
The letter followed the discovery that the Books3 dataset — a corpus of approximately 196,000 books derived from the shadow library Bibliotik — had been used to train several major language models, including Meta's LLaMA. The dataset had been incorporated into the Pile, a widely used training corpus, without author permission. Individual authors searching tools like The Atlantic's Books3 search discovered that their work had been used without their knowledge.
The response from AI companies was minimal. None accepted the letter's three demands as a framework for negotiation. OpenAI continued to argue that its training practices constituted fair use. Meta argued that Books3 was one small part of a larger training corpus. No company offered the consent, credit, or compensation the letter had requested.
The letter's effectiveness must be measured against Thompson's framework for what improvised collective action can and cannot achieve. It compelled attention — media coverage, congressional interest, industry panels. It did not compel structural change. No legislation followed. No industry standard emerged. No mechanism for the authors' ongoing representation in AI governance decisions was created.
Two months later, the Authors Guild escalated, filing a class-action lawsuit against OpenAI in September 2023 with named plaintiffs including John Grisham, George R.R. Martin, and Jodi Picoult. The lawsuit joins Andersen v. Stability AI as a legal test of whether existing copyright doctrine can accommodate the unprecedented scale of AI training data acquisition.
The letter was published July 18, 2023, on the Authors Guild website and in multiple literary publications. Signatures continued to accumulate in the following weeks, eventually exceeding 10,000.
Minimum demands. Consent, credit, compensation — the three most basic protections, not radical requirements.
Petition as mild action. The letter was the most polite form of collective response available, preceding any direct or legal action.
Institutional failure exposed. That such minimal demands were contested reveals how completely formal governance has failed to address AI training data acquisition.
Escalation pathway. The letter was followed by class-action litigation, illustrating the predictable progression from petition to more forceful action when demands are ignored.