In June 2024, former OpenAI employees including Daniel Kokotajlo and William Saunders, along with several anonymous colleagues, published an open letter calling for a 'right to warn' about advanced AI. They made three arguments: AI companies have strong financial incentives to avoid effective oversight; existing whistleblower protections are insufficient, because they focus on illegal activity while most AI risks are not yet regulated; and non-disparagement agreements and equity-based retaliation mechanisms actively suppress the flow of safety-relevant information from inside the companies to the public. Lawrence Lessig agreed to represent the signatories pro bono. His argument: 'Employees are an important line of safety defense, and if they can't speak freely without retribution, that channel's going to be shut down.' The proposed right to warn is a legal intervention designed to support a professional norm, and thus a concrete example of the multi-modal governance Lessig's framework prescribes.
The right to warn proposal emerges from the structural problem Lessig has identified across his career: ethical norms, without institutional support, collapse under market pressure. Employees at AI labs routinely sign agreements that penalize disclosure — sometimes through loss of vested equity worth millions of dollars, sometimes through non-disparagement clauses, sometimes through the implicit understanding that speaking publicly ends one's career in the industry.
The proposed protections include four elements. First, companies would not enforce agreements that prohibit disparagement or risk-related criticism, nor retaliate against employees for such criticism through loss of equity or other means. Second, companies would facilitate anonymous reporting channels to boards, regulators, and appropriate independent organizations. Third, companies would support a culture of open criticism, subject to reasonable protection of trade secrets. Fourth, companies would not retaliate against employees who, after other channels have failed, share risk-related confidential information publicly.
The proposal sits within the broader architecture of multi-modal governance. The legal component (enforceable right against retaliation) supports a norm (the professional ethic of responsible development) against pressure from markets (the commercial incentive to suppress bad news) and architecture (the contractual structures currently silencing dissent). Each modality reinforces the others.
Lessig's involvement continues his long-standing commitment to whistleblower protection. He has supported Daniel Ellsberg, Edward Snowden, and others whose disclosures served the public interest at significant personal cost. The AI application extends the framework to a new domain where the information asymmetry between industry insiders and the public is particularly severe and where the stakes of safety failures are potentially catastrophic.
The open letter 'A Right to Warn about Advanced Artificial Intelligence' was published on June 4, 2024, signed by former and current employees of OpenAI, Google DeepMind, and Anthropic, with endorsements from Yoshua Bengio, Geoffrey Hinton, and Stuart Russell. Lessig's public support followed immediately, including his offer of pro bono legal representation. The proposal has since informed multiple legislative efforts, including the whistleblower provisions of California's SB 1047 (which Lessig also endorsed) and subsequent federal proposals.
Norms require structural support. The professional ethic of responsible AI development cannot hold against market pressure without legal protection.
Existing whistleblower law is inadequate. Standard protections focus on illegal activity; most AI risks are legal but potentially catastrophic.
The contractual architecture is suppressive. Non-disparagement agreements and equity clawbacks are architectural regulation enforcing silence.
Four specific protections. Non-enforcement of suppressive agreements; anonymous reporting channels; cultural support for criticism; protection of last-resort public disclosure.
Multi-modal governance in practice. Law supporting norms against pressure from markets and architecture — the Lessig framework applied to a specific intervention.