In his April 2018 Washington Post essay, Giddens proposed that AI governance required a charter comparable in ambition to the Magna Carta: a foundational document constraining the sovereigns of the digital era, which he identified as the large technology companies. The proposal combined three principles: that AI should be developed for the common good, that it should operate on principles of intelligibility and fairness, and that it should never be given autonomous power to hurt, destroy, or deceive. The proposal has since been largely overtaken by the pace of technological development, a fact that confirms rather than refutes Giddens's underlying structural analysis of temporal mismatch.
The essay appeared while Giddens was serving on the House of Lords Select Committee on Artificial Intelligence, which had interviewed 60 experts and 280 witnesses over nine months and would publish its report, AI in the UK: Ready, Willing and Able?, in April 2018. The essay distilled the committee's work into principles framed for a broader public.
The Magna Carta analogy was deliberately provocative. Giddens's claim that 'today, the new kings are big tech companies' recast the governance problem in civilizational rather than regulatory terms. The original Magna Carta was not a piece of policy but a foundational constraint on sovereign power, agreed under conditions in which the sovereign was compelled to accept constraint. Giddens's proposal implied that AI governance required a similar structural change, not merely regulatory adjustment.
The three principles — common good, intelligibility and fairness, no autonomous harm — map onto Giddens's own theoretical framework. Common good addresses distribution under conditions of democratization. Intelligibility addresses the access-point problem central to active trust in abstract systems. The prohibition on autonomous harm addresses the boundary between human agency and machine autonomy.
The proposal's subsequent fate illustrates the structural problem it was designed to address. The tools that crossed the threshold documented in The Orange Pill did not exist in their mature form when the essay was written. The governance framework Giddens proposed in 2018 was already being outpaced by technology by 2023, and had been substantially outpaced by 2026. This is not a failure of the analysis; it is confirmation of the risk-society prediction that institutional response routinely lags manufactured risk.
The proposal was articulated in Giddens's Washington Post essay of April 2018, published contemporaneously with the House of Lords Select Committee report on AI. It drew on Giddens's decades of work on governance in conditions of manufactured uncertainty and on his collaboration with fellow committee members including Martin Rees and Patrick Vallance.
Civilizational scale. The proposal framed AI governance as a civilizational rather than merely regulatory problem, requiring foundational constraints rather than technical policy adjustments.
Three principles. Common good, intelligibility and fairness, prohibition on autonomous harm — the substantive content of the proposed charter.
Big tech as new sovereigns. The diagnosis that large technology companies wield a form of sovereign power that individual nation-states cannot adequately constrain.
Temporal vindication. The proposal's subsequent outpacing by technology confirms rather than refutes the underlying risk-society analysis.
Global coordination requirement. The charter-level framing implied the necessity of international coordination that existing institutional forms have not produced.
Whether the Magna Carta analogy is analytically illuminating or merely rhetorical remains debated. Critics note that the original Magna Carta required the military defeat of the sovereign, a condition with no obvious counterpart in the case of big tech; proponents argue that the analogy nonetheless captures the civilizational scale of the governance challenge.