The House of Lords Select Committee on Artificial Intelligence was established in June 2017 and reported in April 2018, after nine months of work that included interviews with sixty expert witnesses and evidence from some 280 contributors. Baron Giddens served as a member, bringing his decades of sociological work on risk, trust, and institutional reflexivity to bear on a technology whose most consequential developments were still seven years in the future. The committee's report, AI in the UK: Ready, Willing and Able?, remains among the most thoughtful early institutional responses to the AI governance challenge, and its subsequent obsolescence illustrates rather than refutes the structural analysis Giddens himself had been developing for decades.
The committee was chaired by Lord Clement-Jones and included members with backgrounds in science, policy, law, and ethics. Its mandate covered AI's economic, social, ethical, and political implications. The breadth of the mandate was both its strength — producing a comprehensive rather than narrowly technical report — and its difficulty, requiring the committee to form judgments across domains in which its members were not individually specialist.
Giddens's stated aim on the committee was to 'distinguish, as much as possible, the hype and more remote, apocalyptic visions of digital transformations from real dangers.' The phrasing revealed the characteristic sociological stance: skeptical of both utopian and catastrophist framings, committed to structural analysis grounded in evidence. The committee's work reflected this stance throughout.
The report's recommendations covered education, workforce adaptation, data governance, AI ethics, and sector-specific applications. It proposed an AI Code combining principles of common good, intelligibility and fairness, and a prohibition on vesting in AI the autonomous power to harm — principles Giddens subsequently elaborated in his Magna Carta for the Digital Age essay. The report was widely praised at the time of publication for its thoughtfulness and its refusal to oversimplify.
Many of the report's specific recommendations, drafted before large language models demonstrated the capabilities that crossed the threshold in 2025, were soon rendered obsolete — an outcome that illustrates the temporal mismatch Giddens's own framework had long diagnosed. The most careful institutional response British governance could produce in 2018 was already being outpaced by the technology by 2023. This is not a criticism of the committee but a structural observation about the relationship between institutional reflexivity and technological acceleration.
The committee was established by the House of Lords in June 2017 with a mandate to investigate the economic, social, ethical, and political implications of advances in artificial intelligence. Its membership included Baron Giddens alongside other life peers with relevant expertise.
Early institutional response. Among the first parliamentary committees globally to investigate AI governance comprehensively.
Hype-risk distinction. Committee work explicitly aimed at separating apocalyptic and utopian framings from structural analysis of real risks.
AI Code proposal. Recommendation of principles combining common good, intelligibility and fairness, and a prohibition on vesting in AI the autonomous power to harm.
Temporal obsolescence. Many specific recommendations were overtaken by subsequent technological development, illustrating Giddens's structural analysis of institutional lag.
Giddens's theoretical application. Service on the committee saw Giddens apply his decades of sociological work on risk, trust, and institutional reflexivity to a concrete governance challenge.
Whether committee-based governance mechanisms can ever match AI's pace of development, or whether fundamentally new institutional forms are required, is the central practical question raised by the committee's trajectory.