The Protection frame is the second master frame Lakoff's analytical method identifies as dominating contemporary AI governance discourse. Within this frame, AI is not merely an extension of capability but an intrusion into domains previously reserved for human agency — thought, creativity, judgment, the capacity to understand and be understood. The values threatened by this intrusion — depth, craft, embodied expertise, the social bonds formed through shared struggle — are worth preserving, and preservation requires deliberate institutional resistance to the logic of acceleration. The frame entails that the appropriate posture is caution: precautionary regulation, investment in the human capacities AI threatens, cultural resistance to the assumption that faster is better. The frame generates specific policy positions that the Progress frame cannot easily produce.
Within the Protection frame, regulation should be strong because the technology's potential for harm is unprecedented and the market has no mechanism for valuing what is lost. Education should focus on the capacities AI cannot replicate — embodied skills, critical evaluation, the slow development of judgment through experience. Displacement is not temporary but structural, because the capabilities being automated are not routine tasks but cognitive work previously considered uniquely human. Transition costs are distributed unequally, falling on populations with the least capacity to bear them, and require institutional intervention rather than market-driven management. Each position follows from the frame's core entailment: that some goods are not aggregable and cannot be traded off against capability expansion, because their destruction is categorical rather than quantitative.
The frame draws cognitive authority from a different reading of historical evidence than the Progress frame. Previous technological revolutions did eventually produce broadly shared prosperity, but only after generations of struggle, institutional construction, and deliberate political intervention. That prosperity was not a spontaneous product of technological advancement but the outcome of labor movements, regulatory frameworks, educational investments, and cultural norms that channeled technology's gains toward broad benefit. Within the frame, those institutional achievements were won against fierce resistance from the beneficiaries of unconstrained technological deployment, and they must be won again for each subsequent transition. The work of protection is neither optional nor self-executing; it is the specific political labor through which capability expansion is converted into human flourishing.
The frame captures features of the AI transition that the Progress frame renders invisible: the erosion of embodied expertise, the hollowing out of professional identities, the attenuation of social bonds formed through shared cognitive labor, the transfer of judgment from humans to systems whose reasoning is opaque even to their builders. It takes these features seriously as categorical losses rather than as frictional costs in an otherwise beneficial trajectory. This seriousness is the frame's distinctive contribution. It generates the diagnostic work — Byung-Chul Han's critique of smoothness, the elegists of professional craft, the defenders of slow education — that the Progress frame dismisses as nostalgia.
The frame also carries its own blind spots. By foregrounding what is being lost, it can background what is being made possible: the expansion of capability to populations previously excluded, the democratization of production, the intellectual collaboration across domains whose practitioners previously could not communicate. Within the frame, these gains can feel like distractions from the protective work, or like compensations the frame must discount to maintain its moral clarity. The risk is that the frame, while capturing real losses, cannot accommodate real gains, producing a diagnosis that is acute but incomplete. The emerging Cultivation frame represents an attempt to hold both in view simultaneously: to preserve what matters while cultivating what the technology makes possible.
The Protection frame for AI draws on older traditions of technological critique reaching back to Lewis Mumford, Jacques Ellul, and the Frankfurt School, updated through the work of contemporary critics including Byung-Chul Han, Evgeny Morozov, and the AI ethics community that emerged in the 2010s. It intensified in the early 2020s, as contemporary language models crossed capability thresholds that revealed the specific nature of what was being automated.
AI as intrusion into human domains. The frame positions AI as entering territories — thought, judgment, creativity — previously reserved for embodied human agency.
Categorical losses. Some goods are not aggregable; their destruction cannot be offset by capability expansion in other domains.
Strong regulation as default. The market cannot value what is lost; institutional intervention is required to preserve what the frame identifies as worth preserving.
Historical contingency of progress. Broadly shared prosperity from previous revolutions was won through institutional labor, not produced automatically by technology.
Distributional seriousness. Transition costs fall unequally; the frame takes this seriously rather than averaging it into aggregate outcomes.