Class analysis of technology insists that understanding any technological transformation requires attending to the class structure in which it operates — who develops it, who deploys it, who captures its gains, who absorbs its costs, and how these distributions are produced and maintained. The framework treats technology neither as neutral tool (available to all, blamed for nothing) nor as autonomous agent (driving history independently of human choices) but as social product whose trajectory is shaped by the interests of the actors with power over its development and deployment. Ehrenreich's career was, in one reading, a sustained application of this framework to successive technological and economic transitions. The AI discourse has avoided the framework almost entirely, preferring the autonomous-technology narrative that lets the actors capturing the gains disappear from analysis.
There is a parallel reading that begins not with class relations but with the physical requirements of computation itself. The development and deployment of AI systems require massive data centers, rare earth minerals, electrical grids, and cooling infrastructure — material dependencies that create their own logic of concentration independent of class dynamics. This reading suggests that even if we achieved perfect worker ownership of AI firms tomorrow, the substrate requirements would still drive toward centralization, environmental extraction, and geographic inequality. The materiality of computation imposes constraints that no amount of class consciousness can overcome.
Read through this lens, the class-analytical framework mistakes symptom for cause. The concentration of AI development in well-capitalized firms isn't primarily about class power but about the irreducible physics of training large models — the need for thousands of GPUs running in parallel, the expertise required to operate them, the network effects that make larger models more valuable. Similarly, the externalization of costs to invisible labor isn't just capitalist exploitation but reflects the fundamental brittleness of pattern-matching systems that require constant human correction. The framework's focus on "who decides" obscures the more troubling possibility that the decisions are already made by the material requirements of the technology itself. We can redistribute the gains from AI, but we cannot redistribute the rare earth mines, the water tables depleted by cooling systems, or the atmospheric capacity to absorb the carbon emissions. The class analysis assumes technology is plastic to social will, but computation's material basis suggests otherwise — some trajectories are locked in by physics, not politics.
The framework's core move is to refuse the standard question — is this technology good or bad? — and replace it with the class-analytical questions: Good for whom? Bad for whom? Who is making those distributions, and who could make them differently? These questions do not dismiss technological effects as epiphenomenal. They insist that technological effects are always mediated by the class structure within which the technology operates, and that the mediation is not a bug but the primary mechanism.
Applied to the AI transition, the framework produces the diagnosis this book develops. The productivity gains are real. The cost externalization is real. Both realities are shaped by the class structure of the technology economy — the concentration of AI development in a small number of well-capitalized firms, the absence of meaningful worker power in those firms, the structural pressures that convert productivity gains into headcount reduction rather than expanded ambition, the global distribution of the invisible labor on which AI systems rest.
The framework's absence from the AI discourse is conspicuous. The dominant analytical categories — capability expansion, alignment, safety, democratization — treat AI as autonomous force whose effects unfold according to technical logic rather than social choice. This framing serves the actors developing and deploying AI, because it removes them from the analysis. The class-analytical framing restores them, asking what interests shape AI's design and deployment and whose voices are excluded from decisions about its trajectory.
Ehrenreich would have argued that class analysis is not an alternative to technical analysis but its necessary complement. Understanding what AI can do requires technical knowledge. Understanding what AI will do — what deployments will actually emerge, what effects will actually propagate — requires understanding the social and economic structures within which technical capability becomes social outcome. The AI discourse has the technical analysis. It needs the class analysis. And the class analysis requires the instruments Ehrenreich spent fifty years sharpening.
The framework has deep roots in Marxist analysis of the means of production (Marx's Capital, Harry Braverman's Labor and Monopoly Capital). Ehrenreich's specific version synthesized Marxist class analysis with feminist analysis of reproductive labor, the sociology of professions, and immersive journalism.
Its most systematic contemporary formulations include Daron Acemoglu and Simon Johnson's Power and Progress (2023), which applies the framework to the full sweep of technological history from agriculture to AI, and the extensive literature on digital labor and platform capitalism developed by scholars including Mary Gray, Siddharth Suri, Trebor Scholz, and Nick Srnicek.
Technology as social product. Technologies are shaped by, and shape, the class structures within which they develop — neither neutral tools nor autonomous agents.
Distributional questions primary. Who benefits, who pays, and how those distributions are produced are analytically prior to questions about technical capability or social effect.
Mediation as mechanism. Technological effects are always mediated by class structure — the mediation is not a distortion of technical logic but the primary mechanism by which technology produces social outcomes.
Autonomous-technology narrative as ideology. The framing of technology as independent force that humans must adapt to serves the interests of actors capturing the gains by removing them from analysis.
Technical analysis as complement. Class analysis does not replace technical understanding but completes it — understanding what AI will do requires understanding both capability and structure.
The tension between class analysis and material constraints dissolves when we recognize that both operate simultaneously but at different scales. For immediate questions about AI deployment — which jobs get automated first, who captures productivity gains, how training data is sourced — the class-analytical framework carries most of the explanatory weight. These are fundamentally decisions about resource allocation within existing technical constraints, and here Ehrenreich's insistence on asking "good for whom?" provides essential clarity. The autonomous-technology narrative really does serve to obscure the human choices being made daily in Silicon Valley boardrooms.
For questions about AI's long-term trajectory and fundamental limits, the material substrate view gains force. The physics of computation does impose hard constraints — energy requirements scale super-linearly with model size, rare earth supplies are genuinely limited, certain architectures require centralization. But even here, class analysis matters for how we respond to these constraints. The choice between accepting environmental costs and limiting AI capabilities is ultimately political, shaped by who holds power over the decision.
The synthetic frame recognizes technology as doubly constrained — by both social structure and physical reality. The class-analytical framework correctly identifies that within physical constraints, enormous latitude exists for different social arrangements, and these arrangements are currently determined by existing power structures rather than democratic choice. The material substrate view correctly identifies that some constraints cannot be overcome by reorganizing ownership or power. The complete analysis requires mapping both boundaries: what physics makes impossible and what politics makes seem inevitable. This dual mapping reveals the actual space for human agency — smaller than pure class analysis suggests, larger than material determinism implies.