In L'Immatériel, Gorz distinguished knowledge from intelligence as two fundamentally different modes of human cognitive engagement. Knowledge is formal, codifiable, transferable — the kind of information that can be written down, stored, and transmitted without loss across time and space. Intelligence incorporates the affective, the relational, the embodied — the full range of human cognitive and emotional capacities developed through lived experience, which cannot be extracted from the person who possesses them. The distinction has become indispensable for analyzing what contemporary AI systems do and do not do, because it locates the difference with a precision that arguments about consciousness lack.
There is a parallel reading that begins not from philosophical distinctions but from the material substrate of AI development: the massive extraction of human intelligence-as-data that makes these systems possible. What Gorz calls 'intelligence' — the embodied, relational, affective dimension — is not actually absent from large language models. It has been extracted, aggregated, and reconstituted from millions of hours of human cognitive labor embedded in text. Every Reddit conversation about heartbreak, every Stack Overflow explanation of why code should be structured a certain way, every carefully crafted email navigating workplace politics — these contain traces of precisely the relational, affective intelligence Gorz claims cannot be codified.
The real dynamic is not that AI lacks intelligence while humans retain it, but that AI systems represent the crystallization of collective human intelligence stripped of its attribution and compensation. The engineer whose judgment about 'what users need' seems irreplaceable today is actually training her replacement every time she explains her reasoning to an AI assistant, every time she corrects its output, every time her interactions become part of the training data for the next model. The preservation of 'intelligence' as a separate human domain is not a philosophical given but a temporary market inefficiency — one that the same forces driving the commodification of knowledge will eventually close. What appears as an irreducible human capacity is simply the not-yet-extracted remainder. The conditions for developing intelligence aren't being protected; they're being systematically mined. The 'slow relationships' and 'unproductive wandering' that Gorz valorizes as sources of intelligence are themselves being datafied through social media, wearables, and ambient computing, transformed into the raw material for the next generation of systems that will claim to understand human needs better than humans themselves.
The large language model is the most powerful knowledge-processing system ever constructed. It retrieves, organizes, synthesizes, and generates formal knowledge with a speed and comprehensiveness no human can match. But it does not possess intelligence in Gorz's sense. It lacks the embodied, affective, relational understanding that emerges from the experience of being a living creature among other living creatures. The Orange Pill's argument about consciousness gestures toward this distinction — consciousness as 'the thing that wonders, the thing that asks why' — but Gorz's formulation is more analytically precise because it does not depend on contested claims about subjective experience.
The distinction has immediate implications for the AI transition. If the value of human contribution shifts, as The Orange Pill argues, from execution to judgment, then in Gorz's terms it shifts from knowledge to intelligence — from the formal capacities AI can replicate to the relational capacities it cannot. The engineer whose value lay in her knowledge of programming languages finds that knowledge commodified. The engineer whose value lies in her intelligence — her understanding of what users need, her judgment about what is worth building — finds that intelligence more valuable than ever.
But here is the difficulty Gorz would press: intelligence, unlike knowledge, cannot be acquired on demand. It is developed over years of engaged, embodied, relational experience — the kind of experience that the AI tools, by removing the friction of production, may be systematically undermining. The engineer who spends every available moment building with AI tools is developing her productive capability at the expense of the relational, embodied, affective intelligence that gives her production direction and purpose.
The intelligence that remains valuable in the AI age is precisely the kind of cognitive capacity that is hardest to cultivate under AI-saturated conditions. This is not a paradox that can be resolved by better time management. It requires structural protection of the conditions under which intelligence develops: time for embodied experience, for slow relationships, for the unproductive wandering from which genuine understanding emerges.
Gorz developed the distinction in L'Immatériel (2003) through engagement with the autonomist tradition and with critiques of cognitive capitalism. The formulation draws on phenomenological philosophy — particularly Merleau-Ponty on embodied cognition — while giving these influences a specifically economic inflection.
Two modes of cognition. Knowledge is extractable information; intelligence is embodied relational capacity.
AI replicates knowledge. Large language models are the most powerful knowledge-processing systems ever built.
AI cannot replicate intelligence. What survives extraction is knowledge; what is lost in extraction is intelligence itself.
Intelligence is what remains valuable. As knowledge is commodified, the premium shifts to the irreducible capacities.
Conditions for intelligence are threatened. The very tools that elevate intelligence's value may undermine the conditions of its development.
Some cognitive scientists argue that the distinction between knowledge and intelligence is less sharp than Gorz suggests, and that embodied intelligence itself may eventually be replicable by AI systems trained on appropriate data. Gorzian defenders respond that even if this becomes possible, the structural question of who controls such systems and for what purposes remains.
The fundamental tension here concerns what exactly is being extracted and commodified in AI systems. If we ask 'Can AI replicate formal knowledge?' both views agree completely (100% convergence): yes, and with unprecedented power. If we ask 'Does AI currently possess embodied, relational intelligence?' again there's agreement (100%): no, it processes patterns without lived experience. But if we ask 'What is AI actually extracting from human output?' the perspectives diverge sharply: Gorz's frame holds that only knowledge can be extracted, while the contrarian reading holds that traces of intelligence are being continuously harvested through our interactions, and here the contrarian view carries the greater weight (roughly 80%).
The question of replaceability reveals another layer of complexity. When we examine immediate technical capabilities, Gorz's distinction holds firm (90% Gorz): the engineer's embodied judgment about user needs genuinely cannot be replicated by pattern matching alone. But when we consider the longer trajectory of extraction and aggregation, the contrarian view gains force (70% contrarian): each interaction with AI systems provides training data that incrementally captures aspects of that judgment. The timeline matters enormously here — what seems irreducible today may be approximated tomorrow through sufficient data accumulation.
Perhaps the synthesis requires recognizing that intelligence exists on a spectrum of extractability rather than as a binary. Some forms — deep cultural intuition, aesthetic judgment emerging from decades of experience — resist extraction even with massive data (85% Gorz). Others — emotional patterns, social navigation, even certain forms of creativity — are already being partially extracted and reconstituted (65% contrarian). The crucial question becomes not whether intelligence can be extracted, but which forms of intelligence we choose to protect from extraction, and whether we retain the collective power to enforce those protections. The real frame may be political rather than philosophical: not what cannot be commodified, but what we refuse to let be commodified.