In The Dispossessed, Anarres's second generation inherits anarchism as environment rather than achievement. They never experienced hierarchy, so they cannot recognize when informal hierarchies form. They never chose solidarity, so they cannot see when solidarity calcifies into conformity. The first generation's revolution (conscious, costly, chosen against an alternative) becomes the second generation's default (unconscious, inherited, experienced as reality). Without contrast, walls are invisible. Applied to AI, the second-generation problem belongs to students and early-career practitioners who have always had AI assistance. The friction that built their predecessors' capacities (struggling with blank pages, debugging manually, reading cases closely) is not something they were liberated from — it is something they never encountered. The developmental environment has changed. Capacities requiring that environment do not form. The practitioners are not deficient; the conditions are. And the deficit is invisible to those experiencing it, because they have no basis for comparison. This is the AI transition's deepest risk: not displacement of the current generation but the failure of the next generation to develop capacities the culture assumes are human universals but are actually developmental achievements requiring specific environmental conditions.
The second-generation problem operates through a three-stage mechanism. First, the liberation: AI removes the struggle with implementation, with syntax, with the mechanical translation of intention into executable code. The removal is genuine. The freed developers can focus on higher-order work (architecture, product decisions, user needs) without the implementation bottleneck. Second, the inheritance: the next cohort enters a world where the struggle never existed. AI has always been available. The blank prompt is their baseline, not a liberation. They never debugged manually for three years; they never built the architectural intuition that debugging deposits. Third, the invisibility: the missing capacity goes undetected, because the new cohort is compared against the previous one (who had it) within a framework that measures outputs (code quality, shipping speed), not capacities (embodied judgment, intuitive architectural sense). The outputs are equivalent or better. The capacity is absent. The absence is invisible.
Le Guin's Anarres faced this with its language. Pravic was designed to prevent possessive thinking (no word for "mine"), and succeeded — the second generation genuinely does not think in terms of property. But Pravic also prevents naming the informal power that Sabul accumulates, the influence that senior figures wield, the hierarchy that emerges in new forms. The tool that liberated (linguistic reform eliminating possessive categories) became the tool that blinded (no vocabulary for the power operating beneath the egalitarian surface). The second generation cannot see what the language does not name, and the language was designed by the first generation to eliminate the old hierarchy, not to detect the new one. The invisibility is built into the liberation's success. The AI parallel: the metrics were designed to track productivity (lines per day, features per sprint), not capacity development (judgment depth, architectural intuition, the ability to feel when something is wrong). The liberation (faster outputs, expanded individual capability) is measured. The cost (missing developmental friction, atrophied capacities) is not. The second generation will lack what the metrics do not track, and the lack will be invisible until a crisis requires the capacity, and the capacity is not there.
The Anarres solution, such as it is, is Shevek: the person who carries first-generation consciousness into the second generation, who remembers (through his partner Takver, through his reading of the founders' writings) that anarchism is a choice not a given, and whose journey to Urras provides the contrast that makes Anarres's walls visible. The AI equivalent would be: practitioners who remember what building felt like before AI, who can code manually if necessary, who have the embodied knowledge that years of friction deposited, and who remain in the profession long enough to transmit not the code (AI does that) but the practice of attending to code in the specific way that builds judgment. These practitioners are a dying generation. They will retire. The question is whether they transmit the practice before they go, or whether they transmit only the outputs, and the outputs are not enough.
The developmental solution is not removing AI tools (that is Luddism, and the Le Guin volume is not Luddite). The solution is deliberate construction of friction-rich practice environments within the AI-augmented workflow: periods where the junior practitioner must debug manually, must wrestle with the blank page, must read cases closely, because the developmental necessity is recognized as such and protected. This is the Bruner volume's scaffolding applied as institutional mandate rather than individual choice: the AI is available, but the learner is required to attempt the problem unaided first, to struggle at the edge of her capability, to experience the productive failure that builds the capacity the AI would have bypassed. The practice becomes more difficult to design (because it resists the default), more expensive to maintain (because it slows the immediate output), and more necessary (because it is the only mechanism that produces the second generation with the capacity to see its own walls).
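The institutional mandate described above (the AI is available, but the learner must attempt the problem unaided first) can be pictured as a gating policy. The sketch below is purely illustrative; the class name `FrictionGate`, the attempt threshold, and the problem identifiers are hypothetical, not drawn from the source.

```python
from dataclasses import dataclass, field

@dataclass
class FrictionGate:
    """Hypothetical policy sketch: AI assistance is withheld until the
    learner has made a minimum number of unaided attempts at a problem."""
    required_unaided_attempts: int = 2
    attempts: dict = field(default_factory=dict)  # problem_id -> unaided attempt count

    def record_attempt(self, problem_id: str) -> None:
        # Every manual attempt, successful or not, counts toward unlocking help;
        # the struggle itself is the developmental point.
        self.attempts[problem_id] = self.attempts.get(problem_id, 0) + 1

    def assistance_allowed(self, problem_id: str) -> bool:
        # Help unlocks only after the required friction has actually occurred.
        return self.attempts.get(problem_id, 0) >= self.required_unaided_attempts

gate = FrictionGate(required_unaided_attempts=2)
gate.record_attempt("parser-bug")
assert not gate.assistance_allowed("parser-bug")  # one attempt: still locked
gate.record_attempt("parser-bug")
assert gate.assistance_allowed("parser-bug")      # friction requirement met
```

The design choice worth noting is that the gate counts attempts, not successes: productive failure, not correct output, is what the policy protects.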
The concept synthesizes Le Guin's second-generation Anarresti (who cannot see their revolution's calcification) with the developmental psychology of scaffolding withdrawal. Vygotsky's zone of proximal development assumes the scaffold is temporary — support provided, then gradually removed, until the learner can perform independently. But if the scaffold (AI assistance) is never removed, if it becomes permanent infrastructure, the learner never performs independently, never builds the capacity that independent performance would have deposited. The second generation inherits the scaffold as architecture. They do not experience it as support (something provided temporarily to enable growth) but as reality (the structure of the world). They build on it without questioning it. And if the scaffold is later removed (the AI is unavailable, the model's capabilities regress, the service is interrupted), the practitioner discovers she cannot stand without it.
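The Vygotskian assumption that the scaffold is "support provided, then gradually removed" can be made concrete as a fading schedule. This is a minimal sketch under assumed parameters; the function name, the five-session grace period, and the linear fade rate are all illustrative choices, not prescriptions from the source.

```python
def scaffold_level(session: int, fade_after: int = 5, fade_rate: float = 0.25) -> float:
    """Hypothetical fading schedule: full AI support early, then linear withdrawal.
    Returns the fraction of assistance offered (1.0 = full scaffold, 0.0 = none)."""
    if session <= fade_after:
        return 1.0  # early sessions: the scaffold is fully present
    # after the grace period, support is withdrawn step by step until
    # the learner must perform independently
    return max(0.0, 1.0 - fade_rate * (session - fade_after))

levels = [scaffold_level(s) for s in range(1, 12)]
# support declines from full to none rather than persisting as permanent infrastructure
```

The contrast with the second-generation failure mode is the explicit endpoint: the schedule reaches zero by design, whereas a scaffold that never fades becomes architecture.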
Liberation becomes invisible baseline. What the first generation experiences as achievement (freedom from implementation friction), the second generation experiences as environment (AI has always been available) — the transformation from choice to default is the mechanism that makes walls invisible.
No basis for comparison. The second generation cannot see what is missing (the capacity that debugging friction would have built) because they never had it, never saw others struggle to build it, and therefore have no internal standard against which absence could be measured.
The metrics track outputs, not capacities. Productivity dashboards measure code quality and shipping speed (both maintained or improved by AI) while the architectural intuition that manual debugging deposits is invisible to every metric, making its absence undetectable until a crisis requires it.
Transmission requires embodied carriers. Only the first generation (those who remember pre-AI building) can transmit the practice of attending to systems in the way that builds judgment; they will retire, and the transmission window is narrowing.
The scaffold becomes architecture. If AI assistance is never withdrawn, if it becomes permanent, the learner never performs independently, never builds the capacity that independent performance deposits — the support intended as temporary becomes the structure the next generation cannot function without.
Friction-rich practice must be designed. The developmental solution is deliberate construction of environments requiring struggle (manual debugging periods, blank-page assignments, close-reading exercises) within AI-augmented workflows — harder to design, more expensive, and indispensable for producing practitioners with the capacity to see beyond the metrics.