CONCEPT

Predictive Processing

Clark's framework in which the brain is fundamentally a prediction machine — generating top-down expectations about sensory input and learning from the errors when predictions miss.

Predictive processing is Andy Clark's theory of biological cognition, developed most fully in Surfing Uncertainty (2015) and The Experience Machine (2023). Its central claim is that the brain does not passively receive sensory information and then figure out what to do with it. Instead, the brain constantly generates top-down predictions about what sensory signals should look like, compares those predictions to the actual incoming signals, and learns from the discrepancies (the prediction errors) that result. Perception is not passive reception of data; it is the brain's best prediction, updated by error when expectations fail. The framework turns out to be eerily relevant to AI, not because Clark designed it that way but because biological brains and large language models appear to share a deep computational principle.
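
The core loop is simple enough to sketch in a few lines. The toy model below is illustrative only; the scalar signal, the fixed learning rate, and all variable names are assumptions of the sketch, not Clark's or Friston's formalism:

```python
import numpy as np

# Minimal sketch of a single-level predictive loop (illustrative, not a
# quotation of any published model): the "brain" holds an estimate,
# predicts the incoming signal, and revises itself on the prediction error.

rng = np.random.default_rng(0)
true_signal = 3.0          # the hidden state of the world
estimate = 0.0             # the brain's current best guess
learning_rate = 0.1        # how strongly error revises the estimate

for step in range(50):
    sensory_input = true_signal + rng.normal(0, 0.5)  # noisy sensation
    prediction = estimate                             # top-down prediction
    prediction_error = sensory_input - prediction     # bottom-up error
    estimate += learning_rate * prediction_error      # update the model

print(f"final estimate: {estimate:.2f}")  # converges near 3.0
```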

In the AI Story


The framework draws on a tradition going back to Hermann von Helmholtz in the nineteenth century, who first proposed that perception is unconscious inference. Computational neuroscientists, Karl Friston most notably, later developed the mathematical formalism that made predictive processing tractable. Clark's contribution has been to show how profoundly the framework reshapes the understanding of perception, action, attention, emotion, and consciousness itself. On this account, what you see is not the world but the brain's best guess about what the world should be.
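
Friston's formalism centers on variational free energy, an upper bound on the surprise of sensory data. In one standard textbook statement (paraphrased here, not quoted from Clark or Friston), with x the sensory data, z the hidden causes, and q(z) the brain's approximate posterior:

```latex
% x: sensory data; z: hidden causes; q(z): the brain's approximate posterior.
\[
F(q, x) \;=\; \mathbb{E}_{q(z)}\!\bigl[\ln q(z) - \ln p(x, z)\bigr]
        \;=\; D_{\mathrm{KL}}\!\bigl(q(z)\,\|\,p(z \mid x)\bigr) \;-\; \ln p(x)
        \;\ge\; -\ln p(x).
\]
% Since the KL divergence is non-negative, F upper-bounds surprise, -ln p(x).
```

Revising q to reduce F implements perception; acting on the world to change x implements action. Both amount to reducing prediction error over time.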

The parallel to AI is striking. Both biological brains and large language models are, at bottom, engines that learn to predict — the brain predicts sensory inputs, the model predicts the next token. Both learn by exposure to vast amounts of data. Both develop internal representations that capture statistical regularities. Both generate outputs shaped by learned expectations about what should come next. The structural resonance may explain the phenomenology of AI collaboration — the sense of being met by the system, of participating in a cognitive process that feels more like conversation than tool use.
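
The parallel can be made concrete with a toy next-token predictor. A minimal sketch, assuming nothing beyond bigram counting; the corpus, predict_next, and all names are invented for illustration:

```python
from collections import Counter, defaultdict

# Toy next-token predictor (a bigram model), illustrating only the shared
# principle: learn statistical regularities, then predict what comes next.

corpus = "the brain predicts the world and the model predicts the token".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most expected continuation and its learned probability."""
    following = counts[word]
    total = sum(following.values())
    token, n = following.most_common(1)[0]
    return token, n / total

print(predict_next("the"))       # e.g. ('brain', 0.25): shaped by exposure
print(predict_next("predicts"))  # ('the', 1.0)
```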

But the parallel has a critical asymmetry. The brain's generative model is constrained by what Clark calls biologically critical goals — knowing how things are and how they will change if the organism acts. This goal-directedness tethers prediction to reality. When the brain's predictions are wildly wrong, the errors are large, the model updates, and behavior adjusts. Large language models lack this tethering. They predict tokens based on statistical patterns; they do not act on the world, do not receive feedback about consequences, do not have bodies that would be harmed if predictions were wrong.
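
A toy comparison makes the asymmetry vivid. The sketch below is an illustration of the argument, not a model of either system; the drift rate, gain, and variable names are assumptions of the demo:

```python
import numpy as np

# Two predictors track a drifting world state; only one receives feedback.

rng = np.random.default_rng(1)
world = 0.0
embodied = 0.0      # corrected by sensory prediction error every step
disembodied = 0.0   # generates from fixed expectations, never corrected

embodied_errors, disembodied_errors = [], []
for step in range(100):
    world += rng.normal(0, 0.3)               # the world keeps changing
    sensation = world + rng.normal(0, 0.1)    # feedback from acting in it
    embodied += 0.5 * (sensation - embodied)  # error-driven correction
    embodied_errors.append(abs(world - embodied))
    disembodied_errors.append(abs(world - disembodied))

print(f"mean error, embodied:    {np.mean(embodied_errors):.2f}")     # stays small
print(f"mean error, disembodied: {np.mean(disembodied_errors):.2f}")  # grows as the world drifts
```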

The consequence is a characteristic failure mode. When a language model produces confident, fluent, detailed nonsense (a hallucination), it is doing exactly what a disembodied generative model should be expected to do: generating outputs statistically consistent with its training data, with no mechanism for checking whether those outputs correspond to reality. The brain's generative model hallucinates too, in dreams, illusions, and phantom limbs, but its hallucinations are constrained by embodied interaction with a world that pushes back. The model's hallucinations are constrained only by linguistic patterns, which are correlated with reality but not identical to it.
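
The failure mode can be reproduced in miniature. A minimal sketch, assuming an invented two-sentence corpus: a bigram generator trained only on true sentences recombines them into a fluent falsehood, and nothing inside the model registers the error:

```python
import random
from collections import defaultdict

# Toy hallucination demo (corpus invented for illustration): every output
# is statistically well-formed, but not every output is true.

corpus = [
    "paris is the capital of france",
    "rome is the capital of italy",
]

chain = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)

random.seed(0)
outputs = set()
for _ in range(20):
    word, out = "paris", ["paris"]
    while word in chain:
        word = random.choice(chain[word])  # follow learned patterns only
        out.append(word)
    outputs.add(" ".join(out))

# Almost surely includes "paris is the capital of italy": every bigram
# occurred in training, so the model cannot register the claim as false.
print(outputs)
```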

Origin

Clark encountered predictive processing through the work of Friston and others in the early 2000s and spent a decade integrating it with his earlier work on extended cognition and embodiment. Surfing Uncertainty was his first systematic statement of the synthesis. The Experience Machine extended the framework to everyday experience — why placebos work, why anxiety shapes perception, why the world looks the way it does to the creature predicting it.

The application to AI emerged gradually and then forcefully, culminating in Clark's 2024 TIME essay "What Generative AI Reveals About the Human Mind" and his 2025 Nature Communications paper. The predictive processing framework turns out to provide the theoretical foundation for why the human component of extended cognitive systems is not merely important but architecturally necessary.

Key Ideas

The brain predicts. Perception is not reception of data but the generation of expectations, continuously updated by prediction error.

Biological and artificial generative models converge. Both brains and LLMs are prediction engines shaped by exposure to statistical patterns in training data.

Embodiment is the tether. The brain's predictions are constrained by action in the world; the model's predictions are constrained only by language.

Hallucination is structural. A generative model without embodied grounding cannot distinguish its best outputs from its worst; both feel equally confident.

The human provides the check. In extended cognitive systems, the biological component's distinctive contribution is the embodied, goal-directed evaluation that keeps generation honest, as sketched in the example after this list.
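
The division of labor can be sketched directly. All names here are hypothetical, not an API from Clark or any library: a generator proposes claims from learned patterns, and an evaluator grounded in the world accepts or rejects them:

```python
# Hypothetical sketch of the human check in an extended cognitive system.

WORLD_FACTS = {
    ("paris", "capital_of"): "france",  # stand-in for embodied reality
    ("rome", "capital_of"): "italy",
}

def generate_claim():
    """Stand-in for any fluent generator; may recombine patterns falsely."""
    return ("paris", "capital_of", "italy")

def embodied_check(claim):
    """The evaluator can consult the world; the generator cannot."""
    subject, relation, value = claim
    return WORLD_FACTS.get((subject, relation)) == value

claim = generate_claim()
verdict = "accepted" if embodied_check(claim) else "rejected"
print(claim, "->", verdict)  # rejected: the check, not the generator's
                             # confidence, keeps the output honest
```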


Further reading

  1. Andy Clark, Surfing Uncertainty: Prediction, Action, and the Embodied Mind (Oxford University Press, 2015)
  2. Andy Clark, The Experience Machine: How Our Minds Predict and Shape Reality (Pantheon, 2023)
  3. Andy Clark, "What Generative AI Reveals About the Human Mind," TIME (2024)
  4. Karl Friston, "The Free-Energy Principle: A Unified Brain Theory?," Nature Reviews Neuroscience 11 (2010)
  5. Jakob Hohwy, The Predictive Mind (Oxford University Press, 2013)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.