Every complex adaptive agent carries an internal model: a compressed representation of its environment that enables anticipation, evaluation, and novel response. The model's power lies in its incompleteness — a map as detailed as the territory would be useless. Holland distinguished tacit models, embedded in the agent's structure and operating without conscious deliberation, from overt models, explicit representations that can be manipulated and communicated. The bacterium's chemoreceptors encode a tacit model. A weather forecast is an overt one. Every adaptive agent carries both. The quality of AI collaboration depends on the alignment of two internal models — the human's model of what they need, and the machine's model of what language can produce. When they align, emergence appears. When they misalign, output is generic, plausible, and empty.
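The distinction can be made concrete in code. The sketch below is a deliberately minimal illustration, not Holland's formalism: tacit_agent and OvertModel are invented names, and the bacterium and forecast stand in for his own examples. The tacit model is enacted by the agent's structure and cannot be inspected or revised; the overt model is an explicit representation that can be read, updated, and passed along.

```python
from dataclasses import dataclass

# A tacit model: the "knowledge" is baked into the agent's structure.
# Like a bacterium's chemoreceptors, the rule below is enacted, not
# inspected; the agent cannot report or revise it, it just behaves.
def tacit_agent(attractant_gradient: float) -> str:
    return "run" if attractant_gradient > 0 else "tumble"

# An overt model: an explicit representation the agent can read,
# revise, and communicate, like a weather forecast.
@dataclass
class OvertModel:
    rain_probability: float

    def revise(self, observed_rain: bool, rate: float = 0.2) -> None:
        # The representation itself is manipulable: update it in place.
        target = 1.0 if observed_rain else 0.0
        self.rain_probability += rate * (target - self.rain_probability)

forecast = OvertModel(rain_probability=0.3)
forecast.revise(observed_rain=True)
print(tacit_agent(0.7), forecast.rain_probability)  # prints "run" and ~0.44
```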
The art of effective AI collaboration — what the industry inadequately calls prompt engineering — is, in Holland's framework, the art of aligning two internal models across a medium (natural language) that lets their patterns collide productively. The human's model of the problem is always a simplification. The machine's model of language is also a simplification. When two simplifications interact, the result can be productive collision (complementary gaps) or destructive interference (reinforced blind spots). The Deleuze error that Segal describes is a canonical case of destructive interference — the machine's statistical model found a lexical regularity the human's philosophical model would have rejected as irrelevant.
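A toy computation makes the two failure modes legible. In this invented sketch (not Holland's or Segal's formalism), each model is reduced to the set of dimensions of a problem it can see; everything else is a gap. Complementary gaps cover the whole problem between them, while reinforced blind spots leave half of it invisible to both parties.

```python
# Two internal models, each a lossy view of the same 12-dimensional problem.
# A "model" here is just the set of dimensions it can see.
PROBLEM_DIMENSIONS = set(range(12))

def joint_coverage(model_a: set, model_b: set) -> float:
    # Fraction of the problem visible to the collaboration as a whole.
    return len(model_a | model_b) / len(PROBLEM_DIMENSIONS)

# Productive collision: the two simplifications have complementary gaps.
print(joint_coverage({0, 1, 2, 3, 4, 5}, {6, 7, 8, 9, 10, 11}))  # 1.0

# Destructive interference: both models share the same blind spots.
print(joint_coverage({0, 1, 2, 3, 4, 5}, {0, 1, 2, 3, 4, 5}))    # 0.5
```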
Holland's framework predicts something the technology industry has not absorbed: the machine generates building-block recombinations at speeds that vastly exceed human selection capacity. The speeds of generation and selection are radically asymmetric, and Holland's work on the credit assignment problem specifies what happens when selection cannot keep pace. In genetic algorithm design, the response to a degrading population is not to throttle variation by reducing the mutation rate; it is to sharpen the fitness function. The equivalent prescription for AI collaboration is not to slow the generator but to deepen the human's internal model through the domain expertise, critical capacity, and aesthetic judgment that selection requires.
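The prescription can be demonstrated with a minimal genetic algorithm. This is an illustrative sketch under simplifying assumptions, not Holland's own code: genomes are bitstrings, TARGET is an arbitrary goal schema, and loose_fitness and sharp_fitness are invented names. The mutation rate is held constant in both runs; only the evaluator changes. The coarse evaluator cannot rank near-misses, so the population drifts; the sharper one lets selection keep pace with the same rate of variation.

```python
import random

TARGET = [1] * 32  # the "good" schema the population should converge toward

def loose_fitness(genome):
    # Coarse evaluator: rewards every genome above a threshold equally,
    # so selection cannot discriminate among near-misses.
    return 1.0 if sum(genome) > len(genome) // 2 else 0.0

def sharp_fitness(genome):
    # Sharper evaluator: scores the exact match against the target,
    # ranking variants the loose evaluator treats as identical.
    return sum(g == t for g, t in zip(genome, TARGET)) / len(TARGET)

def evolve(fitness, generations=60, pop_size=50, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(32)] for _ in range(pop_size)]
    for _ in range(generations):
        # Generation is cheap and fast: crossover plus mutation.
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        pop = children
    # Judge the final population against the true target in both runs.
    return max(sharp_fitness(g) for g in pop)

random.seed(0)
print("loose evaluator:", evolve(loose_fitness))  # plateaus: no ranking signal
print("sharp evaluator:", evolve(sharp_fitness))  # converges at the same mutation rate
```

The design choice is the point: both runs generate variation at identical speed, and the difference in outcome comes entirely from how finely the fitness function can evaluate what generation produces.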
The implication cuts against the dominant technology narrative. Better models produce better outcomes only when the human's internal model is robust enough to evaluate what the better model produces. Improve the machine while degrading the human — through deskilling, atrophy of judgment, loss of diverse expertise — and the emergence degrades even as benchmark scores climb. Holland encountered this dynamic throughout his career studying systems where optimizing one component at the expense of the system produced catastrophic results. Monoculture agriculture is spectacularly productive under normal conditions and spectacularly fragile under stress.
This is why Holland's 2006 observation — that simply putting what people know into a computer will never produce real intelligence — remains relevant even after deep learning architectures apparently vindicated him. The intelligence that matters is the system's intelligence, measured by the quality of emergence. That quality depends on both the richness of generation and the sharpness of selection. Weaken either, and the system degrades.
Holland's formal treatment of internal models develops across Adaptation in Natural and Artificial Systems (1975) and Hidden Order (1995), where he distinguished the tacit and overt varieties and specified their role in complex adaptive systems. The framework drew on cybernetic traditions, on his own work with classifier systems, and on the practical experience of building genetic algorithms whose performance depended critically on the representations their agents carried.
The concept anticipated later developments in cognitive science — in particular the work on mental models by Philip Johnson-Laird and on embodied cognition by Andy Clark and others — while maintaining Holland's characteristic insistence on formal mechanism over phenomenological description.
Tacit and overt models operate together. Agents carry both: the felt intuitions and the articulated frameworks, each necessary, neither sufficient.
Alignment is the skill. Effective AI collaboration is the alignment of two internal models across the medium of language.
Asymmetric speeds. Machine generation vastly outpaces human selection, creating structural pressure toward smooth output over genuine emergence.
Sharpen the fitness function. The adaptive response to overwhelming variation is not to reduce it but to strengthen evaluation.
The model is the irreplaceable investment. Domain expertise, critical judgment, aesthetic discernment — the capacities built through friction — constitute the selection mechanism.
Holland's framework sits in productive tension with extended mind theorists who argue that internal models should not be privileged over externally distributed representations. The positions are less opposed than they appear: Holland would agree that models extend into tools, institutions, and cultural artifacts, but he would insist that the agent's own representational architecture determines what the extended resources can contribute. A shallow internal model cannot be rescued by access to rich external resources.