Heuristic Search — Orange Pill Wiki
CONCEPT

Heuristic Search

Simon and Newell's 1972 framework for how bounded minds navigate problem spaces too large for exhaustive search, using rules of thumb to direct attention toward promising alternatives. It is the cognitive engine of expert performance and the specific capability AI most powerfully augments.

Heuristic search is the central operation in Simon and Newell's theory of problem-solving: the use of rules of thumb to direct a bounded agent's navigation through a problem space too large to search exhaustively. The chess grandmaster does not evaluate every legal move; she evaluates four or five, selected by pattern-recognition heuristics built from years of practice. The software architect does not consider every possible system design; she considers a handful, filtered by heuristics about what has worked in similar domains. The heuristic does not guarantee optimality — it directs attention toward regions of the problem space where good solutions are most likely to be found, but cannot promise that the best solution will be discovered. The framework applies directly to AI-augmented work, which provides extraordinary implementation heuristics (the tool's pattern libraries guide the builder to competent solutions) while leaving goal heuristics — the capacity to specify what is worth pursuing — bounded by the builder's domain expertise and judgment. The asymmetric augmentation is the structural reason AI produces builders who generate expertly but may not be able to evaluate expertly.
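The contrast between exhaustive and heuristic search can be sketched in code. The example below is an illustrative sketch, not from Simon and Newell's text: a greedy best-first search over a small open grid that always expands the frontier node a heuristic (Manhattan distance) rates as closest to the goal, touching far fewer states than a blind sweep of the whole grid. The grid, heuristic, and function names are assumptions made for the illustration.

```python
import heapq

def best_first_search(start, goal, neighbors, heuristic):
    """Greedy best-first search: always expand the frontier node the
    heuristic rates as closest to the goal.
    Returns (path, nodes_expanded)."""
    frontier = [(heuristic(start, goal), start)]
    came_from = {start: None}
    expanded = 0
    while frontier:
        _, current = heapq.heappop(frontier)
        expanded += 1
        if current == goal:
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1], expanded
        for nxt in neighbors(current):
            if nxt not in came_from:
                came_from[nxt] = current
                heapq.heappush(frontier, (heuristic(nxt, goal), nxt))
    return None, expanded

# A 20x20 open grid with four-directional moves.
def grid_neighbors(cell, size=20):
    x, y = cell
    steps = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in steps if 0 <= a < size and 0 <= b < size]

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

path, expanded = best_first_search((0, 0), (19, 19), grid_neighbors, manhattan)
print(len(path) - 1)  # 38: the shortest-path length on an open grid
print(expanded)       # far fewer than the 400 cells a blind sweep would visit
```

The heuristic does exactly what the paragraph describes: it does not enumerate the space, it ranks the frontier so attention flows toward the region where the goal is likely to be. On a grid with obstacles the same pruning can miss the shortest route, which is the optimality trade-off noted above.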

In the AI Story

[Hedcut illustration for Heuristic Search]

The distinction between search heuristics and goal heuristics is fundamental to understanding what AI changes and what it does not. Search heuristics guide navigation through a problem space once the goal is specified — they are the pattern libraries that identify promising paths toward a known destination. Goal heuristics guide the specification of what the destination should be — they integrate multiple considerations into a coherent purpose. The two operate on different cognitive substrates: search heuristics are pattern-based and augment well with AI; goal heuristics are integrative and resist augmentation because they require the kind of tacit judgment that bounded minds build through experience rather than retrieve from training data.

The framework reveals why expertise matters more at the goal-setting layer than at the implementation layer in AI-augmented work. The novice builder with a well-specified goal can reach competent implementations through the AI's heuristics. The expert builder with a poorly specified goal cannot be saved by the AI's heuristics, because the implementations will be competent solutions to the wrong problem. The binding constraint on output quality has shifted from implementation competence (where AI provides abundant support) to goal quality (where AI provides little and can actively mislead through its confident presentation of alternatives within the space it has already implicitly bounded).

The framework also illuminates the nature of expertise. Simon and Newell's research demonstrated that experts and novices search the same distance ahead through problem spaces — the grandmaster looks roughly as many moves deep as the serious amateur. The expert's advantage is in pattern recognition, not search depth: the expert's heuristics select better starting points for the search because her pattern library identifies promising regions before conscious deliberation begins. This insight has direct implications for the AI age: the builder whose pattern library is thin cannot evaluate whether the AI's heuristics have selected good starting points, and must trust the tool at precisely the layer where trust is most dangerous.

Origin

Simon and Newell developed the framework through a decade of research on chess, cryptarithmetic, and other structured problem domains. Their 1972 book Human Problem Solving synthesized this research into a comprehensive theory. Subsequent research — particularly Chase and Simon's 1973 work on chess perception — established the empirical foundation for the framework's claims about expertise and pattern recognition.

The framework has been extended and modified by subsequent cognitive science research. Gary Klein's work on naturalistic decision-making extended it into real-time professional practice. Daniel Kahneman's work on heuristics and biases cataloged the systematic errors that heuristic search produces. John Anderson's ACT-R architecture formalized the framework into a cognitive model that continues to generate testable predictions. The framework's durability across five decades reflects both its descriptive accuracy and its theoretical productivity.

Key Ideas

Exhaustive search is impossible. Real problem spaces are too large for bounded agents to search exhaustively, making heuristic guidance essential.

Heuristics direct attention. Rules of thumb select starting points and prune branches, reducing the search space to dimensions bounded minds can navigate.

Expertise is pattern recognition. Experts do not search deeper than novices; they search better, because their pattern libraries identify more promising starting points.

Search heuristics differ from goal heuristics. Navigation through a problem space requires different cognitive operations than specifying what the goal of the navigation should be.

AI augments search more than goals. The tool's pattern libraries guide implementation effectively; specifying what is worth implementing remains bounded by the builder's domain expertise.
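The scale argument behind the first two points can be made concrete with back-of-the-envelope arithmetic. The numbers below are illustrative assumptions rather than figures from the text: a chess-like branching factor of roughly 35, a 10-ply lookahead, and heuristic pruning to about 4 candidate moves per position, as in the grandmaster example above.

```python
# Rough size of a game tree with branching factor b and depth d: b**d leaves.
def tree_leaves(branching_factor: int, depth: int) -> int:
    return branching_factor ** depth

exhaustive = tree_leaves(35, 10)  # every legal move considered at each ply
pruned = tree_leaves(4, 10)       # heuristics prune to a few candidate moves

print(f"exhaustive: {exhaustive:.2e}")          # ~2.76e+15 positions
print(f"pruned:     {pruned:,}")                # 1,048,576 positions
print(f"reduction:  {exhaustive // pruned:,}x") # roughly a billionfold
```

The exponent, not the base, is what makes exhaustive search impossible: pruning each ply from 35 options to 4 compounds across depth into a reduction of about nine orders of magnitude, which is why heuristic selection of candidates is a necessity for bounded agents rather than an optimization.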

Appears in the Orange Pill Cycle

Further reading

  1. Newell and Simon, Human Problem Solving (1972)
  2. Chase and Simon, "Perception in Chess" (1973)
  3. Gary Klein, Sources of Power (1998)
  4. Daniel Kahneman, Thinking, Fast and Slow (2011)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.