Consider the word game. Board games, card games, ball games, Olympic games, war games, children's games. What is the single property that makes each a game? The classical answer is that there must be one, because the word applies to all. Wittgenstein's response is: look and see. What you find is not a common essence but a network of overlapping similarities, the way members of a family resemble each other without sharing any single feature. Some games involve competition, some do not. Some have rules, some do not. Some entertain, some do not. The resemblances overlap and criss-cross. The concept game has no boundary, no essence, no single definition. And yet we use it successfully.
The concept dismantles the classical theory of meaning that had dominated Western philosophy since Plato — the assumption that every meaningful word must name a property shared by all its instances. If no such property exists for game, the theory fails. And the theory fails for almost every interesting concept, once that concept is examined closely. Beauty. Justice. Knowledge. Understanding. Friendship. None yields a set of necessary and sufficient conditions; each shows the pattern of family resemblances.
The implication for computing is direct. Formal categories require sharp boundaries. A type system either includes an object or excludes it. A conditional branches on a definite truth value. The grammar of family resemblances is not formalizable in this way — there is no list of features, no decision procedure, no algorithm that can replicate the competent speaker's judgment about what counts as a game or a chair or a joke.
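To make the contrast concrete, here is a minimal sketch in Python. The feature sets are invented for illustration, not proposed as an analysis of the words; the structural point is that the intersection a classical definition requires is empty, while pairwise overlaps still connect every instance to the network.

```python
# Hypothetical feature sets for a handful of games. The features and
# their assignments are invented for illustration only.
GAMES = {
    "chess":     {"rules", "competition", "skill"},
    "poker":     {"rules", "competition", "cards", "luck"},
    "solitaire": {"rules", "cards", "luck"},
    "ring-a-ring-o'-roses": {"children", "amusement"},
    "catch":     {"ball", "amusement", "skill"},
}

# The classical theory needs a property shared by every instance.
essence = set.intersection(*GAMES.values())
print(essence)  # set() -- no single feature runs through all five

# Family resemblance: no essence, yet every game overlaps with others,
# and the overlaps criss-cross rather than forming a hierarchy.
for name, feats in GAMES.items():
    kin = sorted(other for other, f in GAMES.items()
                 if other != name and feats & f)
    print(f"{name}: resembles {kin}")
```

Any sharp boundary a program draws here, say by requiring the feature rules, is a stipulation made for a purpose, not a discovery about the concept.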
This is why symbolic AI collapsed. Expert systems tried to formalize categories that do not have the structure formalization requires. The language model succeeds where symbolic AI failed partly because it does not formalize categories at all. It learns statistical associations across the pattern of use, absorbing the family resemblances without ever representing them as essences. Whether this constitutes understanding or its sophisticated shadow is the open question the Orange Pill Cycle keeps returning to.
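The distributional idea behind that success can be sketched in a few lines. The four-sentence corpus below is invented, and a real model works over billions of tokens with learned embeddings rather than raw counts, but the principle is the same: the word game accumulates overlapping associations from its contexts of use, and no definition appears anywhere in the process.

```python
# A toy sketch of distributional learning: pure statistics of use,
# with an invented corpus standing in for web-scale training text.
from collections import Counter
from itertools import combinations

corpus = [
    "children play a game of catch with a ball",
    "the chess game follows strict rules",
    "poker is a card game of luck and skill",
    "both players wanted to win the game",
]

# Count how often two words share a sentence.
cooc = Counter()
for sentence in corpus:
    for a, b in combinations(sorted(set(sentence.split())), 2):
        cooc[(a, b)] += 1

# The associations of "game" criss-cross its contexts: ball, rules,
# cards, luck, players. No essence is ever represented.
neighbors = Counter()
for (a, b), n in cooc.items():
    if a == "game":
        neighbors[b] += n
    elif b == "game":
        neighbors[a] += n
print(neighbors.most_common())
```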
The concept also illuminates why the question does AI understand? admits of no clean answer. Understanding itself is a family-resemblance concept. It picks out a network of capacities — applying a word correctly in new cases, responding appropriately to context, following through on what was said — that overlap without reducing to a single criterion. The machine possesses some of these capacities, lacks others, and mimics the rest; the structure of the concept guarantees that no single verdict can settle what the mixture amounts to.
Introduced in Philosophical Investigations §§66–67. Wittgenstein arrived at the concept through sustained examination of the word game and extended it explicitly to number, language, and — by implication — to most concepts of philosophical interest.
No shared essence required. A concept can be meaningful without picking out a property common to all its instances.
Overlapping and criss-crossing. The resemblances among instances form a network, not a hierarchy.
Boundaries can be drawn but are not given. One can stipulate a boundary for a purpose; the concept itself does not force one.
Defeats classical analysis. The search for necessary and sufficient conditions fails for most concepts of philosophical interest.
Implications for AI. Concepts like understanding, meaning, and intelligence are themselves family-resemblance concepts; the question whether a machine possesses them admits of no clean yes or no.