Coyle's most uncomfortable conclusion, implied throughout her measurement work but rarely stated bluntly, is that measurement systems are not neutral recording devices. They are incentive structures. What they measure, they reward; what they cannot measure, they penalize by neglect. A measurement system that counts output without assessing quality does not merely fail to capture quality: it actively discourages it, because quality requires the kind of investment that the output metric books as cost rather than value. What cannot be measured disappears from the conversation. This is the political claim that gives measurement reform its urgency: the AI transition will be governed by the metrics available to the people governing it, and if those metrics show only the boom, the governance will be designed for a boom.
The principle operates institutionally, not individually. No economist or policymaker consciously decides to ignore what the metrics cannot show. The omission is structural: when decisions are made in committees, assessed against performance targets, reported in quarterly briefings, and contested in political debates, the information that enters the deliberation is the information the measurement infrastructure supplies. Information outside that infrastructure does not enter the room.
The principle has specific consequences for the AI transition. Governments that see only productivity growth will respond with policies that maximize productivity — deregulation, acceleration, removal of institutional friction. Governments that could also see human capital depletion, quality erosion, and wellbeing decline might respond differently. They might build the structures Segal calls dams — not to stop the river, but to channel it toward life.
Coyle's career has been organized around this claim. Her decades of advocacy for measurement reform, through academic scholarship, institutional advisory work on the Bean Review and the BBC Trust, and leadership at the Bennett Institute, reflect the conviction that measurement is not a merely technical matter but a prerequisite for democratic governance of technological transitions.
For the AI-revolution reader, the principle reframes the entire project of this book. Building better measurement instruments is not an academic luxury; it is a governance imperative. The structures that will determine whether the AI transition produces broad flourishing or concentrated extraction are being built now, on the basis of metrics that cannot distinguish between the two outcomes. A policy apparatus guided by what its metrics can see will produce policies that amplify the visible and ignore the invisible, which in the AI transition is where the most consequential effects live.
The principle has antecedents in the measurement-policy literature going back to Kuznets's original warnings. Coyle's articulation develops through GDP (2014), Cogs and Monsters (2021), and most explicitly in The Measure of Progress (2025). It draws conceptually on James Scott's Seeing Like a State and the broader tradition of institutional analysis of quantification.
Metrics as incentives. Measurement systems do not passively record reality; they shape the reality they purport to describe by determining what counts as valuable.
Structural neglect. What cannot be measured is penalized institutionally, not intentionally — a pattern that produces outcomes indistinguishable from deliberate disregard.
Political stakes. The measurement infrastructure determines what governance can address; reforming it is itself a political project.
The AI governance consequence. Current metrics will produce governance designed for what they show, which means the unmeasured dimensions become ungoverned.