Verification Literacy — Orange Pill Wiki
CONCEPT

Verification Literacy

The ability to read and evaluate code — to trace its logic, identify its assumptions, and determine where it will fail — even if one cannot write it. The specific form of competence the AI era requires and few curricula teach.

Verification literacy is the form of programming competence that the AI era has made structurally necessary. It is not the ability to write code, which AI tools have increasingly made optional. It is the ability to read and evaluate code — to trace its logic, identify its implicit assumptions, and determine the conditions under which it will fail. The distinction matters because the two skills, while related, are not identical. A builder who cannot write code may nonetheless learn to read it critically, and the reading skill is precisely what makes the difference between deploying AI output responsibly and deploying it as a bet. The concept is implicit throughout Dijkstra's writing — particularly the 1988 essay on teaching computing — and has become explicit in the pedagogical literature that has emerged in response to the ubiquity of AI code generation.

In the AI Story

Hedcut illustration for Verification Literacy

The split between writing and reading skills exists in other literacy domains. A reader of French may not be a writer of French; the competence to evaluate a text in a language is separable from the competence to produce one. The same distinction is available for code, and it has not been widely acknowledged because, until recently, the dominant route to reading competence was through writing competence — you learned to read code by writing it, and the reverse was rarely attempted.

AI-generated code disrupts this route. Builders now produce code they did not write and often cannot read. The generation capability has raced ahead of the reading capability, and the gap between them is where errors live. The 2024 Purdue finding that ChatGPT answered programming questions incorrectly fifty-two percent of the time, while users still preferred its responses for their fluency, is a direct consequence of this gap: the builders could not read the output well enough to evaluate it, and the fluency of the presentation substituted for substantive assessment.

Verification literacy is the specific competence that closes this gap. It does not require the full skill set of a programmer. It requires the ability to look at a piece of generated code and ask: what does this assume? where are its edge cases? what inputs would cause it to fail? under what conditions would it misbehave in ways the surface does not reveal? These questions can be asked by someone who cannot write the code being evaluated, provided she has been taught to ask them and has had practice in answering them.
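As a concrete illustration, the questions can be applied to a small piece of plausible generated code. The function below is invented for this sketch, not drawn from any real AI output; the point is the reading, not the code:

```python
# A plausible-looking generated helper: clean, fluent, and correct on
# the happy path, which is exactly what makes uncritical acceptance easy.
def parse_percentage(text: str) -> float:
    """Convert a string like '42%' or '42' to a fraction."""
    return float(text.rstrip("%")) / 100

# The verification questions, asked of code the reader did not write:
#   What does this assume?  That `text` is a well-formed number,
#   optionally suffixed with '%'.
#   Where are its edge cases?  Empty strings, whitespace, and inputs
#   already expressed as fractions.
#   What inputs would cause it to fail?
try:
    parse_percentage("")  # the empty string raises ValueError
except ValueError:
    print("loud failure: empty input raises")

# Under what conditions would it misbehave in ways the surface does
# not reveal?  A caller passing '0.42' gets back roughly 0.0042 with
# no exception raised: a silently wrong answer.
print(parse_percentage("42%"))   # 0.42
print(parse_percentage("0.42"))  # ~0.0042, not 0.42
```

None of these questions required the ability to write parse_percentage; they required only the trained habit of asking them before trusting the output.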

The pedagogical implications are directly opposed to the default trajectory of programming education in the AI era. The default is to reduce programming instruction on the grounds that AI makes it unnecessary. The Dijkstrian response is that AI makes a specific kind of programming literacy more necessary, not less — the literacy that reads and evaluates rather than the one that writes. A curriculum that teaches prompting without verification literacy produces, in Dijkstra's terms, fluent illiterates: people who can generate text they cannot read.

Origin

The term verification literacy has appeared in computing education research from the mid-2020s onward, often in direct response to the deployment of AI-assisted coding tools in classrooms. The concept is older — it is implicit in Dijkstra's lifelong argument that programming is reasoning about code, not merely producing it — but the operational formulation as a distinct literacy skill belongs to the current era.

Related concepts include the "critical reading of code" tradition in software engineering pedagogy and the literature on "code review as learning" that emerged around collaborative development practices. Verification literacy synthesizes these with the specific demands of evaluating AI-generated output.

Key Ideas

Reading is separable from writing. The skill of evaluating code is related to but distinct from the skill of producing it. One can be learned without the other, and the AI era makes the reading skill primary.

Fluency is not comprehension. AI output is fluent by design. Fluency can conceal errors that reading competence exposes.

The gap is where errors live. When generation outpaces reading, the untested assumptions survive to production. Verification literacy is the specific competence that narrows the gap.

Curricula are moving in the wrong direction. Reducing programming instruction in response to AI leaves builders with generation capability and no way to evaluate it. The correct direction is more instruction in reading and evaluation, even as writing instruction shrinks.

Questions are the method. Verification literacy is taught by teaching the questions: what does this assume? where are its edge cases? under what conditions does it fail? These questions can be asked by someone who cannot produce the code, and the asking is the skill.
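One way the asking becomes teachable is to turn each question into an executable probe. A minimal sketch, using an invented function that stands in for reviewed output (both the function and the probes are assumptions for illustration):

```python
# Function under review: stands in for generated code the reviewer
# did not write and is learning to read critically.
def safe_divide(a, b):
    return a / b if b else 0

# What does this assume?  That a falsy divisor means "return a
# default" rather than "signal an error".
assert safe_divide(10, 2) == 5        # the happy path
assert safe_divide(10, 0) == 0        # the intended fallback

# Under what conditions does it misbehave in ways the surface does
# not reveal?  0.0 is falsy too, so a divisor that is legitimately
# zero-valued as a float is silently masked instead of reported:
assert safe_divide(10, 0.0) == 0
```

Each probe maps onto one of the questions above, and none of them requires having written safe_divide; the asking, made executable, is the skill.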

Debates & Critiques

Whether verification literacy can be taught at scale, and whether it can be taught without the foundation of writing competence that historically produced reading competence, is an open question in computing pedagogy. The optimistic position holds that dedicated curricula can develop the skill directly; the pessimistic position holds that reading without writing produces a shallow competence that fails when encountering unusual or subtly flawed code. The evidence will come in over the next decade as the first generation of AI-native developers enters the workforce.

Appears in the Orange Pill Cycle

Further reading

  1. Edsger W. Dijkstra, "On the Cruelty of Really Teaching Computing Science" (EWD1036, 1988)
  2. Felienne Hermans, The Programmer's Brain (Manning, 2021)
  3. Mark Guzdial, Learner-Centered Design of Computing Education (Morgan & Claypool, 2015)
  4. Shriram Krishnamurthi, "Teaching Programming Languages in a Post-Linnaean Age" (SIGPLAN, 2008)
  5. Peter Denning and Matti Tedre, Computational Thinking (MIT Press, 2019)
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.