Every previous technology of the intellect restructured cognition through a process the user could observe, practice, and eventually master. The restructuring was transparent in the sense that the operations enabled by the technology were performed by the user through deliberate, learnable practices. The scribe learned to make lists. The programmer learned to code. The user understood what she was doing differently, even if she did not know all the consequences of doing it differently. AI breaks this pattern. The restructuring it produces does not pass through the user's understanding. The user describes a problem in natural language; the machine returns a structured response; the user's thinking has changed. But the mechanism by which the restructuring occurred is opaque. The machine applied processes — pattern matching across billions of tokens, weighted transformations through neural network layers, statistical inferences — that the user cannot inspect, reproduce, or in most cases describe.
This opacity is not a limitation that will be overcome by better design or more transparent models. It is structural. The operations that produce the machine's output are not the kind of operations that human introspection can follow. They are not sequential in the way human reasoning is sequential. They do not proceed through premises to conclusions. They do not employ concepts in the way human thought employs concepts. They produce outputs that are often useful, sometimes strikingly insightful, and occasionally spectacularly wrong — and the path from input to output is, in a fundamental sense, inaccessible to the human user.
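The point can be made concrete with a toy sketch, hypothetical throughout and standing in for no real model: a two-layer network in which every parameter is open to inspection, yet in which no step of the computation corresponds to a premise, a concept, or an inference a reader could follow.

```python
import math
import random

# A minimal, illustrative stand-in for a neural network layer stack.
# All names and sizes here are invented for the sketch.
random.seed(0)
W1 = [[random.gauss(0, 1) for _ in range(4)] for _ in range(8)]  # input -> hidden
W2 = [[random.gauss(0, 1) for _ in range(2)] for _ in range(4)]  # hidden -> output

def forward(x):
    """Weighted sums and nonlinearities: every number is inspectable,
    but the path from input to output is arithmetic, not argument."""
    # Hidden representation: four uninterpreted numbers, not concepts.
    h = [math.tanh(sum(xi * w for xi, w in zip(x, col)))
         for col in zip(*W1)]
    # Output: a pattern produced by the weights, not a conclusion
    # reached from premises.
    return [sum(hi * w for hi, w in zip(h, col))
            for col in zip(*W2)]

print(forward([1.0] * 8))  # fully visible computation, opaque as explanation
```

Even at this trivial scale, the "mechanism" is available in full, yet nothing in it can be narrated as a sequence of reasons; scaled to billions of parameters, that gap is what the text calls structural opacity.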
Consider the difference through the example of a junior software engineer encountering a problem. In the pre-AI environment, she works through it. She reads documentation, tries an approach, encounters an error, hypothesizes about causes, tries a different approach. The process is slow and frustrating. At the end of several hours, she has a working solution and an understanding of the problem that goes beyond the solution itself. She has deposited, in Segal's geological metaphor, another layer of comprehension. With AI, she describes the problem and receives a working solution. She may examine it and learn from it. But the specific cognitive labor of working through the problem — the false starts, the hypotheses, the gradual narrowing of possibility — has been bypassed. The solution arrived without the process that would have built the understanding.
The opacity transforms the user's relationship to knowledge itself. Writing separated knowledge from the knower, creating a category of knowledge that existed independently of any individual mind — knowledge that could be consulted, that was authoritative regardless of who held it. The literate person understood the mechanism: the knowledge was in the text, the text was external, one consulted the text. AI produces a different kind of knowledge — knowledge without provenance, produced by a process the user cannot inspect, synthesized from patterns that were learned during training and no longer correspond to identifiable sources. The user's relationship to this knowledge has no clear parallel in the history of technologies of the intellect.
Opacity is not inherently destructive. Human beings have always depended on cognitive processes they do not fully understand — intuition, aesthetic judgment, the felt sense that a solution is right before it can be explained. But these processes are calibrated by experience. The experienced surgeon's felt sense that something is wrong is opaque, but it has been calibrated by thousands of hours of practice. The AI system's output is opaque in a different way: calibrated by training data the user cannot survey, through processes the user cannot evaluate. The trustworthiness of the output is a matter of statistical reliability rather than experiential calibration, and evaluating the restructuring AI produces requires a new kind of literacy — the capacity to assess the products of an opaque process.
The concept extends Goody's framework to a feature of AI he did not live to analyze. Compared with the neural network's internal operations, the mechanisms of all previous technologies of the intellect were transparent, even when their full consequences were not understood. The novel feature of AI is that the mechanism itself is beyond user inspection.
The argument draws on discussions of AI interpretability in the technical literature and on philosophical work on cognitive opacity by Paul Humphreys and others.
Transparent vs. opaque restructuring. Previous technologies restructured through mechanisms users could practice and master; AI restructures through mechanisms users cannot inspect.
Product without process. The restructured cognition arrives as output; the user does not undergo the process that would produce the same restructuring internally.
Knowledge without provenance. AI-mediated knowledge cannot be traced to identifiable sources in the way textual knowledge can.
Statistical vs. experiential calibration. AI's opacity is calibrated by training data, not by the user's accumulated experience.
New literacy required. Evaluating cognitive restructuring one did not perform demands capacities without precedent in literate tradition.
The central empirical question is whether AI's opacity is manageable through the development of appropriate critical practices, or corrosive in ways that user-side literacy cannot address. Advocates of interpretability research argue that the opacity is being reduced through technical advances. Skeptics note that even if individual model behaviors become more interpretable, the user's practical experience of AI remains one of receiving products without undergoing processes.