The hidden index is a concept extracted from Blair's framework to name a distinctive feature of AI-era reference technology: the organizational principles that shape every output are real and consequential but invisible to the user. In previous reference technologies — Diderot's Encyclopédie, the library catalog, even the early search engine — the organizational scheme was visible, at least in principle, and could be evaluated by the user. Large language models distribute their organizational knowledge across billions of numerical parameters whose individual values have no human-interpretable meaning. The user interacts with the outputs without access to the structure that produced them. She cannot evaluate whether the model's connections between concepts reflect genuine intellectual relationships or merely statistical co-occurrence. The index is hidden.
The practical consequences are significant. A reader of Diderot's Encyclopédie could follow a cross-reference and ask: does this connection illuminate, or is it an editorial error? The connection was visible, and the judgment could be applied directly to it. A user of a language model receives an output that embodies thousands of connections, none of which are explicit. She can evaluate the output — is this assertion correct? — but not the organizational scheme that produced it, because the scheme is not articulated.
The hidden index creates a distinctive evaluative challenge. Because the reader cannot inspect the structure, she cannot predict where the model's knowledge is deep and where it is shallow. She cannot determine whether the emphasis on certain aspects of a topic reflects the topic's actual structure or biases in the training corpus. She cannot assess coverage — whether the model's knowledge of a given domain is comprehensive or patchy — because she has no map of what the model knows and does not know.
The inferential workaround is what Blair's framework calls the experiential map: an informal, practice-based understanding of where the model excels and where it fails, built through sustained interaction with specific systems. Renaissance scholars developed analogous experiential maps of their reference libraries, learning over time which entries in which works were reliable. The contemporary parallel requires the same kind of patient accumulation of experience — and cannot be shortcut by documentation, because the relevant knowledge is not documented.
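The experiential map can be made concrete as a simple bookkeeping practice: record each interaction whose output was independently verified, tallied by domain. The sketch below is a minimal illustration of that practice, not anything specified in Blair's framework; the class name, the `record`/`reliability` methods, and the domain labels are all invented for this example.

```python
from collections import defaultdict


class ExperientialMap:
    """Informal per-domain reliability record, built only through use."""

    def __init__(self):
        # counts[domain] = [verified_correct, verified_incorrect]
        self.counts = defaultdict(lambda: [0, 0])

    def record(self, domain, correct):
        """Log one interaction whose output was independently verified."""
        self.counts[domain][0 if correct else 1] += 1

    def reliability(self, domain):
        """Observed hit rate, or None where there is no experience yet,
        mirroring the point that coverage is undiscoverable in advance."""
        right, wrong = self.counts[domain]
        total = right + wrong
        return right / total if total else None


m = ExperientialMap()
m.record("Renaissance bibliography", correct=True)
m.record("Renaissance bibliography", correct=True)
m.record("Renaissance bibliography", correct=False)
print(m.reliability("Renaissance bibliography"))  # 2 of 3 checked outputs held up
print(m.reliability("patent law"))                # None: no experience, no map
```

The `None` return is the operative detail: where the user has no accumulated experience, the map is simply blank, which is exactly the condition the hidden index imposes before sustained use.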
The hidden index is related to but distinct from the general interpretability problem in AI research. The interpretability program aims to open the black box technically; the hidden index concept names the evaluative consequences of the box being closed for the user, regardless of what researchers eventually recover from it. Until interpretability matures, the user must cope with opacity — and the coping requires the practices Blair's framework identifies.
The term is an extension of Blair's historical work on the relationship between organizational scheme and user evaluation in reference works. It does not appear in her published writing under this label, but it captures a distinction her framework draws and one that AI practice has made operationally central.
Organization is real but invisible. The model's internal structure shapes every output but cannot be directly inspected.
Evaluation must be output-based. The user can critique what the model produces but not the scheme by which it produces it.
Experiential map substitutes for structural access. Sustained practice with a specific system produces an informal understanding of its strengths and weaknesses.
Coverage is undiscoverable. The user cannot know in advance where the model's knowledge is patchy; gaps reveal themselves only in use.
Distinct from interpretability. Even a future technical solution to model interpretability would not fully resolve the user's inferential burden.