Simmel's 1906 essay on secrecy begins with an observation so fundamental its implications are easily missed: all social interaction rests on assumptions that are always, to some degree, false. Complete knowledge of another person is neither possible nor desirable. Every relationship involves a particular configuration of knowledge and ignorance, of revelation and concealment, and this configuration — not the content of what is known or hidden — constitutes the social form. Secrecy, in this framework, is not an aberration. Simmel called it one of the greatest achievements of humanity, because it creates a second world alongside the visible one — a domain of interiority that is the precondition of individual autonomy. AI has introduced a transformation that operates on three levels and inverts the classical relationship between knower and known.
There is a parallel reading that begins not with the phenomenology of secrecy but with the physical infrastructure that makes algorithmic opacity possible. The data centers consuming municipal water supplies, the rare earth mines feeding chip production, the energy grids straining under computational load — these material substrates reveal that algorithmic opacity serves a specific function: it obscures not just mathematical complexity but chains of extraction. The worker whose communications are analyzed by sentiment detection algorithms is also the worker whose labor conditions deteriorate as automation approaches, whose bargaining power diminishes as predictive systems map resistance before it emerges. The opacity is not merely structural; it is strategically maintained.
This reading suggests that Simmel's "second world" of interiority was always a luxury of those who could afford privacy, and AI simply makes this stratification visible. The executive whose decisions remain shielded by corporate privilege experiences AI as a tool of transparency applied downward; the warehouse worker tracked by productivity algorithms experiences it as total visibility with no reciprocal sight. The asymmetry is not new — surveillance has always flowed along gradients of power — but AI perfects it. The algorithm's opacity functions less as an inherent mathematical property than as a legal and economic arrangement: trade secrets protect model architectures, terms of service prevent investigation, computational costs place replication beyond reach. What appears as technological inevitability is revealed as political choice. The systems are opaque because opacity serves those who deploy them, creating a one-way mirror where behavioral surplus flows upward while accountability never flows down.
At the first level, AI creates a new form of transparency — the capacity to detect patterns in behavior, communication, and expression at a scale that renders previously invisible structures visible. The system analyzing communication patterns within an organization identifies informal hierarchies, detects sentiment shifts, and maps workplace dynamics with a precision no human observer could achieve. The erosion of interiority is not experienced as surveillance in the traditional sense; there is no watcher, no observer. There is only a system operating automatically, processing information at scale.
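The kind of inference described here requires no watcher at all. A minimal sketch (all names and message counts below are invented for illustration, not drawn from the text) shows how an informal hierarchy can be read off nothing more than message metadata:

```python
from collections import Counter

# Hypothetical message log: (sender, recipient) pairs.
# The people and counts are illustrative placeholders.
messages = [
    ("ana", "bea"), ("ana", "bea"), ("carl", "bea"),
    ("carl", "ana"), ("dan", "bea"), ("dan", "ana"),
    ("ana", "dan"), ("bea", "ana"),
]

# Count how often each person is contacted: a crude proxy for
# informal centrality, independent of any official org chart.
inbound = Counter(recipient for _, recipient in messages)

# Rank people by inbound volume; the top of the ranking is the
# inferred informal hub of the group.
ranking = sorted(inbound, key=inbound.get, reverse=True)
print(ranking[0])  # prints: bea
```

Even this toy version makes the structural point: no one observes anyone, yet a hierarchy that no participant ever declared becomes legible from traffic patterns alone.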
At the second level, AI creates a new and structurally unprecedented form of secrecy: the opacity of the algorithm itself. The systems that render human behavior transparent are themselves profoundly opaque. Large language models operate according to principles inaccessible not merely to the individuals whose lives they affect but, in important respects, to the engineers who built them. This opacity is not the result of deliberate concealment but a structural property arising from the mathematical complexity of neural networks.
Traditional secrecy involves a secret-holder — a person or group possessing information and deliberately withholding it. The secret is something that could, in principle, be disclosed. Algorithmic opacity is categorically different. There is no secret-holder in any meaningful sense. The algorithm does not keep a secret. The algorithm is a secret — a form of opacity intrinsic to the technology rather than produced by social arrangements.
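The point can be made concrete with a toy sketch (the network shape, random weights, and input are invented for illustration): every parameter below is fully visible, yet the output is a joint function of all of them, so there is no single weight whose disclosure would reveal "the secret."

```python
import math
import random

random.seed(0)

# A toy two-layer network: 3 inputs, 4 hidden units, 1 output.
# Every weight is fully disclosed; nothing is withheld.
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
W2 = [random.uniform(-1, 1) for _ in range(4)]

def forward(x):
    # The decision is a blend of all weights at once, not a
    # lookup of any one of them.
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

score = forward([1.0, 0.5, -0.2])

# Zeroing a single weight shifts the output only within a
# bounded margin: the "secret" is not located anywhere in
# particular, it is distributed across the whole parameter set.
W1[0][0] = 0.0
perturbed = forward([1.0, 0.5, -0.2])
print(f"before: {score:+.3f}  after zeroing one weight: {perturbed:+.3f}")
```

Scale this from 16 parameters to billions and the asymmetry the text describes emerges: full disclosure of the weights would disclose nothing a secret-holder could have "confessed."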
At the third level, the transformation affects the relationship between secrecy and trust. Trust, in Simmel's analysis, depends on a specific configuration: enough knowledge to make confidence reasonable, enough ignorance to make the trust meaningful. Trust resting on complete knowledge is not trust but verification. The employer who deploys AI to analyze employee communications discovers patterns of sentiment and undisclosed dissatisfactions that the employment relationship's tacit norms would have left unexamined. The increase in knowledge does not produce a corresponding increase in trust; it may produce the opposite.
Simmel's "Die Soziologie des Geheimnisses und der geheimen Gesellschaften" appeared in English translation in the American Journal of Sociology in 1906, among the earliest of his major essays to reach an English-speaking audience, and was subsequently incorporated into the 1908 Soziologie.
The framework influenced generations of sociologists of privacy, secrecy, and surveillance. Its application to AI makes visible a power asymmetry that previous surveillance theories addressed only partially: the combination of unprecedented transparency for the subject with unprecedented opacity for the system that produces the transparency.
Configurations of knowledge and ignorance. Every relationship is defined by a specific pattern of what is disclosed and what is withheld; the pattern constitutes the relationship's form.
Secrecy as achievement. The capacity to conceal creates the interiority on which individual autonomy depends; a world without secrecy is a world without inner life.
Algorithmic transparency as pattern detection. AI systems make the individual progressively more visible through inference from micro-behaviors, without any overt surveillance.
Opacity as structural property. Large language models are not deliberately concealed; they are intrinsically opaque, and no one — not even their builders — fully knows what they do.
The inversion of trust. Trust depends on the structured maintenance of a boundary between the known and the unknown; systems that dissolve this boundary erode trust even as they increase knowledge.
The application sharpens a debate within surveillance studies: whether the shift from visible panoptic observation to invisible algorithmic inference represents continuity or rupture. Simmel's framework supports the rupture view — the sociological category of the secret-holder dissolves when the system itself becomes the secret.
The truth about algorithmic opacity depends entirely on which layer of the system we examine. At the mathematical level, Edo's framing is essentially correct (weighted perhaps 90/10 in its favor): neural networks with billions of parameters genuinely exceed human interpretability in ways that no amount of disclosure could resolve. The contrarian view gains little purchase here; even with complete access to weights and architectures, the opacity remains. But shift the question to why these particular architectures dominate, and the weighting reverses to roughly 20/80: the choice to pursue scale over interpretability, to prioritize performance over explainability, reflects economic and political priorities more than technical necessity.
The question of trust reveals the most balanced tension (an even 50/50). Edo correctly identifies that algorithmic transparency erodes the structured ignorance on which trust depends: the employee whose every keystroke is analyzed cannot maintain the performative boundaries that make workplace relationships function. Yet the contrarian insight is equally valid: this erosion of trust was already underway through managerial surveillance technologies; AI simply accelerates and perfects existing power dynamics. The warehouse worker under algorithmic management loses not new forms of privacy but the last vestiges of autonomy that industrial management had left intact.
The synthetic frame that holds both views might be this: algorithmic opacity is simultaneously a genuine technical phenomenon and a convenient cover for power relations. The mathematical complexity is real, but it becomes the perfect alibi for decisions that were always political. The system that cannot explain itself also cannot be held accountable; the opacity that emerges from architecture becomes indistinguishable from the opacity maintained through law, economics, and organizational structure. Both readings are true because they describe different moments in the same process — the technical becomes political precisely through its claim to be merely technical.