Disclosure requirements address the foundational information asymmetry of the AI economy: the companies building the tools possess detailed knowledge of the choice architecture embedded in their products (defaults, optimization targets, engagement mechanisms, behavioral patterns the design is intended to produce), while users possess almost none of this knowledge and have no structural means to acquire it. The specific disclosures that matter in the AI context are: the metrics the tool is optimized to maximize; the default configuration (continuous availability versus structured sessions, with or without comprehension checks); the engagement mechanisms (variable reward schedules, notification triggers, social proof displays); and the data practices (what behavioral data is collected, how it is used, whether it personalizes the interface in ways affecting behavior). Each disclosure enables the user, the deploying institution, and the regulatory body to evaluate whether the tool's design serves user interests.
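To make the shape of these disclosures concrete, here is a minimal sketch of what a machine-readable disclosure record covering the four categories might look like, assuming disclosures were standardized as structured data rather than free text. All class and field names are illustrative, not drawn from any actual statute or proposal.

```python
# A minimal sketch of a machine-readable disclosure record, assuming
# disclosures were standardized as structured data rather than free text.
# All class and field names here are illustrative, not drawn from any
# actual statute or regulation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EngagementMechanism:
    name: str                # e.g. "variable reward schedule"
    trigger: str             # what user behavior activates it
    intended_effect: str     # the behavioral pattern the design aims to produce

@dataclass
class DisclosureRecord:
    # The metric the tool is optimized to maximize, stated concretely
    # ("daily active minutes"), not abstractly ("user satisfaction").
    optimization_target: str
    # Default configuration: continuous availability vs. structured
    # sessions, and whether comprehension checks are on by default.
    continuous_availability: bool
    comprehension_checks_default: bool
    engagement_mechanisms: List[EngagementMechanism] = field(default_factory=list)
    # Data practices: what behavioral data is collected and whether it
    # personalizes the interface in behavior-affecting ways.
    behavioral_data_collected: List[str] = field(default_factory=list)
    personalizes_interface: bool = False
```

A schema like this does not by itself prevent abstraction, but it gives users, deploying institutions, and regulators a fixed set of questions that every disclosure must answer.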
There is a parallel reading grounded in the political economy of regulatory capture. Disclosure requirements function not primarily as informational tools but as negotiated settlements that formalize corporate power while creating the appearance of oversight. The companies building AI systems possess the technical capacity, the legal resources, and the institutional patience to convert any disclosure mandate into a compliance exercise that reveals nothing substantive. They will disclose optimization targets in language sufficiently abstract to be meaningless ("user satisfaction," "engagement quality"). They will describe engagement mechanisms in technical language that requires expertise to interpret. They will bury comparative information in formats designed to maximize search costs rather than to aid comprehension. The regulatory body, under-resourced and technically outmatched, will accept these disclosures as meeting the statutory requirement, and the disclosure regime will become what it has become in every prior domain: a symbolic achievement that satisfies the political demand for action while leaving the underlying power structure intact.
The deeper problem is that meaningful disclosure requires revealing competitive intelligence—the specific behavioral mechanisms that constitute the commercial value of the product. No company will voluntarily disclose the variable reward schedules, the notification timing algorithms, the A/B test results showing which interface features maximize daily active usage. The only disclosures that actually get mandated are the ones that reveal nothing a competitor could use. What remains is a disclosure architecture designed not for user protection but for liability management—detailed enough to demonstrate compliance, opaque enough to prevent understanding, formatted to survive judicial review rather than enable informed choice. The political economy produces disclosure theater, not disclosure substance.
Disclosure alone does not change behavior. This is one of the most robust findings in the behavioral literature, and it is the finding that distinguishes the nudge framework from the informational approach that preceded it. Decades of research on financial disclosure, nutritional labeling, and privacy notices consistently demonstrate that information provision by itself produces minimal behavioral effect. People do not read disclosures. When they read them, they do not understand them. When they understand them, they do not act on them, because the gap between information and action is bridged not by knowledge but by the architecture of the choice environment.
The disclosure requirement is therefore foundational rather than culminating. It creates the transparency on which more substantive interventions — default standards, deliberative oversight, ongoing evaluation — depend. Without disclosure, defaults are designed blind. Without disclosure, the deliberative body cannot assess what the commercial architecture is actually doing. Without disclosure, users cannot make informed choices in the contexts where the override matters most. Disclosure is the floor beneath the architecture, not the architecture itself.
The political economy of disclosure is the mechanism's most important feature. Companies building AI tools possess strong financial incentives to resist requirements that would reveal engagement-maximizing features of their interfaces. The countervailing force is public demand for institutional protection, which depends on public understanding of what is at stake. Transparency about the choice architecture of AI tools would, if widely understood, generate the political pressure that sustains the more substantive interventions. The people who learn that the AI tool they use every day was designed to maximize their engagement rather than their well-being become the constituency for the regulatory architecture that protects their interests. The disclosure requirement is the foundation not only informationally but politically.
The technical challenge of meaningful disclosure is substantial. Simple text disclosures in terms-of-service documents produce no behavioral effect. Effective disclosure requires formats designed for comprehension — visual representations of engagement mechanisms, comparative displays of default configurations across competing products, plain-language descriptions of optimization targets. The design of disclosure itself becomes a choice architecture problem, and the quality of disclosure design determines whether the requirement produces informed users or formal compliance.
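As a toy illustration of the comparative, plain-language format the paragraph describes, the following sketch renders hypothetical disclosure records as a side-by-side display. The product names and values are invented, and this format is one possibility among many.

```python
# A sketch of a comprehension-oriented disclosure display: structured
# records rendered as a plain-language, side-by-side comparison rather
# than dense terms-of-service text. All products and values are invented.

def render_comparison(disclosures: list[dict]) -> str:
    """Render disclosure records as a plain-text comparison table."""
    rows = [
        ("Optimized to maximize", "optimization_target"),
        ("Default session structure", "default_sessions"),
        ("Variable reward schedules", "variable_rewards"),
        ("Behavioral data collected", "data_collected"),
    ]
    # Header row: one right-aligned column per product.
    lines = [" " * 28 + "".join(f"{d['product']:>24}" for d in disclosures)]
    for label, key in rows:
        lines.append(f"{label:<28}" + "".join(f"{str(d[key]):>24}" for d in disclosures))
    return "\n".join(lines)

if __name__ == "__main__":
    print(render_comparison([
        {"product": "Tool A", "optimization_target": "daily active minutes",
         "default_sessions": "continuous", "variable_rewards": "yes",
         "data_collected": "full interaction log"},
        {"product": "Tool B", "optimization_target": "task completion",
         "default_sessions": "structured sessions", "variable_rewards": "no",
         "data_collected": "aggregate usage only"},
    ]))
```

Even this trivial format embodies design decisions (which rows to include, how to label them, how to order products) that shape comprehension, which is the sense in which disclosure design is itself a choice architecture problem.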
Disclosure as regulatory tool has a long history in American administrative law, running through securities regulation (1933), truth-in-lending (1968), nutritional labeling (1990), and privacy notices (various). Sunstein's contribution has been to integrate disclosure into the broader behavioral framework and to specify conditions under which disclosure is effective (simplicity, salience, timing) versus merely formal (dense text, buried location, infrequent presentation).
- **Information asymmetry is foundational.** Builders know the choice architecture; users do not; the gap cannot be closed without deliberate disclosure.
- **Disclosure alone is insufficient.** The behavioral literature consistently demonstrates that information provision produces minimal behavioral effect without complementary architectural interventions.
- **Disclosure enables other interventions.** Transparency creates the informational foundation for defaults, deliberation, and sunset review; remove disclosure and the architecture degrades.
- **Disclosure design matters.** Dense text buried in terms-of-service produces formal compliance; visual, comparative, plain-language disclosure produces informed users.
The substantive question is what disclosure requirements can realistically accomplish given the political and technical constraints. On the foundational point—that users cannot evaluate AI tools without transparency about their choice architecture—Edo's position is fully correct (100%). The information asymmetry is structural and cannot be addressed through market mechanisms alone. The contrarian concern about capture is equally valid (80%) as a description of *default institutional behavior*—without sustained political pressure, disclosure regimes degrade into compliance exercises. The synthesis is that disclosure requirements serve as *necessary but insufficient scaffolding* for a broader regulatory architecture, and their effectiveness depends entirely on whether they are designed as foundations for further intervention or as substitutes for it.
The weighting shifts when we consider different disclosure mechanisms. For technical disclosures aimed at deploying institutions and regulatory bodies (rather than end users), Edo's framework is nearly fully applicable (90%)—these audiences have the technical capacity to interpret optimization targets and engagement mechanisms, and mandatory transparency enables informed institutional choices. For consumer-facing disclosures, the contrarian reading dominates (70%)—the history of disclosure failure in consumer contexts is overwhelming, and there is no reason to expect AI disclosures to perform differently without radical innovation in format and enforcement.
The productive frame is *disclosure as political infrastructure*. The point is not that disclosure alone changes behavior—Edo explicitly argues it does not—but that transparency about choice architecture creates the conditions for constituencies to form around substantive protections. When users, institutions, and advocacy groups can see what AI systems are optimized to do, they acquire the informational basis to demand defaults, oversight structures, and evaluation mechanisms that align tools with human interests. Disclosure is the floor, not the ceiling—but the floor must be built to bear weight.