The fishbowl condition names the specific form of democratic exclusion that characterizes contemporary AI governance. Affected populations are not ignorant of what is happening — they read policy proposals, watch congressional hearings, follow industry announcements. They are not excluded from information. They are excluded from influence. They exist inside a fishbowl of observership, looking out at governance processes they can see but cannot touch. The condition is not new in democratic governance, but AI's specific features — rapid pace, technical complexity, global scope — intensify its consequences by disabling the intermediary institutions (unions, advocacy organizations, traditional regulatory channels) that historically mitigated it.
The condition extends Segal's fishbowl metaphor from individual cognitive confinement to structural political exclusion. Where Segal's original use described the invisible assumptions within which individuals think, Fung's extension describes the invisible institutional barriers within which populations are governed. Both fishbowls are transparent — the water is clear, the glass is invisible — but both are impenetrable from inside.
AI governance in 2026 exhibits the condition in multiple forms. Corporate governance of AI is conducted by teams whose expertise is genuine but whose institutional position constrains their perspective toward metrics that reflect company interests. Regulatory governance is nominally open through comment periods, but the participation those periods afford is consultative rather than consequential. Academic governance is mediated through publications and advisory boards that operate within academic incentive structures which may not align with the interests of affected populations.
The condition is compounded by what functions as a legitimation mechanism: the use of expert authority to justify outcomes that serve particular interests while presenting them as technically necessary. When a technology company convenes an AI ethics advisory board staffed with prominent academics, the board serves a legitimation function regardless of its analytical quality — its existence communicates responsible governance even when its recommendations are merely advisory, its composition excludes affected populations, and corporate decision-making authority remains unilateral.
The recursive dimension of the condition — that AI now enables simulation of the inclusion it structurally denies — is the feature distinguishing it from historical analogues. Public comment systems designed to channel citizen voice are vulnerable to AI-generated synthetic comments that simulate broad participation. The fishbowl is not merely thick; it is being equipped with technology that projects false images of the populations inside it out to the decision-makers looking in, further distorting governance.
The concept emerged from Fung's application of democratic theory to the specific institutional landscape of AI governance. The observation that affected populations had access to information about AI decisions but no influence over them required a framework distinct from simple exclusion — one capable of explaining why populations that are visible and informed still lack governance power.
The metaphor's extension from Segal's cognitive use to Fung's institutional use was deliberate and reciprocal: the cross-pollination between The Orange Pill and Fung's analysis represents the integration of individual and structural analyses of AI's democratic implications.
Observation without influence is the specific form of exclusion. The fishbowl condition is not about information access but about governance power, and AI governance systematically decouples the two.
Intermediary institutions are disabled. Unions, advocacy organizations, and traditional regulatory channels that historically mitigated similar conditions in other domains are underdeveloped for AI.
Legitimation mechanisms compound exclusion. Advisory boards and consultation processes create the appearance of democratic governance without its substance, inoculating institutions against demands for genuine inclusion.
AI enables simulation of the inclusion it denies. Synthetic comments and AI-generated advocacy can fake participatory legitimacy, further distorting governance.