The technocratic paradox is the productive tension between Webb's two deepest commitments: her trust in the trained professional administrator as the agent of social reform, and her commitment to industrial and political democracy as the only legitimate basis for governance. Webb did not trust unregulated markets to allocate resources justly, and did not, in her more candid moments, entirely trust electorates to choose wisely. What she trusted was the professional administrator — the person educated in social investigation, trained in public policy, and disciplined by civil-service norms. Yet she simultaneously championed trade unions, collective bargaining, and the principle that workers should have a voice in the decisions that governed their working lives. The tension was never resolved in her lifetime and has returned in acute form in contemporary debates over who should govern AI.
The Webbs established the London School of Economics in 1895 not as academic philanthropy but as an instrument of social engineering — a factory for the production of the trained minds that would staff the reformed state. 'The issues of capitalism,' Webb argued, would be resolved not by workers seizing the means of production but by 'professional experts' — a highly trained elite of administrators and specialists who would organise the socialist society through rational investigation and systematic policy.
The question maps onto the central governance question of the AI transition with uncanny precision: should AI be governed by the people who understand it or by the people who are affected by it? The technology sector's implicit answer has overwhelmingly been the former. AI governance is conducted by a narrow priesthood — the researchers who build the systems, the executives who deploy them, the policy specialists who draft regulations, the safety teams who evaluate risks. Affected populations are consulted, if at all, through mechanisms that are advisory rather than authoritative.
The Fabian model of governance — expert design, democratic legitimation, professional administration — produced institutions of extraordinary durability: the National Health Service, national insurance, the apparatus of labour regulation. It also produced systematic limitations. Expert-designed systems tend to serve populations that the experts understand, which are typically populations resembling the experts themselves. The welfare state the Webbs designed systematically disadvantaged women, minorities, and workers in non-standard employment — precisely the populations whose experience the experts had not investigated with sufficient care.
The resolution lies not in choosing between expertise and democracy but in designing institutions that embody both — in which expert knowledge informs democratic deliberation, and democratic deliberation constrains expert authority. The trade boards Webb helped design were an early attempt at this synthesis: expert-designed institutions that included representatives of affected populations in their governance. The AI moment requires a more ambitious version — governance bodies that include workers, communities, and citizens alongside technology specialists, with authority that is binding rather than merely advisory.
The tension was present throughout Webb's career but became most visible in the disagreements within the early Fabian Society over the proper role of expertise versus mass democratic action, and in her later, more troubled enthusiasm for Soviet administrative planning — an enthusiasm that found its fullest expression in the Webbs' 1935 Soviet Communism: A New Civilisation? and that subsequent history rendered deeply problematic.
Expertise without democracy serves experts. Governance by specialists tends to reproduce the blind spots and interests of the specialist class.
Democracy without expertise is overwhelmed. Complex technological systems cannot be governed by deliberation alone; the claim that they can is as naïve as the claim that markets self-correct.
The synthesis requires binding participation. Advisory inclusion is insufficient; affected populations must have genuine authority over decisions that shape their lives.
The synthesis rests on four design principles. Transparency, participation, accountability, and subsidiarity are the conditions under which expert and democratic authority can operate together.
The contemporary AI-governance debate reproduces the Fabian tension with high fidelity. Technical voices argue that complex systems require specialist regulation; participatory-democracy scholars argue that affected populations must have binding authority. The Webbian answer — both, institutionally integrated — remains the harder and more productive path.