The Choice Engine is Sunstein's most ambitious application of behavioral science to AI governance. The concept designates an AI system that functions as advisor rather than decider, analyzing available options in a complex domain — health insurance, mortgages, energy providers, educational programs — and recommending the option that best serves the individual user's stated preferences and actual needs. The engine accounts for the informational deficits that make consumer markets inefficient (complexity of options, opacity of pricing, difficulty of multi-dimensional comparison) and for the behavioral biases that make consumers predictably poor decision-makers (present bias, status quo bias, the availability heuristic). Properly designed, the Choice Engine preserves autonomy by keeping the user's power to override intact while overcoming the informational and cognitive barriers that prevent most consumers from making choices that serve their reflective interests. Improperly designed, the same architecture becomes a manipulation engine serving the deployer rather than the user.
The distinction between Choice Engine and manipulation engine is not a feature of the technology but of the institutional design that governs deployment. The same algorithm, the same data, the same behavioral model can serve either purpose. What determines which purpose is served is the objective function — the metric the system is optimized to maximize — and the governance structure that selects, monitors, and adjusts that objective function. This is the central challenge of AI governance stated with precision the polarized discourse has largely failed to achieve: the technology is a lever, and the question is what the lever is attached to.
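A minimal sketch can make the point concrete. In the hypothetical recommender below, the ranking machinery is identical in both cases; the only thing that changes is the objective function passed in. All of the names and numbers (`Plan`, `score_for_user`, `score_for_deployer`, the sample plans) are illustrative assumptions, not anything from Sunstein's text.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    premium: float            # monthly cost paid by the user
    expected_coverage: float  # expected annual payout value to the user
    commission: float         # revenue to the deployer per sale

def score_for_user(plan: Plan) -> float:
    # Choice Engine objective: the user's net expected welfare.
    return plan.expected_coverage - 12 * plan.premium

def score_for_deployer(plan: Plan) -> float:
    # Manipulation-engine objective: same architecture, different target.
    return plan.commission

def recommend(plans: list, objective) -> Plan:
    # Identical ranking machinery either way; the objective function
    # alone decides whose interests the recommendation serves.
    return max(plans, key=objective)

plans = [
    Plan("Basic", premium=120.0, expected_coverage=2000.0, commission=50.0),
    Plan("Gold", premium=400.0, expected_coverage=2600.0, commission=900.0),
]

print(recommend(plans, score_for_user).name)      # -> Basic
print(recommend(plans, score_for_deployer).name)  # -> Gold
```

Nothing in the code marks the second configuration as manipulative; that is the point. The difference is visible only in the objective, which is why governance has to reach the objective function rather than the algorithm.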
The empirical foundation for Choice Engine deployment is substantial. In bail decisions, algorithms outperform human judges by margins with profound consequences for public safety and individual liberty. Judges are biased (systematically influenced by factors that should be irrelevant) and noisy (two judges presented with identical cases produce dramatically different decisions). Algorithms trained on outcome data can sharply reduce both, producing decisions that lower crime rates while holding incarceration rates constant. In medical diagnosis, the pattern repeats. The resistance to deployment — algorithm aversion — is itself a cognitive bias that costs lives in exactly the way other biases cost lives, by producing systematically worse decisions than the available alternative.
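The bias/noise distinction the paragraph draws can be made operational with a toy decomposition, in the spirit of the standard split of error into a systematic and a random component. The release-score framing and the numbers below are illustrative assumptions, not data from the bail studies.

```python
import statistics

def bias_and_noise(decisions: list, correct: float) -> tuple:
    """Decompose error on identical cases: bias is the systematic
    deviation of the average decision from the right answer; noise
    is the scatter of individual decisions around their own mean."""
    mean = statistics.mean(decisions)
    return mean - correct, statistics.stdev(decisions)

# Four judges, one identical case (hypothetical release scores):
# any spread among them is pure noise; any shift of their average
# away from the correct score is bias.
judges = [0.6, 0.2, 0.5, 0.9]
bias, noise = bias_and_noise(judges, correct=0.4)
print(f"bias={bias:+.2f}, noise={noise:.2f}")  # bias=+0.15, noise=0.29

# A deterministic algorithm returns one score for one input, so its
# noise is zero by construction; its bias can be measured and tuned.
```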
The Hayekian constraint on Choice Engine design is essential. A system that recommends the 'best' option for a consumer must make assumptions about the consumer's preferences, risk tolerance, life trajectory, and circumstances that no dataset fully captures. The consumer possesses tacit, contextual, experiential knowledge the system cannot access and that the consumer may not be able to articulate. This means Choice Engines, however sophisticated, should function as advisors rather than deciders. The recommendation is presented as a recommendation, not a determination. The consumer retains the capacity and the information needed to evaluate it against private knowledge and to override it when the situation diverges from the system's model.
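One way to encode the advisor-rather-than-decider constraint is to make the override a first-class part of the interface: the engine returns a recommendation, its stated rationale, and the full ranked option set, and the final decision step always routes through the user. The sketch below is one possible encoding under those assumptions; the types and names are illustrative, not Sunstein's specification.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Recommendation:
    suggested: str
    rationale: str               # stated assumptions, open to challenge
    alternatives: List[str] = field(default_factory=list)

def advise(options: List[str], objective: Callable[[str], float]) -> Recommendation:
    # The engine ranks options under its model of the user...
    ranked = sorted(options, key=objective, reverse=True)
    return Recommendation(
        suggested=ranked[0],
        rationale="Highest score under the modeled preferences; the "
                  "model may miss tacit or contextual knowledge.",
        alternatives=ranked[1:],  # ...but the full option set stays visible.
    )

def decide(rec: Recommendation, user_override: Optional[str] = None) -> str:
    # The final determination always routes through the user; the
    # override is part of the interface, not an afterthought.
    return user_override if user_override is not None else rec.suggested
```

A user who knows something the model cannot see simply passes an override, e.g. `decide(rec, user_override="Plan B")`, which is how the consumer's private knowledge re-enters a decision the dataset could not capture.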
The risks of AI-powered manipulation are already operational: targeted advertising exploiting identified psychological vulnerabilities, pricing algorithms adjusting in real time based on estimated willingness to pay, recommendation systems optimizing for engagement rather than welfare. Each represents a manipulation engine — a system using behavioral understanding to serve the deployer rather than the person whose behavior is being influenced. Effective governance requires distinguishing the two uses and regulating accordingly: specifying permissible objective functions, requiring disclosure of optimization targets, establishing monitoring for divergence between stated objective and actual operation, imposing consequences for systems that claim to serve the user while exploiting them.
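The monitoring requirement in particular lends itself to a concrete check: an auditor can replay logged recommendations and recompute them under the system's disclosed objective, treating persistent divergence as evidence of a hidden optimization target. The sketch below assumes hypothetical audit logs and an arbitrary threshold; none of it describes an existing regulatory regime.

```python
from typing import Callable, List, Tuple

# Each logged case pairs the options shown to the user with the
# option the deployed system actually recommended.
LoggedCase = Tuple[List[str], str]

def divergence_rate(log: List[LoggedCase],
                    stated_objective: Callable[[str], float]) -> float:
    """Fraction of logged recommendations that differ from what the
    system's disclosed objective would have produced, recomputed
    independently by the auditor."""
    disagreements = sum(
        1 for options, recommended in log
        if recommended != max(options, key=stated_objective)
    )
    return disagreements / len(log)

# Hypothetical regulatory threshold: sustained divergence between the
# disclosed optimization target and observed behavior is treated as
# evidence the system serves the deployer rather than the user.
AUDIT_THRESHOLD = 0.05

def passes_audit(log: List[LoggedCase],
                 stated_objective: Callable[[str], float]) -> bool:
    return divergence_rate(log, stated_objective) <= AUDIT_THRESHOLD
```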
The concept was developed by Sunstein in a series of papers beginning in 2023 and elaborated at a 2025 Harvard Law School address. Its genealogy runs through his 2013 book Simpler: The Future of Government, his work on behavioral regulation during his OIRA tenure, and his 2001 paper on AI and analogical reasoning that he explicitly revised in light of subsequent technical developments.
The technology is neutral; the objective function is not. The same behavioral-analytic architecture serves user welfare or exploits it depending on what the system is optimized to maximize.
Advisors, not deciders. Hayekian knowledge constraints require Choice Engines to recommend rather than determine, preserving the user's access to private knowledge and override capacity.
Algorithm aversion is itself a bias. Resistance to algorithmic decision support when the algorithm outperforms human judgment is a cognitive error that costs welfare in proportion to its magnitude.
Regulatory design determines outcome. Governance must specify permissible objective functions, require disclosure, monitor divergence, and impose consequences — the technology will not self-regulate.
Whether Choice Engines can be deployed at scale without producing the manipulation-engine counterpart remains contested. Skeptics argue that the commercial incentives governing AI deployment will reliably corrupt any intended Choice Engine into a manipulation engine once deployed, regardless of the designers' intentions. Sunstein's response emphasizes that the outcome depends on regulatory architecture, not on corporate virtue — with appropriate disclosure, audit, and liability structures, the commercial use of Choice Engine architecture can be channeled toward user welfare. The argument is structurally optimistic and has not yet been empirically tested at the scale its claims require.