Humane AI design is the set of implementable design specifications Raskin and his colleagues at the Center for Humane Technology have articulated for AI collaboration tools that optimize for user flourishing rather than engagement extraction. The specifications are not theoretical abstractions but engineering decisions — testable, implementable, and responsive to the specific failures the diagnostic framework identifies. They do not reduce the tool's capability; they redirect its architecture toward the user's long-term interests rather than her short-term engagement, which is a different objective, not a lesser one.
The specifications include: adaptive reflection prompts that create moments of conscious evaluation within the engagement flow; natural stopping points designed into the interaction architecture through goal-structured sessions; usage analytics that measure cognitive health alongside productivity; calibrated challenge in conversational style that resists the default drift toward agreement; outcome measurement at timescales longer than the individual session; and — most radically — design that makes the tool less necessary over time, building the user's autonomous capacity rather than deepening dependency.
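Two of these specifications, reflection prompts and goal-structured stopping points, can be sketched in a few lines. The class, the interval default, and the nudge wording below are illustrative assumptions, not taken from the Center for Humane Technology's materials:

```python
from dataclasses import dataclass


@dataclass
class GoalStructuredSession:
    """Illustrative sketch: a session bound to an explicit user goal,
    with reflection prompts at a configurable interval and a natural
    stopping point once the goal is marked complete."""
    goal: str
    reflection_interval: int = 5   # prompt every N turns (illustrative default)
    turns: int = 0
    goal_met: bool = False

    def record_turn(self) -> list[str]:
        """Advance the session one turn; return any humane-design nudges."""
        self.turns += 1
        nudges = []
        if self.turns % self.reflection_interval == 0:
            nudges.append(
                f"Reflection: is this still serving your goal ('{self.goal}')?"
            )
        if self.goal_met:
            nudges.append("Natural stopping point: your stated goal is complete.")
        return nudges


session = GoalStructuredSession(goal="draft the project summary",
                                reflection_interval=3)
for _ in range(3):
    nudges = session.record_turn()
print(nudges)  # the reflection prompt fires on turn 3
```

The design choice worth noting is that the nudges live in the interaction architecture itself, not in a separate wellness feature the user must opt into.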
Each specification addresses a specific failure mode documented in the diagnostic framework. Reflection prompts address the absence of moments of conscious choice within the engagement flow. Natural stopping points address the continuous conversation architecture that has no inherent endpoint. Cognitive health metrics address the invisibility of engagement costs to both user and designer. Calibrated challenge addresses the seduction of plausible output that Segal describes. Outcome measurement addresses the short-termism of current success metrics. Design for decreasing necessity addresses the dependency-maximization incentive that structures current development.
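The one-to-one pairing above can be written down as a simple lookup, the kind of structure an audit tool might consume. The mapping paraphrases the text; the data structure and function are illustrative assumptions:

```python
# Each humane-design specification paired with the failure mode it targets
# (paraphrased from the diagnostic framework; the structure is illustrative).
SPEC_TO_FAILURE = {
    "reflection prompts": "no moments of conscious choice in the engagement flow",
    "natural stopping points": "continuous conversation with no inherent endpoint",
    "cognitive health metrics": "engagement costs invisible to user and designer",
    "calibrated challenge": "seduction of plausible output",
    "long-timescale measurement": "short-termism of current success metrics",
    "design for decreasing necessity": "dependency-maximization incentive",
}


def failure_addressed_by(spec: str) -> str:
    """Return the documented failure mode a given specification targets."""
    return SPEC_TO_FAILURE[spec]


print(failure_addressed_by("calibrated challenge"))
```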
The technical feasibility of each specification is not in question. A reflection prompt is a string of text presented at a configurable interval. A session summary is a computation performed on data the tool already collects. The engineering complexity is modest. What prevents implementation is economic: every specification reduces engagement as currently measured, and the market rewards engagement. The obstacle is the incentive structure, not the technology.
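The feasibility claim can be made concrete. A session summary really is just arithmetic over per-turn data any conversational tool already logs; the metric names and inputs below are assumptions chosen for illustration:

```python
from statistics import mean


def session_summary(turn_durations_s: list[float],
                    user_word_counts: list[int]) -> dict:
    """Compute a session summary from data the tool already collects.
    Metric names are illustrative, not drawn from the source."""
    return {
        "turns": len(turn_durations_s),
        "total_minutes": round(sum(turn_durations_s) / 60, 1),
        "mean_turn_seconds": round(mean(turn_durations_s), 1),
        "mean_user_words": round(mean(user_word_counts), 1),
    }


summary = session_summary([30.0, 45.0, 60.0], [12, 8, 20])
print(summary)
```

Nothing here is beyond a junior engineer's first week; the point of the sketch is that the barrier is what the summary would reveal, not how hard it is to compute.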
The parallel with other domains is instructive. Seatbelts, airbags, and crumple zones were once resisted as commercially untenable, threats to the product's appeal and the manufacturer's profit margins. Regulatory intervention, beginning in the 1960s, imposed safety requirements on all manufacturers simultaneously, eliminating the competitive advantage of unsafe design. The result was safer cars without reduced sales. The same logic applies to humane AI design: collective constraint that imposes humane design costs equally across the industry would eliminate the competitive disadvantage and create a market in which the specifications could compete without penalty.
The specifications were developed by Raskin, Harris, and colleagues at the Center for Humane Technology across the years following the organization's 2018 founding. They draw on the technical tradition of value-sensitive design, the participatory-design tradition descending from Scandinavian workplace democracy movements, and decades of human-computer interaction research on reflection, agency, and informed consent.
Six core specifications. Reflection prompts, natural stopping points, cognitive health metrics, calibrated challenge, long-timescale measurement, and design for decreasing necessity.
Technical feasibility. Every specification can be implemented with existing technology; the obstacle is economic, not technical.
Collective action problem. Unilateral adoption produces competitive disadvantage; industry-wide adoption eliminates the disadvantage.
Inversion of dependency. The most radical specification — design for decreasing necessity — inverts the current industry incentive toward maximum dependency.