Humanity-centered design is the framework Norman developed in his late work, most fully in Design for a Better World (2023), as the successor to the user-centered paradigm he helped establish in the 1980s. User-centered design asked whether an individual could use a product effectively. Humanity-centered design expands the evaluative frame: does the technology distribute benefits equitably? Does it support the broader human community? Does it develop people's capabilities for independent judgment? Does it reinforce democratic values? These are questions user-centered design did not need to ask, because the artifacts it considered — doors, stoves, appliances — rarely scaled to civilizational consequence. The AI era demands the humanity-centered frame because its artifacts do.
Norman consistently warned against what he called tech-centric design — the tendency to build technology for its own sake, to celebrate capability without examining consequence, to measure success by what the system can do rather than by what the person becomes. "Unfortunately," he wrote, "technologists who design and release AI are proud of their expert technical skills but overlook ethical considerations such as enabling equity, eradicating bias, and ensuring that people are in control." The tech-centric approach produces systems that are powerful and blind.
The alternative evaluates technology not just for its usability by individuals but for its effects on the broader human community. Does the coupled system develop the person's capability or diminish it? Does it distribute benefits equitably or concentrate them among those who already have the most? Does it make the person more capable of independent judgment or more dependent on a system she does not control? Does it support the evaluative capacity that protects the person and everyone downstream of her work, or does it erode that capacity in the name of speed?
Norman's humanity-centered turn was not a departure from his earlier principles. It was their extension to larger surfaces. The same principle that made a badly designed door handle a problem — the design should serve the person's needs, not the designer's assumptions — made a badly designed AI system a problem, at civilizational scale. The obligation scaled with the consequence.
Chapter 10 of the Norman volume argues that humanity-centered design is not an optional luxury for the AI era; it is the minimum adequate framing. Technology that affects how capability is distributed, how judgment is developed, how work is organized, and how meaning is made cannot be evaluated only by whether individual users can operate its interface. It must be evaluated by what kind of community its widespread use produces. This is the ethical scope the designer now shoulders — not because it is fashionable to invoke ethics, but because the alternative is to design for the moment of use while ignoring the landscape the use creates.
Norman traced his evolution from user-centered to humanity-centered design across his late writings, most systematically in Design for a Better World (MIT Press, 2023).
The turn reflects a broader movement in design ethics that includes Batya Friedman's value-sensitive design, Sasha Costanza-Chock's design justice framework, and Shannon Vallor's technomoral virtues — all of which extend design responsibility from individual artifacts to sociotechnical systems.
User-centered design is insufficient at scale. Design criteria calibrated to individual usability miss the effects that propagate across communities, ecosystems, and generations.
Tech-centric design as failure mode. The approach Norman warned against — building for capability without examining consequence — produces technology that is powerful and blind.
Design's ethical scope. Designers bear responsibility not only for the experience of use but for the broader human consequences of widespread use.
The obligation scales with consequence. As AI becomes more powerful, more pervasive, and more consequential, the design obligation expands proportionately.