The Capability-Sensitive Framework for AI — Orange Pill Wiki
CONCEPT

The Capability-Sensitive Framework for AI

The 2025 operational proposal by Saptasomabuddha and colleagues to evaluate AI systems by their impact on capability floors and life-plan alignment — the most developed technical application of Sen's framework to AI governance.

The Capability-Sensitive Framework is the most technically developed proposal for operationalizing Sen's capability approach in AI governance. Published in AI and Ethics in 2025 by Saptasomabuddha and colleagues, it specifies two normative guardrails for AI deployment: a capability floor, which ensures no individual is pushed below thresholds for essential freedoms by an AI system, and a life-plan ceiling, which guarantees people retain viable paths toward meaningful goals. These guardrails are operationalized through quantitative metrics — the Capability-Coverage Ratio and the Life-Plan Alignment Score — computable for specific AI systems in specific deployment contexts. The framework demonstrates that capability-sensitive AI evaluation is technically feasible; whether it is institutionally achievable remains open.

In the AI Story


The framework is model-agnostic, meaning it can be applied to any AI system regardless of technical architecture. It is context-sensitive, producing different evaluations of the same system in different institutional contexts. And it is human-centric in Sen's specific sense: it evaluates systems not by technical performance but by impact on the substantive freedoms of affected people.

The Capability-Coverage Ratio measures the extent to which an AI deployment maintains or expands access to essential capabilities — health, education, economic participation, political voice, and so on — for the affected population. A deployment that preserves or extends these capabilities scores high; a deployment that restricts them scores low. The measure is relativized to the relevant population, so that concentrated benefits that leave large groups worse off do not produce high scores.
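The published paper's exact formula is not reproduced in this entry. As a minimal illustrative sketch only, a per-person coverage measure averaged over the affected population could look like the following; the capability names, thresholds, and aggregation rule here are all assumptions for illustration, not the paper's definitions.

```python
# Hypothetical capability dimensions and thresholds -- illustrative only;
# the published framework specifies its own list and cutoffs.
CAPABILITIES = ["health", "education", "economic_participation", "political_voice"]
THRESHOLDS = {c: 0.5 for c in CAPABILITIES}

def coverage(person_scores: dict) -> float:
    """Fraction of essential capabilities a person holds at or above threshold."""
    met = sum(1 for c in CAPABILITIES if person_scores[c] >= THRESHOLDS[c])
    return met / len(CAPABILITIES)

def capability_coverage_ratio(population_after: list[dict]) -> float:
    """Population-level ratio: mean per-person coverage after deployment.
    Averaging per person, rather than summing aggregate gains, keeps
    concentrated benefits from masking groups pushed below thresholds."""
    return sum(coverage(p) for p in population_after) / len(population_after)
```

Because each person contributes equally, a deployment that lifts a few people far above threshold while dropping many others below it scores low, matching the relativization described above.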

The Life-Plan Alignment Score measures the extent to which an AI deployment supports or undermines the achievable paths toward meaningful life goals of affected individuals. A hiring algorithm that systematically screens candidates out of developmental career paths would score poorly on life-plan alignment even if its predictive accuracy is high. A recommendation system that narrows users' exposure to the point of foreclosing intellectual or creative development would score poorly even if user satisfaction is high.
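Again as a hypothetical sketch rather than the paper's formula: one simple way to operationalize this idea is to ask, for each affected person, what fraction of their previously viable paths toward stated goals remain open after deployment, then average over the population. The path representation and aggregation below are invented for illustration.

```python
def life_plan_alignment(paths_before: dict, paths_after: dict) -> float:
    """Hypothetical alignment score: for each person, the fraction of
    previously viable goal paths still open after deployment, averaged
    over the population. Paths are represented as sets of labels."""
    ratios = []
    for person, before in paths_before.items():
        after = paths_after.get(person, set())
        if before:  # only people who had viable paths contribute
            ratios.append(len(before & after) / len(before))
    return sum(ratios) / len(ratios) if ratios else 1.0
```

Under this sketch, the hiring algorithm above would lower the score for every candidate whose developmental path it forecloses, regardless of how accurate its predictions are.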

The framework's significance extends beyond its specific metrics. It demonstrates that the evaluative revolution Sen's framework demands is not merely aspirational but technically specifiable. The question shifts from 'can we measure AI's impact on capability?' to 'will we adopt the measurement frameworks that are available?' The latter is an institutional and political question, answerable only through the public reasoning Sen identifies as indispensable to democratic governance of powerful technologies.

Origin

The framework was published by Saptasomabuddha, Nath, and colleagues in AI and Ethics in 2025, building on earlier work translating capability theory into AI ethics frameworks.

Key Ideas

Two guardrails. Capability floor (no pushing below essential thresholds) and life-plan ceiling (preserving paths to meaningful goals).

Quantitative operationalization. Capability-Coverage Ratio and Life-Plan Alignment Score provide computable metrics.

Model-agnostic and context-sensitive. Applicable to any AI system, evaluating differently in different institutional contexts.

Technical feasibility demonstrated. Capability-sensitive AI evaluation is not merely aspirational but specifiable and computable.

Further reading

  1. Saptasomabuddha et al., 'A Capability-Sensitive Framework for AI Governance,' AI and Ethics, 2025
  2. Jason Millar et al., 'Accounting for diversity in AI through a capability lens,' AI & Society, 2023
Part of The Orange Pill Wiki · A reference companion to the Orange Pill Cycle.