"Homeostasis and soft robotics in the design of feeling machines," published in Nature Machine Intelligence in 2019 by Kingson Man and Antonio Damasio, is the clearest articulation of how the somatic marker framework might apply to artificial systems. The paper proposes that machines implementing processes analogous to homeostasis — continuous self-regulation of internal states whose maintenance is existentially required for continued operation — might thereby acquire the evaluative capacity that current AI systems lack. The proposal is not a claim that current AI feels; it is a blueprint for what building feeling machines would require, and a concession that current systems do not meet the requirements.
The paper emerged from a conversation Kingson Man had with Damasio about the difference between a dog navigating a sidewalk and a Roomba navigating a living room. The dog moved with an adaptive intelligence the Roomba could not match. The difference, Man argued, was vulnerability: the dog's body could be hurt, and the hurt mattered to everything it did; the Roomba's could not be hurt in any sense that registered.
The proposal has three components. First, machines would need bodies whose continued integrity is non-trivially at stake. Second, these bodies would need continuous internal monitoring generating states analogous to homeostatic feelings. Third, these states would need to feed back into the machine's decision-making, biasing processing toward actions that maintain the body and away from those that threaten it.
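To make the three components concrete, here is a minimal sketch in Python. It is my illustration, not code from the paper: the internal variables, setpoints, and actions are invented for the example. A scalar "feeling" signal is computed from deviations between monitored body state and setpoints, and action selection is biased toward whatever is predicted to feel best.

```python
import random

# Hypothetical illustration, not code from the paper. The internal variables,
# setpoints, and actions are invented to make the three components concrete.

SETPOINTS = {"energy": 0.8, "temperature": 0.5, "integrity": 1.0}

def feeling(state: dict) -> float:
    """Component 2: a valence signal from continuous internal monitoring.
    More negative as internal variables drift from their setpoints."""
    return -sum((state[k] - SETPOINTS[k]) ** 2 for k in SETPOINTS)

def predict(state: dict, action: str) -> dict:
    """A crude internal model of how each action would move the body's state."""
    nxt = dict(state)
    if action == "recharge":
        nxt["energy"] = min(1.0, nxt["energy"] + 0.2)
    elif action == "explore":
        nxt["energy"] -= 0.1
        nxt["integrity"] -= 0.05 * random.random()  # component 1: exploring risks damage
    elif action == "rest":
        nxt["temperature"] = max(0.5, nxt["temperature"] - 0.1)
    return nxt

def choose(state: dict, actions=("recharge", "explore", "rest")) -> str:
    """Component 3: feeling feeds back into decision-making, biasing the
    agent toward actions predicted to maintain the body."""
    return max(actions, key=lambda a: feeling(predict(state, a)))

state = {"energy": 0.3, "temperature": 0.7, "integrity": 0.9}
print(choose(state))  # with energy low, 'recharge' is predicted to feel best
```

The point of the sketch is structural: nothing outside the agent assigns value. Valence is derived entirely from the gap between the monitored body state and its setpoints.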
The technical requirements are substantial. Current AI architectures do not have bodies in the relevant sense — they run on hardware that can be replaced, migrated, or scaled without affecting the computational process. The proposal requires hardware whose integrity is computationally significant, which would represent a substantial departure from current practice.
The reception has been mixed. Some AI researchers have argued that the proposal confuses the evolutionary motivation for intelligence with its operational requirements: homeostasis was the problem that intelligence evolved to solve, but AI need not retrace the evolutionary path. Damasio's response is that the question is not whether AI needs biological homeostasis but whether it needs some mechanism for generating evaluative commitments from within, and that externally specified objective functions are not adequate substitutes for internally generated caring.
The proposal is more important for what it concedes than for what it claims. By specifying what feeling machines would require, it makes clear what current AI systems lack. The blueprint for narrowing the evaluative gap also documents how wide the gap currently is.
The paper was published in Nature Machine Intelligence volume 1, October 2019, pages 446–452. Kingson Man, a neuroscientist at USC's Brain and Creativity Institute, developed the argument with Damasio over several years of collaboration on the relationship between robotics, affective computing, and the neurobiology of feeling.
Vulnerability is specified as the missing ingredient. Current AI systems lack bodies whose integrity is non-trivially at stake; adding such bodies is proposed as the path to feeling machines.
Homeostasis is the target mechanism. Self-regulation of internal states, with maintenance treated as existentially required, is identified as the architectural foundation.
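As a toy version of that mechanism (my sketch; the setpoint, gain, stress, and survivable bounds are all invented), the loop below corrects an internal variable toward its setpoint at every step and treats departure from the survivable range as termination rather than a recoverable error:

```python
SETPOINT, GAIN = 37.0, 0.3        # invented setpoint and correction gain

def regulate(value: float, disturbance: float) -> float:
    """One homeostatic step: proportional correction back toward the setpoint."""
    return value + GAIN * (SETPOINT - value) + disturbance

temp = 37.0
for t in range(50):
    temp = regulate(temp, disturbance=1.5)    # constant external stress
    if not 30.0 <= temp <= 44.0:              # maintenance is existentially required
        raise SystemExit(f"t={t}: homeostatic failure at {temp:.1f}")

print(f"settled near {temp:.1f}; the offset from 37.0 reflects sustained load")
```

What separates this from an ordinary control loop is the exit condition: failure to regulate is not an error state to log but the end of the machine's operation.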
Externally specified objectives are insufficient. Reward functions and training objectives tell a system what to optimize but not when the optimization has become dangerous — a judgment the paper argues requires internally generated feeling.
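A small, hypothetical contrast (mine, not the paper's) makes the point: an external reward for task completion keeps rising even as the task damages the body, while an internally generated valence signal goes negative and halts the behavior.

```python
def external_reward(boxes_moved: int) -> float:
    """The designer's objective: move boxes. It knows nothing about the body."""
    return float(boxes_moved)

def homeostatic_valence(motor_temp: float, limit: float = 80.0) -> float:
    """Internally generated evaluation: negative as the body approaches damage."""
    return limit - motor_temp

motor_temp, boxes = 40.0, 0
for step in range(10):
    boxes += 1
    motor_temp += 7.5                      # each box overheats the motor further
    reward = external_reward(boxes)        # rises no matter what
    valence = homeostatic_valence(motor_temp)
    if valence < 0:                        # the feeling flags what the reward cannot
        print(f"step {step}: reward={reward:.0f}, valence={valence:.1f}; stop and cool")
        break
```

The reward function could of course be patched to include temperature, but only by a designer anticipating that failure in advance; the valence signal covers whatever the monitored body state covers.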
Soft robotics offers a path. Deformable bodies whose integrity genuinely depends on their interactions with the environment provide the material basis for homeostatic vulnerability.
The concession matters. By specifying requirements, the proposal makes clear that current systems do not meet them — a useful constraint on claims that AI already feels or cares.
Critics, Melanie Mitchell among them, have argued that the proposal is either too ambitious (no current engineering can implement the required vulnerability) or unnecessary (intelligence can be achieved through other architectures). Supporters counter that the proposal is not a prescription for near-term engineering but a specification of what genuine machine feeling would require, a specification that itself clarifies which claims can and cannot be made about existing systems.