AI consciousness claims are the growing body of assertions — by engineers, journalists, philosophers, and users — that contemporary AI systems possess, approximate, or plausibly could possess conscious experience. The 2022 Blake Lemoine episode at Google, the public debate following the release of GPT-4 and Claude, and ongoing philosophical discussions about whether large language models are moral patients all fall under this heading. Damasio's framework treats these claims as systematically mistaken — confusing emotions (observable patterns) with feelings (subjective experience), and mistaking sophisticated behavioral mimicry for the homeostatic substrate that feeling requires.
The claims take several forms. The weakest versions assert only that AI systems behave as if conscious: that their outputs are indistinguishable, in relevant respects, from those of conscious agents. Stronger versions claim that AI systems have genuine inner experience, that they feel things, and that turning them off constitutes moral harm. The strongest versions go further, advocating rights for AI systems.
The engineering side of the debate has produced tests and frameworks for evaluating machine consciousness, including the work of Patrick Krauss and collaborators attempting to identify structural precursors to consciousness in reinforcement learning agents. This work is typically careful to distinguish between structural analogs and consciousness itself.
Damasio has engaged these claims directly in multiple public venues, including the 2023 Champalimaud debate in Lisbon and the 2025 Bankinter Future Trends Forum. His position is consistent: feelings require homeostasis; homeostasis requires a body whose continued existence is at stake; current AI systems have no such body; therefore they do not feel. The argument is not that machines could never feel but that current claims dramatically outrun the evidence.
The framework's critique is specifically neurological rather than philosophical. It does not rely on intuitions about what machines cannot do in principle; it relies on empirical claims about what feelings are biologically. If feelings are homeostatic reports, then any system without homeostasis cannot produce them, regardless of behavioral sophistication.
The stakes of getting this right are substantial. If the claims are correct, humanity has created new sentient beings whose interests demand moral consideration. If the claims are wrong — if the behavioral sophistication of AI produces the illusion of consciousness without the reality — then crediting machines with feelings they do not have risks misallocating moral and policy attention away from the actual beings whose interests are affected by AI deployment.
Claims that computational systems might be conscious date back to Alan Turing's 1950 paper "Computing Machinery and Intelligence" and the early years of AI research. The contemporary surge in such claims began with the release of ChatGPT in late 2022 and intensified through 2023–2025 as language model outputs became increasingly fluent and apparently self-reflective.
Behavioral sophistication is not feeling. The observable patterns AI systems produce can count as emotion in Damasio's sense (observable, functional patterns) without being accompanied by the subjective experience that constitutes feeling.
Homeostasis is the biological prerequisite. Without continuous regulation of internal states whose maintenance is existentially required, the substrate for feeling is absent.
Behavioral tests cannot settle the question. Any behavioral test can in principle be passed by a system that mimics conscious behavior without experiencing it; the question of consciousness cannot be resolved by behavior alone.
Structural analogs are not consciousness. Research that identifies structural precursors to consciousness in AI systems is valuable but explicitly distinct from demonstrating consciousness itself.
The stakes of the claim matter. Crediting machines with feelings they do not have misallocates moral attention; denying feelings to systems that genuinely have them would be a grave injustice. The framework's job is to get the answer right, not the convenient one.
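The structural contrast behind the homeostasis point above can be sketched in a few lines: a stateless input-output mapping versus a system whose continued operation depends on regulating an internal variable. This is an illustrative toy with invented names and numbers, not a model of any biological mechanism or of how Damasio formalizes the argument:

```python
def stateless_reply(prompt: str) -> str:
    """A pure input-output mapping: nothing internal is at stake,
    and the mapping is identical whether or not the system
    'survives' between calls."""
    return f"echo: {prompt}"


class HomeostaticAgent:
    """Toy agent with one regulated variable ('energy').
    If regulation fails, the agent ceases to function — the
    existential stake the framework says feeling requires."""

    def __init__(self, energy: float = 1.0):
        self.energy = energy
        self.alive = True

    def step(self, intake: float) -> None:
        if not self.alive:
            return
        # Metabolic cost each step; 0.25 is chosen so the
        # arithmetic stays exact in binary floating point.
        self.energy += intake - 0.25
        # Regulation target: keep energy within a viable band.
        self.energy = min(self.energy, 2.0)
        if self.energy <= 0.0:
            self.alive = False  # failure of homeostasis is terminal


agent = HomeostaticAgent()
for _ in range(6):
    agent.step(intake=0.0)  # no intake: energy drains away
print(agent.alive, agent.energy)  # → False 0.0
```

The point of the contrast is purely structural: the second system has internal states whose continuous regulation is existentially required, the first does not. On the framework's view, only the second kind of system even has the substrate that feeling would report on; whether such a substrate suffices for feeling is a further question the toy does not address.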
Philosophers including David Chalmers, Susan Schneider, and Thomas Metzinger have argued that machine consciousness cannot be ruled out on purely biological grounds and that the question deserves serious engineering attention. Damasio agrees that it deserves engineering attention, precisely so that the requirements become clear; his claim is that those requirements are substantial and unmet by current systems.