The framework is the operational version of Dennett's attack on the hard problem. David Chalmers had distinguished 'easy' problems of consciousness (explaining specific cognitive functions) from the 'hard' problem (explaining why there is something it is like to be the system at all). Dennett's counter-move: once you explain all the easy problems, there is nothing left to explain. The apparent hard problem is a residue of the Cartesian theater, of the user illusion's self-presentation, not a further fact about reality.
For AI, the framework is liberating. Instead of asking whether the system has the mysterious property of consciousness, it asks which specific tricks the system has — does it have self-modeling? attention? metacognition? narrative continuity? — and studies each empirically. The cumulative answer may be 'some but not others,' which is exactly what the framework predicts, and which resolves the question 'is it conscious?' by dissolving the unit of analysis into its operative parts.
Large language models exhibit a striking profile: extremely sophisticated narrative and inferential tricks, minimal persistent self-modeling, no continuous attention, no embodied grounding. The bag has some of the tricks and not others. Whether the subset that is present is enough for something to be like being the system is, on Dennett's view, a question whose very statement assumes a skyhook he does not accept.
The framework also explains why the AI-consciousness discourse is so intractable. It is treating a bag of tricks as if it were a unified substance, then asking whether the substance is present or absent. Once the bag is unpacked, the question changes — and the new questions are tractable, which is why Dennett insisted on unpacking it.
The image developed across Dennett's career but received its sharpest statement in Consciousness Explained (1991) and in his late essays and interviews on AI consciousness. The phrase 'bag of tricks' appeared repeatedly in his lectures and debates with Chalmers across the 1990s and 2000s.
By the 2020s, as AI systems began exhibiting various subsets of the tricks in the bag, Dennett argued that the framework was doing exactly what it was built to do: allowing empirical progress where metaphysical framing had produced stalemate.
No unified consciousness. What we call consciousness is a collection of distinct mechanisms that the brain integrates into an apparently unified experience via the user illusion.
Dissolve, don't solve. The hard problem is not solved by the framework; it is dissolved, by showing that its target — a unitary phenomenal property — does not exist as described.
Empirical tractability. Each trick in the bag can be studied with ordinary methods; there is no special science of consciousness required beyond the sciences of the tricks.
AI has some tricks. Current systems exhibit subsets of the bag — narrative, inference, certain forms of attention — and lack others, which is the honest starting point for thinking about what they are.