Consciousness, in its contested philosophical sense, is the property of having subjective experience: being a system for which there is "something it is like" to be that system (Nagel, 1974). The concept spans neuroscience (the neural correlates of consciousness), philosophy (the hard problem), and AI (could a machine be conscious?).
Consciousness is the question that many other AI questions eventually reduce to: what would it take to say a large language model is conscious? Do we have a test? If we don't, and systems keep becoming more capable, do we owe them moral consideration? These questions are live now in ways they were not ten years ago.
The 2020s have made consciousness one of the more urgent practical questions in AI ethics, after decades in which it was treated as a purely speculative topic. Researchers at frontier labs have released internal evaluations of whether their systems may be conscious; academic philosophers have taken up the question in peer-reviewed journals; and the usual institutions of professional caution (grant agencies, ethics boards) have begun to fold consciousness considerations into their frameworks. None of this resolves the question. It does mean the question can no longer be safely postponed.
The scientific study of consciousness goes back to William James's Principles of Psychology (1890); its revival in the 1990s was led by Francis Crick and Christof Koch (neural correlates), David Chalmers (the hard problem), Daniel Dennett (functionalist critique), and Giulio Tononi (integrated information theory). The modern period was institutionalized through the Association for the Scientific Study of Consciousness (ASSC, founded 1994) and the Journal of Consciousness Studies (launched 1994).
In philosophy, the contested concept traces back to Descartes (17th century) and John Locke, whose Essay Concerning Human Understanding (1690) gave the English term its influential modern definition. The contemporary debate about machine consciousness adds a fourth major node to the dialectic: alongside neuroscience, philosophy, and phenomenology, there is now engineering.
Access vs. phenomenal. Access consciousness is information's availability for reasoning and verbal report; phenomenal consciousness is the subjective feel. The distinction, due to Ned Block, matters here because an AI system might have the first without the second.
Neural correlates. Neuroscience has made progress on which brain states accompany reports of consciousness; this does not, by itself, explain why any state is accompanied by experience.
Integrated information theory (IIT). Giulio Tononi's mathematical theory: consciousness corresponds to a system's phi, a measure of how much information the system generates as an integrated whole, over and above what its parts generate separately.
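As a rough illustration only, and not IIT's actual phi (which is defined over cause-effect structures and a minimum-information partition), one crude proxy for "integration" is the multi-information of a joint distribution over a system's units: how far the whole departs, informationally, from the product of its parts. The sketch below assumes that simplified proxy and a toy two-unit distribution.

```python
# A toy sketch, not IIT's phi: "multi-information" (total correlation) of a
# joint distribution over binary units, i.e. how much the whole differs
# informationally from the product of its parts. Real IIT is far more
# involved; this only illustrates "integration as irreducibility to parts".
import math

def entropy(probs):
    """Shannon entropy, in bits, of a dict mapping states to probabilities."""
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

def multi_information(joint):
    """Sum of marginal entropies minus the joint entropy.

    `joint` maps tuples of unit states, e.g. (0, 1), to probabilities.
    Zero means the units are statistically independent (no integration in
    this crude sense); larger values mean the whole carries structure that
    the parts, taken separately, do not.
    """
    n = len(next(iter(joint)))
    marginals = []
    for i in range(n):
        m = {}
        for state, p in joint.items():
            m[state[i]] = m.get(state[i], 0.0) + p
        marginals.append(m)
    return sum(entropy(m) for m in marginals) - entropy(joint)

# Two perfectly correlated binary units: each marginal has 1 bit of entropy,
# the joint also has 1 bit, so this proxy reports 1 bit of "integration".
joint = {(0, 0): 0.5, (1, 1): 0.5}
print(multi_information(joint))  # -> 1.0
```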
Global workspace theory. Bernard Baars's model: consciousness is what happens when information wins access to a global workspace and is broadcast to the many otherwise specialized cognitive processes that can read from it.
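To convey the functional flavor only, here is a minimal sketch under assumed toy names (not Baars's model or any published implementation): specialist processes bid for workspace access, and the winning content is broadcast to all of them.

```python
# A toy functional sketch of the global-workspace idea: processes submit
# salience-weighted bids, the winner's content enters the workspace, and
# that content is then broadcast so every process can read it.
from dataclasses import dataclass, field

@dataclass
class Process:
    name: str
    inbox: list = field(default_factory=list)

    def receive(self, content):
        # Every process gets the broadcast, mimicking global availability.
        self.inbox.append(content)

def broadcast_winner(bids, processes):
    """Pick the highest-salience bid and broadcast its content to all processes."""
    content, _salience = max(bids, key=lambda b: b[1])
    for p in processes:
        p.receive(content)
    return content

processes = [Process("vision"), Process("language"), Process("planning")]
bids = [("red light ahead", 0.9), ("background hum", 0.2)]
print(broadcast_winner(bids, processes))  # -> "red light ahead"
print(processes[1].inbox)                 # -> ["red light ahead"]
```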
The moral status implication. If consciousness admits degrees (as IIT holds), then moral status may admit degrees. Contemporary work on animal welfare already uses graded frameworks; extending them to AI systems is an active area of applied ethics.
Behaviorist and deflationary positions (Dennett, some physicalists) argue that the concept is confused and will eventually be replaced by better ones. Realist positions (Chalmers, Block, Nagel, Tononi) argue that consciousness is a real feature of the world requiring explanation. The AI era has raised the stakes of this debate because we now build systems whose consciousness status is both uncertain and morally consequential.