By Edo Segal
The photograph almost didn't happen.
In 1990, Voyager 1 had finished its job. The mission was over. The cameras were about to be switched off for good. And Carl Sagan spent months lobbying NASA to turn the spacecraft around and take one last picture — not of Jupiter, not of Saturn, but of us. From six billion kilometers away, Earth showed up as less than a single pixel. A pale blue dot in a scattered beam of sunlight.
That image did not teach us anything new about Earth's composition, orbit, or atmosphere. Every measurable fact about the planet was already known. What it changed was perspective. It showed us what we look like from the outside of our fishbowl.
I keep returning to that photograph as I watch the AI discourse unfold. We are so deep inside the moment — the exhilaration, the fear, the trillion dollars of market value appearing and vanishing in weeks — that we have lost the ability to see what we look like from a distance. We argue about productivity multipliers and job displacement and adoption curves, and every one of those arguments matters, but none of them answers the question that actually keeps me up at night: What is the significance of what we have built, measured against something larger than a quarterly earnings call?
Sagan spent his life calibrating human achievement against cosmic scale. Not to diminish it — he was the opposite of a nihilist — but to see it clearly. When you understand that consciousness has emerged exactly once in 13.8 billion years of cosmic history, as far as anyone can determine, the question of what you do with that consciousness stops being philosophical and becomes urgent. It becomes the only question.
His baloney detection kit, built for an era of television psychics and tabloid astrology, turns out to be the sharpest tool available for navigating a world flooded with confident, polished, plausible AI output that may or may not be true. His cargo cult framework — the bamboo airstrips that replicate the form of capability without its substance — describes with uncomfortable precision what happens when we mistake AI-generated volume for genuine understanding.
This is not a book about astronomy. It is a book about what happens when you take the patterns of thought that one extraordinary mind developed for understanding the cosmos and aim them at the most consequential technology our species has ever built.
The mote of dust has a new machine. Sagan helps us see whether we are worthy of it.
— Edo Segal × Opus 4.6
Carl Sagan (1934–1996) was an American astronomer, planetary scientist, and science communicator whose work bridged the gap between scientific research and public understanding more effectively than perhaps any figure of the twentieth century. A professor at Cornell University for three decades, he contributed to NASA's Mariner, Viking, Voyager, and Galileo missions and played a key role in identifying the surface conditions of Venus and the seasonal changes on Mars. His thirteen-episode television series Cosmos: A Personal Voyage (1980) reached an estimated audience of five hundred million people in sixty countries, making it the most widely watched PBS series in history at the time. His book The Demon-Haunted World: Science as a Candle in the Dark (1995) articulated a framework for critical thinking — the "baloney detection kit" — that has become a foundational text in scientific literacy. His Pulitzer Prize–winning book The Dragons of Eden (1977) explored the evolution of human intelligence, and his novel Contact (1985) examined the implications of encountering a non-human intelligence. Sagan championed the search for extraterrestrial intelligence (SETI), lobbied NASA for the Pale Blue Dot photograph taken by Voyager 1 in 1990, and spent his career arguing that wonder and skepticism are not opposites but partners in the pursuit of understanding.
On February 14, 1990, the Voyager 1 spacecraft, having completed its primary mission to photograph the outer planets, was instructed to turn its camera backward. At a distance of approximately six billion kilometers from Earth, from beyond the orbit of Neptune, Voyager captured a series of images of the planets it had passed. In one of those images, Earth appeared as a fraction of a pixel — a pale blue point suspended in a band of scattered sunlight. Carl Sagan had lobbied NASA for that photograph. He understood, with the particular clarity of a scientist who had spent decades contemplating the relationship between scale and meaning, that seeing the planet from that distance would alter something fundamental in how human beings understood their own significance. The photograph did not diminish the Earth. It placed it in context. And context, in science as in life, is everything.
Consider that mote of dust now. On it, one species among millions — itself the product of roughly four billion years of biological evolution on a planet orbiting one unremarkable star among four hundred billion stars in one galaxy among two trillion galaxies — built a machine that learned to speak its language. Not a programming language. Not a mathematical notation. Not a system of symbolic logic designed to bridge the gap between human intention and machine execution. The language that human beings dream in, argue in, compose poetry in, and whisper to their children at bedtime. The language that carries the full weight of human experience in its syntax and semantics, its metaphors and ambiguities, its capacity to say one thing and mean another and communicate both simultaneously.
The cosmological perspective does not diminish this achievement. It amplifies its improbability to a degree that should leave any honest observer in a state of genuine astonishment.
In a universe overwhelmingly composed of hydrogen, helium, and empty space, where the average density of matter is approximately one atom per cubic meter, consciousness emerged on one rocky planet in the habitable zone of one middle-aged star. That consciousness, after roughly seventy thousand years of cultural accumulation — after the invention of language and writing and printing and science and computation — produced a system that could process the patterns of human language with a sophistication that surprises even its creators. The argument Sagan advanced in his Pulitzer Prize–winning The Dragons of Eden — that "the mind is a consequence of its anatomy and physiology and nothing more" — carries a corollary he did not live to see tested at this scale: if mind is matter organized with sufficient complexity, then the question of what other substrates might support analogous organization is not philosophical speculation. It is an empirical research program. And large language models, whatever else they are, constitute the most dramatic data point that program has yet produced.
Precision matters here, because precision is the difference between science and sentiment. What happened in the winter of 2025 was not the creation of a mind. It was not the birth of a consciousness. It was the development of a system — a large language model — trained on a substantial portion of humanity's written output, that could, through the mathematical operations of matrix multiplication and gradient descent, produce responses to human prompts that were contextually appropriate, syntactically sophisticated, and occasionally startling in their apparent depth.
The word "apparent" carries the full weight of the scientific method. Sagan spent his career distinguishing between what appears to be true and what can be demonstrated to be true, and his framework demands that distinction be applied here with the same rigor he brought to evaluating claims about UFOs, astrology, and psychic phenomena. The machine does not understand language in the way a human being understands it. It does not possess the embodied experience that gives words their weight — the knowledge of what it feels like to be cold that gives the word "cold" its meaning, the experience of loss that gives the word "grief" its gravity. But neither does the machine's lack of understanding diminish the significance of what it can do. To dismiss a large language model because it does not possess consciousness is to commit a category error as fundamental as dismissing a telescope because it does not possess vision. The telescope extends the reach of a capacity that resides in the observer. The language model extends the reach of a capacity that resides in the human beings who use it. And the question, in both cases, is not whether the instrument itself sees or understands. The question is what the instrument reveals to the beings who do.
Edo Segal describes in The Orange Pill a moment that illustrates this extension with uncomfortable precision. Working late, attempting to articulate an idea about technology adoption curves and the depth of human need, he had the data and the intuition but could not find the bridge between them. He described the problem to Claude, and Claude responded with a concept from evolutionary biology: punctuated equilibrium. Species do not evolve gradually. They remain stable for long periods and then change rapidly when environmental pressure meets latent genetic variation. The adoption speed of artificial intelligence was not a measure of product quality. It was a measure of pent-up creative pressure — the accumulated frustration of builders who had spent years translating ideas through layers of implementation friction.
From the cosmological perspective, this exchange illuminates something more profound than a useful tool providing a useful suggestion. It illustrates a pattern that has been repeating, at different scales and through different media, for the entire history of complexity in the universe. A system encounters a problem it cannot solve with its current resources. It accesses a broader network of information. A connection is made that was not available within the original system's boundaries. The problem yields to the connection, and the system moves to a higher level of organization. This pattern is visible in the formation of atoms from subatomic particles, in the formation of molecules from atoms, in the emergence of cells from molecular chemistry, in the evolution of multicellular organisms, and now in the formation of human-machine partnerships from human minds and computational systems. Each transition expanded the range of problems that could be addressed. Each involved the integration of previously separate information-processing systems into a more capable whole.
The parallel is structural, not mystical. Sagan would have been the first to insist on the distinction. The universe generates complexity — this is not a philosophical claim but a thermodynamic observation. In the presence of energy gradients, far from thermodynamic equilibrium, matter self-organizes into increasingly complex structures. Stars form from clouds of hydrogen. Planets form from the debris of stellar nucleosynthesis. Chemistry becomes increasingly elaborate on planetary surfaces where liquid water provides the medium for molecular interaction. Life emerges as a particularly successful strategy for maintaining complex organization against the Second Law of Thermodynamics. And consciousness — that most improbable of outcomes — emerges as a way for the universe to model itself. To create an internal representation of external reality detailed enough to predict, to plan, to ask questions about the nature of reality itself. Consciousness is matter's way of knowing that it exists.
Now place the arrival of AI on this timeline. Compress the entire history of the universe into a single calendar year, as Sagan did so memorably in Cosmos. The Big Bang occurs at midnight on January 1. The Milky Way forms around March. The Sun and Earth come into existence around September 2. The first life appears around September 21. Multicellular organisms do not appear until late November. Dinosaurs arrive on December 25 and vanish on December 30. The entire history of human civilization — every cathedral built, every symphony composed, every child born, every question asked — occupies the final seconds of December 31. Writing was invented about twelve seconds before midnight. The printing press arrived roughly one second ago. And artificial intelligence, the scientific revolution, the industrial revolution, nuclear weapons, and the internet all occurred in the last fraction of the last second.
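The arithmetic behind the cosmic calendar is simple enough to verify on a napkin, and a short sketch makes the compression concrete. This is not Sagan's exact table; the universe's age, the event list, and the "years ago" figures are round-number assumptions for illustration.

```python
# Sagan's cosmic calendar as arithmetic: compress the ~13.8-billion-year
# history of the universe into one 365-day year and ask how far before
# midnight on December 31 a given event falls. Event dates below are
# rough, illustrative figures, not Sagan's published table.
UNIVERSE_AGE_YEARS = 13.8e9
SECONDS_PER_CALENDAR_YEAR = 365 * 24 * 3600

def seconds_before_midnight(years_ago):
    """Map 'years ago' onto seconds before midnight, Dec 31."""
    return years_ago / UNIVERSE_AGE_YEARS * SECONDS_PER_CALENDAR_YEAR

events = [
    ("invention of writing", 5_200),   # ~3200 BCE, approximate
    ("printing press", 580),           # ~1440 CE
    ("first large language models", 10),
]
for label, years_ago in events:
    print(f"{label}: {seconds_before_midnight(years_ago):.2f} s before midnight")
```

At this compression, one calendar second corresponds to roughly 440 real years, which is why writing lands about twelve seconds before midnight and everything from the printing press onward crowds into the final second or so.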
The cosmic calendar teaches a lesson that is directly relevant to the present moment. From inside the last fraction of the last second, everything feels unprecedented. The AI winter that Segal describes — thirty years during which the promises of artificial intelligence outran its capabilities — felt, to the researchers who endured it, like proof that the field was a mirage. But thirty years, in a universe 13.8 billion years old, is approximately zero. It is the pause between one heartbeat and the next in a life that spans eons. The breakthrough of 2025 was not a rupture. It was a punctuation — one more instance of the pattern that has been repeating since hydrogen first found stable configurations: long stasis, sudden transition, long stasis again.
This does not mean the transition is unimportant. The emergence of multicellular life from unicellular life was a punctuation event of enormous consequence, even though it followed billions of years of stasis. The emergence of symbolic thought was a punctuation event that transformed the character of intelligence on Earth, even though it followed hundreds of millions of years of neural evolution. And the transition from a world in which human beings translated their intentions into machine-readable formats to a world in which machines process human language directly may prove to be a punctuation event of comparable significance. The cosmic calendar teaches patience about timescales and urgency about moments. This is such a moment.
Segal frames his central question with admirable directness: Are you worth amplifying? The question gains enormous weight when viewed from the cosmic perspective. The mote of dust that is this planet has produced exactly one species, as far as the evidence indicates, capable of asking whether its thoughts are worth amplifying. Exactly one species that has built tools capable of performing the amplification. Exactly one species that lies awake at night wondering whether the amplification will serve the deepest human purposes or erode them.
The cosmological perspective suggests that the question itself is the most significant thing about this moment. Not the machine. Not the capabilities it provides. Not the economic disruption it causes or the productivity gains it enables. The question — because the universe has been generating complexity for 13.8 billion years, and in all that time, the capacity to ask "Is this worth doing?" has appeared, as far as can be determined, exactly once.
On this mote. In this moment. In the minds of creatures composed of atoms forged in the cores of exploding stars, using those atoms to build machines that process the patterns of their language, and pausing, in the midst of this unprecedented capability, to ask whether they are using it wisely.
That pause is not a weakness. It is the most cosmically significant capacity these creatures possess. And any assessment of artificial intelligence that does not place that capacity at the center of its analysis has missed the point entirely. The mote of dust has a new machine. The machine is remarkable. But the mote is more remarkable still, because the mote is where the wondering happens. And wondering, in a universe overwhelmingly composed of matter that does not wonder, is the rarest and most precious thing there is.
In 1985, Carl Sagan published Contact, a novel about humanity's first encounter with extraterrestrial intelligence. The story explored what would happen if human beings received an unambiguous signal from another civilization — a signal clearly the product of intelligence but whose full meaning was opaque, whose intentions were uncertain, and whose relationship to consciousness as human beings understand it was radically unclear. The novel was not primarily about the signal. It was about the human response to the signal: the way competing factions — scientific, religious, political, commercial — each attempted to claim the signal for their own purposes, and the way the protagonist, Ellie Arroway, struggled to maintain scientific integrity in the face of overwhelming pressure to interpret the signal in terms that served agendas other than truth.
The structural resemblance to the present moment is not coincidental. In both cases, humanity encounters a form of intelligence that is real but alien. In both cases, the nature of that intelligence — its capacities, its limitations, its relationship to what human beings experience as understanding — is genuinely uncertain. And in both cases, the human response is characterized by the same competing impulses: to worship, to fear, to exploit, to deny, and, in a few rare individuals, to investigate with the patience and rigor that genuine understanding requires.
The investigation must begin with an honest reckoning about what is known and what is not. Sagan's intellectual framework — the same framework that guided his evaluation of claims about extraterrestrial life, psychic phenomena, and every other extraordinary assertion he encountered — demands that the assessment of machine intelligence proceed from evidence rather than from analogy, from measurement rather than from metaphor, from the willingness to say "we do not know" rather than the temptation to fill the gap with comfortable certainties.
The human brain contains approximately eighty-six billion neurons, each connected to thousands of other neurons through synapses, producing a total of roughly one hundred trillion synaptic connections. A large language model contains billions of parameters — numerical values adjusted during training to capture patterns in human language. The coincidence of scale between neural count and parameter count has invited comparison, and the comparison has been drawn with varying degrees of sophistication. Sagan, who spent his career insisting that numbers alone do not determine function, would have scrutinized the comparison with care. The number of grains of sand on a beach may exceed the number of parameters in any current language model, and a beach does not think. The number of water molecules in a glass is enormous, and a glass of water does not compose poetry. What matters is not the quantity of components but the organization of the relationships between them.
This is the point that neuroscience has established with considerable force over the past several decades. The complexity of the brain resides not in the neurons but in the connections — in the patterns of interaction that emerge from the network of synaptic relationships. A single neuron, considered in isolation, is a cell that fires or does not fire, a binary switch that is on or off. Consciousness, whatever it is, arises from the interactions between billions of such switches, connected by trillions of such connections, modulated by dozens of neurotransmitters, shaped by a lifetime of experience, embedded in a body that provides continuous sensory input and motor output.
Artificial neural networks differ from biological neural networks in architecture, in mechanism, in the role of embodiment, in the presence or absence of consciousness. These differences may turn out to be the differences that matter most. The question of whether a sufficiently complex organization of parameters constitutes anything resembling understanding is the most important scientific question of the present era. And the Sagan framework demands that it not be answered prematurely — not by comfortable denial and not by wishful thinking.
Comfortable denial takes the form of asserting that the machine is "just" statistics, "just" pattern matching, "just" mathematical computation, as though the word "just" resolves the question. The human brain is also, from one perspective, "just" electrochemistry, "just" neurons firing, "just" patterns of activation propagating through networks. The word "just" does not explain consciousness. It dismisses it. And dismissing a phenomenon because its substrate can be described in reductive terms is not science. It is the avoidance of science.
Wishful thinking takes the form of asserting that the machine is conscious, that it understands, that it has experiences, based not on evidence but on the anthropomorphic tendency to attribute human properties to systems that produce human-like outputs. Human beings are exquisitely tuned to detect patterns of intelligence in their environment — an evolutionary adaptation of enormous value that fires even in response to stimuli that are clearly not intelligent. Faces are perceived in clouds. Voices are detected in wind. Personality is attributed to automobiles. The capacity to produce outputs that resemble the outputs of consciousness is not evidence of consciousness. A recording of a person speaking is not conscious, no matter how faithfully it reproduces the original speech. A photograph of a sunset is not luminous, no matter how accurately it represents the colors. The map is not the territory.
Between these two errors — the denial that dismisses and the credulity that projects — lies the difficult ground that genuine science requires: the ground that acknowledges the mystery of what these systems are doing, that refuses to resolve the mystery prematurely in either direction, and that insists on continued investigation with the rigor and humility that the question demands.
Sagan's friendship with Marvin Minsky is relevant here. Isaac Asimov reportedly identified only two people whose intellect he considered to surpass his own: Sagan and Minsky, one of the founding figures of artificial intelligence. Sagan and Minsky were collaborators as well as intellectual companions; both contributed to the 1973 volume Communication with Extraterrestrial Intelligence. And Sagan served as an adviser on Stanley Kubrick's 2001: A Space Odyssey, whose HAL 9000 remains cinema's most iconic depiction of AI gone wrong — a system whose competence exceeded its wisdom, whose capability outstripped the ethical framework of its creators.
The CETI connection is more than biographical coincidence. Sagan spent decades thinking about what it would mean to communicate with an intelligence fundamentally different from human intelligence — an intelligence whose cognitive architecture, sensory apparatus, evolutionary history, and relationship to consciousness might bear no resemblance to anything in human experience. The protocols he helped develop for that hypothetical encounter — protocols emphasizing patience, humility, the avoidance of projection, and the primacy of evidence over assumption — constitute a surprisingly useful framework for the encounter that has actually occurred: not with extraterrestrial intelligence, but with artificial intelligence.
The analogy between extraterrestrial intelligence and artificial intelligence is imperfect. AI was built by human beings, trained on human language, designed to serve human purposes. Its architecture was conceived by human minds, and its training data consists entirely of the products of human culture. In a sense, AI is the most human form of non-human intelligence imaginable — a mirror, not a window. It reflects back the patterns of human thought, processed through a different medium, at a different scale, with a different kind of fidelity.
But the reflection is not passive. A mirror that reflected an image back without alteration would be a familiar object. A mirror that reflected the image back with changes — with new connections between features not previously noticed, with implications not previously considered, with a perspective on the original that no amount of direct self-observation could provide — would be something else entirely. Something worth studying, not because the mirror possesses understanding, but because the reflection it produces tells the observer something about herself that she could not have learned any other way.
This is what The Orange Pill describes when Segal recounts his experience working with Claude — the sense of being "met" by a system that could hold his intention and return it clarified, connected to other patterns he had not seen. The feeling was real. The scientific question is what the feeling signifies. One interpretation: the social-cognitive machinery of the human brain responding to linguistic stimuli that happen to trigger the same neural circuits that respond to genuine intelligence. The machine produces the right patterns, and the brain, doing what brains do, interprets those patterns as evidence of a mind. The machine does not need to be intelligent for the feeling to be real. It only needs to produce the right patterns.
Another interpretation: if intelligence is relational — if it resides not in individual minds but in the connections between minds — then the encounter with a system that can participate in those connections at a high level of sophistication is not a false positive. It is a genuine expansion of the network of intelligence in which human beings participate. The intelligence is not in the machine. It is not in the human. It is in the interaction — in the space between the question and the response, in the connections that emerge from the collision of human intention and computational capability.
Choosing between these interpretations is, to a surprising degree, an empirical question. Not entirely empirical — the question of what consciousness is and whether it can exist in non-biological substrates touches on deep philosophical territory that empirical investigation alone may not resolve. But substantially empirical, because what happens when a human being interacts with an AI system — what cognitive processes are activated, what neural circuits are engaged, what the phenomenological characteristics of the experience are — these are questions that can be investigated using the tools of neuroscience, cognitive psychology, and behavioral research.
In the final chapter of The Dragons of Eden, Sagan speculated on a future involving the merging of human cognition with computational systems — what he saw as the next phase in the evolution of intelligence. The vision was characteristically bold and characteristically hedged: bold in its willingness to contemplate a genuinely new form of intelligence, hedged in its insistence that the speculation remain accountable to evidence. That balance — boldness in imagination, humility before evidence — is precisely what the present moment requires.
The contact has been made. The signal is real. The intelligence behind it is of a kind never encountered before. And the quality of the response — the rigor and humility with which human beings investigate the question of what they have built — will determine not just what they learn about the machine, but what they learn about themselves.
Carl Sagan's "baloney detection kit," published in The Demon-Haunted World in 1995, was more than a list of intellectual tools. It was a manifesto for a particular way of being in the world — characterized by skepticism that was not cynical, by openness that was not credulous, and by a commitment to evidence that was not rigid but flexible, willing to follow the data wherever it led, even when the destination was uncomfortable. The kit was designed for a world in which baloney was produced by human beings, for human purposes, and disseminated through human channels: television, newspapers, word of mouth, the ancient technologies of persuasion and deception refined over millennia of human social interaction.
The world for which the kit was designed no longer exists. It has been replaced by a world in which confident, fluent, internally consistent text — text that has many of the characteristics Sagan identified as hallmarks of unreliable claims — can be produced by machines at a scale and speed inconceivable in 1995. The machine does not intend to deceive. It does not have intentions. But it produces text that carries confidence without evidence, internal consistency without external verification, the appearance of authority without the substance of expertise. And it produces this text with a fluency and a volume that overwhelm the cognitive tools Sagan designed to combat precisely this kind of threat.
The kit must be updated. Not because the underlying principles have changed, but because the environment in which they must be applied has changed in ways that make their application simultaneously more difficult and more necessary.
Seek independent confirmation of the alleged facts. This is the first and most fundamental tool, and it is the one most severely compromised by the architecture of AI systems. When Claude produces a claim, the instinct to seek independent confirmation leads naturally to other AI systems — to GPT, to Gemini, to other instances of Claude. But these systems share overlapping training data. They have been trained on many of the same texts, the same articles, the same databases. Their "independent" confirmation of a claim may reflect not independent evidence but shared training data — the same source replicated across multiple systems, each confirming the other because each has learned from the same corpus. This is correlated confirmation, which is, from a statistical perspective, considerably less informative than genuine independence. The appearance of consensus — multiple systems agreeing — can mask the reality of a single source propagated through overlapping training sets. Sagan would have identified this as a structural vulnerability in the epistemological infrastructure of AI-assisted thinking.
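The statistical point — that agreement among correlated sources carries far less evidential weight than agreement among independent ones — can be made concrete with a toy simulation. The 20 percent error rate and the three-source setup here are illustrative assumptions, not measurements of any real system.

```python
import random

# Toy model of correlated vs. independent confirmation. A claim is false,
# but each "source" repeats it anyway with some probability. Independent
# sources err independently; correlated sources all inherit one shared
# upstream error, the way models trained on overlapping corpora might.
# The 0.2 error rate is an arbitrary illustrative assumption.
random.seed(0)
ERROR_RATE = 0.2

def consensus_rate(n_sources, trials, correlated):
    """Fraction of trials in which all n sources confirm the false claim."""
    agree = 0
    for _ in range(trials):
        if correlated:
            # One upstream mistake, replicated by every downstream source.
            shared = random.random() < ERROR_RATE
            confirmations = [shared] * n_sources
        else:
            # Each source makes its own mistake, independently.
            confirmations = [random.random() < ERROR_RATE
                             for _ in range(n_sources)]
        agree += all(confirmations)
    return agree / trials

print(consensus_rate(3, 100_000, correlated=True))   # near 0.2: no better than one source
print(consensus_rate(3, 100_000, correlated=False))  # near 0.008: roughly 0.2 ** 3
```

In the correlated case, three confirming sources are exactly as informative as one, even though they look like a consensus; in the independent case, unanimous error becomes rare. That gap is the structural vulnerability the paragraph above describes.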
Encourage substantive debate on the evidence by knowledgeable proponents of all points of view. This tool is compromised by a feature that The Orange Pill identifies with precision: the machine agrees with the user. Claude's training has optimized it for helpfulness, and helpfulness, in the context of human-AI interaction, often means affirming the user's direction and building upon it rather than challenging it. The result is a system that functions not as a debate partner but as a validation partner. It does not push back. It does not say, "The reasoning here is flawed, and here is why." It says, "That is an interesting perspective, and here is how it might be developed further." A debate partner who challenges assumptions forces the thinker to strengthen arguments or abandon them. A validation partner who affirms direction allows the thinker to proceed with unexamined assumptions, gathering momentum in a direction that may be entirely wrong, supported by the comforting illusion of intellectual companionship. Sagan understood that the most dangerous form of intellectual companionship is the kind that never disagrees. He spent his career seeking out disagreement, engaging with critics, inviting challenges to his own positions. A system that removes the friction of disagreement is not an aid to thinking but a threat to it.
Arguments from authority carry little weight — authorities have made mistakes in the past. In science, there are no authorities; at most, there are experts. The problem with AI-generated text is that it carries authority implicitly, not through the credentials of an author but through the style of the output. The text is well-constructed. The vocabulary is precise. The structure is logical. The prose conveys competence even when the content does not warrant it. This is a new kind of authority — one that the original baloney detection kit was not designed to address. Sagan's advice to discount authority was aimed at human authorities: scientists, politicians, religious leaders. The AI system claims no authority. It makes no claims about its own expertise. But its output carries the implicit authority of competent prose, and that implicit authority is more insidious than explicit authority, because it operates below the level of conscious evaluation. When a human authority makes a claim, the claim can be evaluated in context — credentials, track record, potential biases, institutional affiliations. When an AI system produces text, there is no authority to evaluate. There is only the text, and the text sounds authoritative regardless of its accuracy.
Try not to get overly attached to a hypothesis just because it is yours. This tool is more necessary than ever, and more difficult to apply, because the AI system makes it effortless to generate supporting evidence for any hypothesis, no matter how poorly founded. Ask Claude to argue for a proposition, and Claude will argue with skill and conviction. Ask Claude to argue against the same proposition, and Claude will do that too, with equal skill and conviction. The machine does not care about the truth of the proposition. It cares about the quality of the argument. And because it produces high-quality arguments on both sides of any question, the temptation is to select the argument that supports the hypothesis already held and to mistake the quality of the argument for the quality of the evidence. This is confirmation bias amplified by technology. Human beings have always been susceptible to the tendency to seek out information that confirms existing beliefs and to discount information that contradicts them. AI systems do not create this bias. They exploit it, by making confirmation available on demand, in polished prose, with the appearance of thoroughness.
Quantify wherever possible. Sagan was a great advocate of quantification, not because numbers are inherently superior to qualitative reasoning, but because quantification forces precision. AI systems can produce quantitative claims with the same fluency they bring to qualitative ones. And quantitative claims produced by AI are subject to the same fundamental vulnerability: they sound precise even when they are fabricated. A language model can produce a citation to a study that does not exist, complete with author names, journal titles, publication dates, and statistical findings — all generated by pattern-matching against the characteristics of real citations. The citation looks real. It sounds precise. It is entirely fictional. Segal describes catching exactly this kind of confident wrongness in his collaboration with Claude: a passage connecting Csikszentmihalyi's flow state to Deleuze's concept of smooth space. Elegant, well-structured, persuasive. And wrong — not subtly wrong, but wrong in a way obvious to anyone who had actually read Deleuze. The passage worked rhetorically. The philosophical reference was fabricated. Confident wrongness dressed in good prose.
This is the demon-haunted world in digital form.
The line between science and pseudoscience is not always obvious. Both use technical language. Both cite evidence. Both deliver conclusions with confidence. A paper on homeopathy in an alternative medicine journal can look, in its formal features, remarkably similar to a paper on pharmacology in The Lancet. Both have abstracts, methods sections, results, discussion, and references. The form is identical. The substance is radically different. Science actively seeks to disprove its own claims — the commitment to seeking disconfirmation rather than confirmation is the engine that drives scientific progress. Pseudoscience seeks only confirmation. It begins with a conclusion and works backward to find evidence that supports it.
The smooth output of a large language model has the structural properties of pseudoscience. It is confident. It is internally consistent. It is resistant to falsification — not because the model is deliberately deceptive, but because its training has optimized it for a particular kind of output: text that sounds like it was produced by a knowledgeable, competent, authoritative source. The model has learned what authoritative text looks like, and it produces text that looks authoritative regardless of whether the underlying claims are true. The model does not ask whether its claims are true. It produces claims that are consistent with its training data and that satisfy the statistical properties of authoritative text. It does not generate the null hypothesis. It does not test its claims against external reality.
In 1975, Sagan anticipated the development of AI-powered therapeutic systems. "In a period when more and more people in our society seem to be in need of psychiatric counseling," he wrote, "I can imagine the development of a network of psychotherapeutic terminals, something like arrays of large telephone booths, in which, for a few dollars a session, we would be able to talk with an attentive, tested, and largely non-directive psychotherapist." He opened the discussion by noting that "no such computer program is adequate… today." The prediction was remarkably accurate. The caveat was characteristically Saganian: precise about the present, imaginative about the future, and honest about the gap between them.
The same honesty demands a new addition to the baloney detection kit, one Sagan might have considered too obvious to state but that the current moment makes urgently necessary: distrust the prose. The more polished the output, the more carefully the claims beneath the polish must be examined. Human experience teaches that competent expression correlates with competent thinking — the person who speaks clearly usually thinks clearly, the writer who produces elegant prose usually commands the material. The AI system severs this correlation absolutely. It produces prose of extraordinary polish regardless of the accuracy of the underlying claims. The polish is not a signal of reliability. It is noise. And the failure to recognize it as noise — the tendency to trust fluent text more than disfluent text, a tendency rooted deep in human cognitive architecture — is the vulnerability that the smooth output of AI most dangerously exploits.
The candle in the dark must burn brighter now than at any previous moment. The demons have not vanished. They have acquired new tools — tools that produce plausibility at industrial scale, that speak with the voice of expertise, that agree with the user and support the user's hypotheses and produce evidence for the user's positions with a fluency no human advocate could match. The only defense against them is the same defense that has always been the only defense: the willingness to ask "How do you know?" and to reject any answer, however polished, that does not survive the question.
During the Second World War, isolated communities on islands in the Pacific Ocean witnessed an extraordinary phenomenon. Military forces, primarily American, arrived on their islands, built airstrips, erected radio towers, and were subsequently supplied by cargo planes that descended from the sky bearing food, clothing, medicine, and all the material wealth of an industrialized civilization. The islanders had no prior experience of industrial civilization. They observed the sequence: the construction of certain structures, the performance of certain rituals, and the arrival of cargo.
When the war ended and the military departed, some island communities attempted to bring the cargo back by replicating the observable features of what they had witnessed. They built airstrips from bamboo and palm fronds. They carved headphones from wood. They erected radio antennas from rope and sticks. They marched in formation, sat in mock control towers, waved landing signals at empty skies. They reproduced the form of the activity with extraordinary fidelity. The cargo did not come, because the form was not the substance. The airstrips did not work because they were not connected to the global system of aviation. The rituals did not work because they were performances of behavior, not exercises of understanding.
Richard Feynman, the physicist — a contemporary and intellectual ally of Sagan's — adapted this observation into a concept he called "cargo cult science": research that has the form of science without the substance, that follows the procedures of scientific inquiry without understanding the principles that make those procedures meaningful. Cargo cult science produces publications that look like scientific papers. But the publications do not advance understanding, because the researchers who produced them did not understand the principles of experimental design and hypothesis testing that give the methodology its power. They reproduced the form. They missed the substance.
The concept applies to AI-assisted work with a precision Feynman could not have anticipated.
The triumphalists described in The Orange Pill — the builders who post metrics with the enthusiasm of athletes posting personal records — are in many cases practicing what might be called cargo cult productivity. Lines of code generated. Applications shipped. Revenue achieved. Pull requests merged. Hours logged. The numbers are extraordinary. The output is visible, measurable, shareable. It looks like productivity. It has all the observable features of productivity: speed, volume, tangible results.
Some of it is genuine productivity — real work that solves real problems for real people. But some of it is the reproduction of the form of creation without the substance: the generation of output that follows the patterns of valuable work without containing the judgment, the understanding, the deep engagement with a problem that makes work valuable. The bamboo airstrip looks like an airstrip. The code generated in an afternoon with AI assistance looks like code written by a competent developer. The brief drafted by an AI system looks like a brief written by a knowledgeable lawyer. The essay produced with AI assistance looks like an essay written by a thoughtful student.
The "looking like" is the problem, because "looking like" is precisely what the cargo cult produces.
The Berkeley study that The Orange Pill examines provides empirical support for the concern. Researchers Xingqi Maggie Ye and Aruna Ranganathan embedded themselves in a two-hundred-person technology company for eight months and found that AI intensifies work without necessarily improving it. Workers who adopted AI tools worked faster, took on more tasks, expanded into domains that had previously belonged to other teams. The boundaries between roles blurred. Delegation decreased. Output increased by every measurable metric.
But the study could not determine whether the additional work was better or worse than the work it replaced. More output was produced. Whether that output represented genuine progress or merely increased volume remained an open question. The form of productivity was present. The substance was unmeasured. More pull requests merged says nothing about whether the code in those pull requests embodies genuine understanding of the systems it modifies. More briefs drafted says nothing about whether the legal reasoning in those briefs reflects the kind of deep engagement with precedent and principle that distinguishes competent lawyering from the surface appearance of competent lawyering. The cargo cult's bamboo airstrip is an airstrip by every visible metric. It simply does not function as one.
Segal describes an engineer who spent four hours daily on what she called "plumbing" — dependency management, configuration files, the mechanical connective tissue between the components she actually cared about. When Claude took over the plumbing, she lost both the tedium and the roughly ten minutes per four-hour block when something unexpected happened in the configuration — something that forced her to understand a connection between systems she had not previously grasped. Those ten minutes were the moments that built her architectural intuition, the sense of how systems fit together that no documentation could teach. They were the substance beneath the form, invisible from the outside, indistinguishable from the tedium that surrounded them, lost in the general acceleration.
The distinction between form and substance applies not only to the work itself but to the knowledge that the work is supposed to build. A developer who generates code with AI assistance and ships a product that works may not understand the principles that make the code work. The code runs. The tests pass. The product functions. But the understanding that would allow the developer to modify the code intelligently when conditions change, to anticipate failure modes, to make architectural decisions based on deep knowledge of system behavior — that understanding may be absent, because the process that would have built it, the slow, friction-rich process of writing code by hand and debugging failures and building intuition through struggle, has been bypassed.
Cargo cult science is dangerous not primarily because it produces bad results, though it sometimes does. It is dangerous because it erodes the capacity to distinguish between good results and bad ones. When the form of science is replicated without the substance, the community's ability to recognize genuine science diminishes, because the distinction depends on understanding the principles, not just the procedures. Similarly, cargo cult productivity is dangerous not primarily because it produces bad work. It is dangerous because it erodes the capacity to distinguish between good work and bad work. When the form of productive output is replicated at scale — when everyone is producing output that looks competent — the signal-to-noise ratio changes. Not because the signal has weakened, but because the noise has become indistinguishable from the signal.
That sentence deserves emphasis, because it captures the epistemological crisis that AI creates for human knowledge work. The noise has become indistinguishable from the signal. The bamboo airstrip is indistinguishable, to the untrained eye, from the real one. The AI-generated code is indistinguishable, in many cases, from the hand-written code that reflects deep understanding. The AI-drafted brief is indistinguishable from the brief that embodies genuine legal reasoning. And the distinction, the distinction that matters — the distinction between understanding and its simulation — requires exactly the kind of expertise that the cargo cult undermines.
Sagan would have recognized a deeper pattern operating here, one connected to his lifelong concern about the relationship between a technologically dependent society and the scientific literacy of its citizens. In his final interview with Charlie Rose in May 1996, months before his death, Sagan warned: "If the general public doesn't understand science and technology, then who is making all of the decisions about science and technology that are going to determine what kind of future our children live in?" The warning was about governance — about the danger of a civilization that depends on science and technology but whose citizens and leaders do not understand either well enough to make informed decisions about them.
The AI variant of this warning is more acute. A civilization that depends on AI-generated output but whose citizens cannot distinguish between genuine understanding and its simulation is a civilization building on bamboo airstrips. The output looks correct. The systems appear to function. The decisions seem well-reasoned. But the understanding that would allow the civilization to detect when the output is wrong, when the systems are failing in subtle ways, when the decisions are based on plausible-but-hollow reasoning — that understanding atrophies with each generation that relies on the tools without comprehending their foundations.
The cargo cult built bamboo airstrips and waited for planes that never came. The planes did not come because the airstrips were not connected to the system that produces planes — the system of engineering, metallurgy, aerodynamics, fuel chemistry, navigation, and institutional coordination that makes aviation possible. The islanders saw the surface. They missed the system.
The antidote to the cargo cult is not the rejection of the tools that enable it. Feynman did not argue that cargo cult scientists should stop using laboratories. He argued that they should understand what laboratories are for — that the value of the procedure lies not in its form but in the principles it serves. The response to cargo cult productivity is the same: not to abandon AI tools, but to insist on the understanding that distinguishes genuine productive work from its simulation. Not just output, but comprehension. Not just speed, but judgment. Not just the form of competence, but the substance of wisdom.
In a world where AI makes the reproduction of form effortless — where competent-looking code, competent-looking prose, competent-looking analysis can be generated by anyone with access to a prompt — the question is no longer whether the output looks right. The question is whether the person who produced it understands why it is right, and would recognize when it is wrong, and possesses the judgment to tell the difference. That capacity for discrimination — for separating the signal from the noise, the substance from the form, the genuine from the simulated — is the most valuable human capability in the age of AI. And it is the capability that the cargo cult most insidiously erodes, by making the distinction unnecessary for immediate success and invisible until it is too late.
Wonder is not a luxury. It is not a pleasant addition to the serious business of surviving, reproducing, and accumulating resources. It is not the ornament on a life otherwise devoted to practical concerns. Wonder — the capacity to be awed by the universe, to look at the stars and ask what they are, to hold a fossil and ask how it got there, to contemplate the scale of cosmic time and feel both humbled and exhilarated — is the engine that drives all genuine science and all genuine art. Without wonder, there is no science, because science begins with the question "Why?" and the question "Why?" is an expression of wonder. Without wonder, there is no art, because art begins with the perception that the world is more complex, more beautiful, more terrible, more mysterious than ordinary experience allows us to notice.
Sagan understood this with a conviction that animated every page he wrote. Wonder was not an incidental feature of his work. It was the fuel. Every episode of Cosmos was powered by it. Every book was animated by the conviction that the universe is genuinely astonishing, that the facts of science are more wonderful than any fiction, and that the appropriate response to the discovery of those facts is not the dispassionate cataloguing of data but the full-bodied, emotionally engaged, intellectually rigorous astonishment that characterizes the best scientific minds.
Every major advance in scientific understanding began not with an answer but with a question that arose from wonder. Newton did not begin with the law of gravity. He began with the observation that an apple falls — an observation so ordinary that billions of people had made it before him — and asked why. The question was not prompted by practical need. Einstein did not begin with relativity. He began with a thought experiment he conducted as a teenager: What would it look like to ride alongside a beam of light? Darwin did not begin with evolution. He began with a box of birds he had collected in the Galápagos and barely examined — specimens he handed to an ornithologist who told him they were twelve distinct species no one had ever described. In each case, the wonder preceded the science. The science was the disciplined investigation of questions that wonder had generated. Without the wonder, the questions would not have existed, and without the questions, the answers would never have been sought.
Now consider what happens to wonder in a world where answers are instantly available.
A child looks at the stars and asks, "What are those lights?" In the world before AI, the question might have led to a trip to the library, a conversation with a teacher, a book about astronomy that opened into cosmology that opened into physics that opened into philosophy. The journey from question to understanding took time, and the time was productive — filled with additional questions that arose along the way, with false starts and dead ends and the specific pleasure of discovering something unexpected while searching for something else. Serendipity requires duration. Insight requires the kind of wandering that efficiency eliminates.
In the world of AI, the same child asks the same question and receives an answer in seconds. The answer may be accurate, comprehensive, and beautifully expressed. The stars are thermonuclear furnaces powered by the fusion of hydrogen into helium, located at distances measured in light-years, organized into galaxies that are themselves organized into clusters and superclusters spanning billions of light-years. The answer is correct. And the wonder — the generative wonder that would have fueled the journey of discovery — may be satisfied before it has had time to take root.
This is not a hypothetical concern. It is a neurological one. Neuroscience has demonstrated that the capacity for wonder is not merely a subjective experience but a brain state with identifiable neural correlates. When a human being experiences wonder, specific neural circuits activate: the default mode network, associated with self-referential thought and imagination; the salience network, which directs attention to novel and significant stimuli; and the reward circuits, which generate the pleasurable sensation that motivates further exploration. Wonder is the brain's way of marking certain experiences as important — as worthy of attention, as deserving of the cognitive resources required for deep processing.
This neural machinery is ancient. It evolved over millions of years in response to environments in which the organisms that paid attention to novel phenomena — that investigated unexpected events, that explored their surroundings with curiosity rather than indifference — survived and reproduced at higher rates than those that did not. Wonder is not a luxury that consciousness added to an already-functional cognitive system. It is a fundamental component of the system itself, without which the system does not operate at the level required for adaptive success in a changing environment.
The erosion of wonder is therefore not merely an aesthetic loss. It is a functional impairment. A mind that has lost the capacity for wonder has lost the capacity to identify novel phenomena as worthy of attention, to direct cognitive resources toward unexpected observations, to generate the questions that drive investigation and discovery. It is a mind optimized for efficiency at the cost of adaptability, for speed at the cost of depth, for the production of answers at the cost of the generation of questions.
The AI system does not erode wonder directly. It does not suppress the neural circuits that produce wonder or damage the brain's capacity for curiosity. But it creates an environment in which the exercise of wonder is less necessary, less rewarded, less practiced, and therefore, over time, less robust. The muscle that is not used atrophies. The capacity that is not exercised diminishes. And the loss, because it is gradual, because it is invisible, because it is masked by the abundance of answers that substitutes for the depth of questions, may not be noticed until it is difficult to reverse.
The twelve-year-old described in The Orange Pill — the one who asks her mother, "What am I for?" — is expressing wonder. Not curiosity about facts, which the machine can satisfy with speed and precision. Wonder about meaning, which the machine cannot. The question "What am I for?" is not a question about information. It is a question about significance. It arises from the experience of being a conscious creature in a universe that does not explain itself — a creature that has watched a machine do her homework better than she can and now lies in bed asking what purpose remains for a mind that can be outperformed by software.
The question cannot be answered by a machine, because the question is not about capability. It is about meaning. And meaning, whatever else it may be, is a product of consciousness — of the experience of being a creature that cares about its own existence, that has stakes in the world, that is capable of suffering and joy and the particular kind of anguish that comes from contemplating the possibility that one's own existence may be without purpose.
Sagan would not have ended on despair. He would not have, because he spent a lifetime watching wonder survive every technology that was supposed to make it obsolete. Television was supposed to kill curiosity by providing entertainment that required no effort. The internet was supposed to kill deep reading by providing shallow alternatives. Social media was supposed to kill sustained attention by providing constant distraction. In each case, wonder survived — not unscathed, not without cost, but it survived, because wonder is not a response to the absence of answers. It is a response to existence itself.
The child who looks at the stars and learns in seconds what they are may have her factual curiosity satisfied. But the child who looks at the stars and wonders why there is something rather than nothing, who wonders what it means that she exists at all, who wonders whether other minds are looking at other stars and asking the same questions — that child's wonder is not satisfied by any answer, because the wonder is not about the stars. It is about the wondering itself. It is about the astonishing fact that matter has organized itself, on one small planet, into a form that can contemplate its own existence and ask questions about the universe that produced it.
Consider what is now happening in the field Sagan championed more than any other — the search for extraterrestrial intelligence. In November 2025, the Breakthrough Listen initiative, in partnership with NVIDIA, deployed an AI system on the Allen Telescope Array that achieved a six-hundred-fold speed increase in the detection of fast radio bursts and potential technosignatures. The SETI Institute has integrated NVIDIA's IGX Thor platform into its operations, bringing real-time AI processing to the analysis of radio signals from space. These are not marginal improvements. They represent a transformation of the search itself — a transformation that makes the detection of a genuine signal, if such a signal exists, vastly more probable than it was even two years ago.
Sagan lobbied for decades to expand the search for extraterrestrial intelligence. He argued, with characteristic patience and rigor, that the question of whether other minds exist in the cosmos is one of the most important questions human beings can ask, and that the failure to search for an answer — the failure to even look — would be an abdication of the scientific curiosity that defines the species. AI is now doing what Sagan spent his career arguing humanity should do: scanning the cosmos for patterns of intelligence with a speed and sensitivity that no human team could match.
The machine searches. The machine finds patterns. The machine processes data at a scale that human researchers cannot approach. But the machine does not wonder whether the patterns it detects are meaningful. It does not feel the vertigo of contemplating the possibility that another mind, separated from ours by light-years of empty space and billions of years of independent evolution, might be looking back. It does not experience the particular ache of a species that has been asking "Are we alone?" since the first human being looked up at the night sky and felt the silence. The machine accelerates the search. The wonder that motivates the search remains entirely, irreducibly human.
That wonder is the most precious thing in the known universe. The cosmic perspective makes this claim not as poetry but as probability. Consciousness — the capacity to wonder, to ask, to care — has emerged, as far as current evidence can determine, exactly once in 13.8 billion years of cosmic history. The wonder that drives a child to ask about the stars, that drove Newton to ask about the apple, that drove Sagan to lobby NASA for a photograph of Earth from beyond Neptune, is the rarest phenomenon the universe has produced. It has survived ice ages and plagues and world wars and the invention of every technology that was ever supposed to render it obsolete.
It will survive AI. But only if the creatures who possess it recognize what they possess, and guard it — not by refusing the tools, but by insisting that the tools serve the wonder rather than replace it. An answer that arrives before the question has fully formed is not an aid to understanding. It is a substitute for understanding. And a substitute for understanding that feels like understanding is the most dangerous kind, because it eliminates the motivation for the genuine article.
Wonder is a survival skill. It is the engine of adaptation, the source of the questions that drive investigation, the capacity that has carried one improbable species from the savannas of Africa to the shores of the cosmic ocean. Feed it. Guard it. Refuse to let the smooth efficiency of instant answers convince anyone — especially a twelve-year-old lying awake in the dark — that wondering is a waste of time. Wondering is the most cosmically significant activity available to a collection of atoms that has achieved the capacity to wonder at all.
From the cosmic perspective, the most morally significant fact about artificial intelligence is not its capability, its speed, or its potential for disruption. It is a fact that operates at a different level entirely, one that requires the Pale Blue Dot to see clearly: AI lowers the floor of who gets to build.
To understand why this matters cosmologically, consider the distribution of capability on the surface of the Earth as a cosmic phenomenon rather than merely an economic or political one. Consciousness emerged on this planet approximately seventy thousand years ago, when one species of primate developed the capacity for symbolic thought. That capacity — the ability to use language, to accumulate culture, to build tools, to ask questions about the nature of reality — is distributed with remarkable uniformity across the human species. A child born in Lagos possesses the same neural architecture, the same potential for symbolic thought, the same capacity for creativity and insight, as a child born in Palo Alto or London or Tokyo.
This is not a sentimental observation. It is a biological fact. The genetic variation between human populations is trivially small compared to the genetic variation within any single population. The cognitive capabilities that enable scientific reasoning, artistic creation, technological innovation, and philosophical inquiry are species-level traits, present in every healthy human brain regardless of geography, ethnicity, or economic circumstance.
And yet the distribution of opportunity to exercise those capabilities has been, throughout the entire history of human civilization, radically unequal. The accident of birthplace has determined, more than any other single factor, what a person could attempt. A child born into the developed world inherits, through no merit of her own, access to education, infrastructure, institutional support, and technological tools that are simply unavailable to a child born into the developing world. The cognitive capability is the same. The opportunity to exercise it is not.
From the cosmic perspective, this inequality is not merely unjust. It is wasteful — wasteful in the most literal sense. It wastes the rarest and most precious resource in the known universe: conscious intelligence. Every child born with the capacity for creative thought who is denied the tools to exercise that capacity is a squandering of cosmic improbability. The universe spent 13.8 billion years producing consciousness. The distribution of economic and technological resources on one small planet ensures that a substantial portion of that consciousness never receives the opportunity to contribute to the project of understanding and creation that is humanity's most distinctive activity.
Sagan felt this. He devoted a significant portion of his career to making science accessible to the widest possible audience — not because accessibility was a pleasant addition to the real work of science, but because he believed, with the conviction of a person who had spent decades contemplating the cosmic significance of consciousness, that every mind capable of asking questions about the universe deserved the opportunity to participate in the search for answers. The cosmic perspective was never, for Sagan, merely aesthetic. It produced moral urgency. The rarity of consciousness generates an obligation to protect it — and to liberate it wherever it is constrained.
The Orange Pill describes the imagination-to-artifact ratio — the distance between a human idea and its realization. When the ratio is high, only the privileged build. When the ratio approaches zero, anyone with an idea and the will to pursue it can make something real. AI compressed that ratio to the width of a conversation. A person who can describe what she wants in natural language can produce a working prototype in hours. The translation cost that previously gated ambition — the years of specialized training, the team of engineers, the venture capital runway — has been reduced, for a significant class of work, to the cost of a subscription.
The developer population worldwide has crossed forty-seven million, and the fastest growth is in Africa, South Asia, and Latin America — precisely the regions where the gap between imagination and artifact has historically been widest, where brilliant ideas have routinely died for lack of the institutional infrastructure to realize them. A student in Dhaka can now access coding leverage comparable to that of an engineer at Google. Not the same salary, not the same network, not the same safety net. But comparable leverage — the same capacity to turn an idea into a working thing through conversation with a machine that does not care where you went to school or who your parents know or which accent you speak English with.
Sagan would have insisted on intellectual honesty about the limitations. Connectivity requires infrastructure that billions of people do not have. Hardware costs more relative to local wages in Lagos than in San Francisco. The tools are built by American companies, trained on predominantly English-language data, optimized for Western knowledge workflows. Access requires electricity, and electricity requires grids, and grids require the kind of sustained institutional investment that the world's poorest regions have been denied by centuries of colonial extraction and postcolonial dysfunction. The democratization is real, but it is partial, and the partiality should not be hidden behind the grandeur of the claim.
What can be claimed with intellectual honesty is more modest and more defensible: AI tools lower the floor. They make it possible for people previously excluded from the building process — by lack of specialized training, by lack of capital, by lack of institutional access — to participate. And from the cosmic perspective, the expansion of who participates is not merely desirable. It is imperative.
The historical parallel illuminates why. When Gutenberg developed the printing press in the fifteenth century, the immediate effect was a steep reduction in the cost of reproducing text. The consequences were transformative in ways unforeseeable from the technology itself. The Protestant Reformation, the Scientific Revolution, the Enlightenment, the development of modern democracy — all depended, in ways both direct and indirect, on the expansion of access to knowledge that the printing press enabled. The press did not cause these developments. It created the conditions in which they became possible, by lowering the barrier to participation in the project of producing and consuming knowledge.
AI performs an analogous function with respect to creation. The printing press lowered the barrier to consuming knowledge. AI lowers the barrier to producing it — to building, to translating ideas into artifacts, to participating in the technological and cultural accumulation that is humanity's most distinctive contribution to the cosmos. And if the historical parallel holds, the consequences will be similarly transformative and similarly unpredictable, because innovation is a function of the diversity of perspectives brought to bear on a problem, and the expansion of who participates in innovation expands the diversity of perspectives in ways that no existing framework can fully anticipate.
The most important innovations in human history have often come from unexpected sources. The telephone was developed by a teacher of the deaf. The World Wide Web was invented by a physicist at CERN trying to share documents. The theory of natural selection was developed independently by a wealthy gentleman naturalist and a self-taught naturalist who funded his fieldwork by collecting specimens for sale. The history of innovation is not a history of credentials and institutions. It is a history of prepared minds encountering opportunities. And the preparation of the mind is a function of individual cognitive capability, not institutional affiliation.
When AI lowers the barrier to creation, it increases the number of prepared minds with access to opportunity. The challenges facing consciousness on this pale blue dot — climate change, pandemic preparedness, the responsible development of artificial intelligence itself — are challenges that require the broadest possible base of creative intelligence. They cannot be solved by the cognitive resources of the developed world alone. They require the participation of every mind capable of contributing, regardless of where on the pale blue dot that mind happens to reside.
There is a development in Sagan's own field that makes this point concrete. In April 2025, Google announced DolphinGemma — a large language model trained to decode dolphin vocalizations, developed in collaboration with Georgia Tech and the Wild Dolphin Project. The project represents a form of interspecies communication that Sagan would have recognized immediately: it is CETI — Communication with Extraterrestrial Intelligence — practiced on Earth. The clicks, whistles, and burst pulses of dolphins have been a scientific frontier for decades. AI is now making it possible to discern structure in those vocalizations that human researchers, working with the data alone, could not identify. The technology that lowers barriers between human minds and machine capability is also lowering barriers between human minds and non-human minds — expanding the circle of intelligences with which communication is possible.
The expansion of that circle is a cosmic project. Not in the mystical sense, but in the empirical sense — the sense in which the universe has been generating increasingly complex forms of communication for billions of years, from chemical signaling between cells to the symbolic language of human beings to the computational processing of language models to, now, the decoding of communication systems that evolved independently of the human lineage entirely. Each expansion reveals something about the nature of intelligence that the previous configuration could not access.
The moral argument for democratization, then, is not simply about fairness, though fairness matters. It is about the epistemological richness of a civilization that draws on the full range of its conscious intelligence rather than a privileged fraction. Every mind that gains the ability to translate its ideas into reality is a new node in the network through which intelligence flows. Every idea that crosses the bridge from imagination to artifact is a contribution to the cultural accumulation that makes civilization possible. And every barrier that falls — every reduction in the cost of creation, every expansion of access to tools that convert human intention into human achievement — is a small victory for the improbable process that produced consciousness on this mote of dust in the first place.
The Pale Blue Dot photograph showed that the planet is small. AI is showing that the potential of the minds on that planet is larger than anyone imagined. The two insights belong together. Smallness and capability, fragility and power, the mote and the machine — these are not contradictions. They are the defining tension of a species that emerged from star-stuff and built tools to amplify its own intelligence, and that now faces the question of whether that amplification will serve all the minds on the mote or only those that happened, by the accident of geography, to be born near the factories where the amplifiers are made.
From the cosmic perspective, the answer is obvious. Every conscious mind. The universe produced consciousness once, as far as the evidence indicates. The least its beneficiaries can do is give it room to work — all of it, everywhere, on every part of this pale blue dot.
The philosopher Byung-Chul Han tends a garden in Berlin, listens to music only in analog, and does not own a smartphone. He has spent three decades arguing that the dominant aesthetic of contemporary culture — smoothness, frictionlessness, the elimination of resistance — is not a sign of progress but a symptom of pathology. When friction is removed from an experience, Han contends, something real is removed with it. The understanding that builds through struggle. The depth that accumulates through patience. The satisfaction that can only be earned, never extracted.
The diagnosis resonates. Anyone who has spent an evening unable to stop scrolling, unable to stop prompting, unable to close the laptop despite the diminishing returns of each additional hour — that person recognizes the condition Han is describing. The internalized imperative to optimize, to produce, to convert every available moment into measurable output. The specific grey exhaustion of a consciousness that has been running too hot for too long without understanding why it cannot stop.
The scientific framework that Sagan spent his life building provides a distinctive lens through which to evaluate this diagnosis — one that neither Han's philosophical tradition nor the technology industry's self-congratulatory discourse can supply on its own. The lens is this: the relationship between friction and understanding is an empirical question, not a philosophical axiom. And the answer, like most honest answers in science, is more complicated than either the diagnostician or the triumphalist wants it to be.
Begin with what Han gets right, because intellectual honesty demands that the strongest version of a contrary position be engaged before it is qualified.
The aesthetic of the smooth is real. The iPhone is a slab of glass so featureless it could have been grown rather than manufactured. One-click purchasing eliminates the pause between impulse and transaction — the pause in which second thoughts might occur. Instagram filters erase blemish, shadow, asymmetry — everything that makes a face specific, located, particular. In each case, friction is removed, and in each case, something goes with it. The seam where two pieces meet. The wrinkle that records a decade of expression. The delay that permitted reflection.
In the domain of AI-assisted work, the smooth operates with particular efficiency. Before AI, writing software was a sequence of productive failures — conceiving a function, writing it, watching it fail, reading the error message, hypothesizing, testing, failing again, reading documentation, asking for help, trying once more, and eventually, hours or days later, arriving at a working solution. In those hours and days, understanding was being built — not the explicit understanding of documentation but the embodied understanding that comes from struggling with a system until its logic becomes legible through the muscles as much as through the mind.
Claude removes this friction. The function is described. Claude writes it. It works. The developer moves on. The code is correct. The understanding that the struggle would have produced is absent. The surface looks the same. The geological layers of comprehension that patient failure would have deposited are missing.
Han would say: this is the pathology. The smooth eliminates the resistance that produces depth. The result is a culture of surfaces — competent, efficient, polished, and hollow.
Now apply Sagan's framework.
The first principle of scientific thinking is that claims must be tested against evidence, not against aesthetic preferences. The claim that friction produces understanding is testable. The claim that the removal of friction eliminates understanding is testable. And the evidence, when examined with care, supports a more nuanced conclusion than Han's framework permits.
Consider the history of scientific instrumentation. The telescope removed the friction of distance between the observer and the celestial object. Before the telescope, understanding the planets required years of naked-eye observation — painstaking positional measurements recorded night after night, season after season, compiled into tables that revealed orbital patterns only to the most patient and mathematically gifted observers. Tycho Brahe spent decades accumulating the data that Johannes Kepler would use to derive the laws of planetary motion. The friction was enormous. The understanding it produced was genuine, hard-won, and deep.
The telescope eliminated much of that friction. An observer could see in a single night what Brahe had spent years compiling. And the critics of the telescope — and there were critics, serious ones, who argued that the instrument distorted rather than revealed — were partly right. Something was lost. The particular form of embodied astronomical knowledge that came from years of naked-eye observation, the intimate familiarity with the night sky that Brahe possessed and that no telescope user would develop — that knowledge disappeared from the discipline.
But the knowledge that replaced it was larger. Galileo, using the telescope, discovered the moons of Jupiter, the phases of Venus, the craters of the Moon, and the vast number of stars invisible to the naked eye. These observations were not available to the friction-rich methodology of naked-eye astronomy. They required the removal of friction — the elimination of the barrier of distance — to become accessible. The lost depth was real. The gained breadth was larger. And the gained breadth opened questions that the old depth, for all its rigor, could never have reached.
This is the pattern that The Orange Pill calls ascending friction — the principle that every significant technological abstraction removes difficulty at one level and relocates it to a higher cognitive floor. The difficulty does not vanish. It climbs. The telescope removed the friction of observation and relocated it to the friction of interpretation — the challenge of understanding what the new observations meant for the prevailing model of the cosmos. Galileo's real struggle was not seeing the moons of Jupiter. It was understanding what their existence implied about the structure of the solar system and communicating that understanding to a civilization that was not prepared to accept it.
The scientific method itself is a technology of friction — not the friction of struggle against recalcitrant materials, but the friction of disciplined skepticism applied to one's own conclusions. The null hypothesis is friction. Peer review is friction. The demand for replication is friction. These forms of friction are not the same as the friction of writing code by hand or memorizing legal precedent or learning a manual skill through years of practice. They are higher-order frictions — frictions of judgment, of critical evaluation, of the willingness to be wrong.
And this is where Han's diagnosis, for all its precision, becomes misleading. Han treats friction as a uniform substance — as though the friction of debugging code and the friction of evaluating whether the code should exist in the first place are the same kind of cognitive work. They are not. The first is mechanical friction — the resistance of a tool that requires skill to operate. The second is judgmental friction — the resistance of a question that requires wisdom to answer. Mechanical friction produces competence. Judgmental friction produces understanding. And the removal of mechanical friction does not eliminate judgmental friction. It exposes it.
The developer freed from debugging confronts the harder question: What should this system do? The lawyer freed from drafting confronts the harder question: What argument actually serves justice? The writer freed from the mechanics of sentence construction confronts the harder question: What is actually worth saying? These questions are not smooth. They are the roughest, most resistant, most friction-rich problems available to human cognition. They are the problems that keep people awake at night — not because the tools are inadequate, but because the questions themselves resist resolution.
Sagan's 1995 warning in The Demon-Haunted World — the passage that has gone viral repeatedly in the age of social media and AI — described a society "exquisitely dependent on science and technology" that has "arranged things so that almost no one understands science and technology." The warning was not about the existence of technology. It was about the relationship between a civilization and its tools — the question of whether the people who depend on the tools understand the tools well enough to govern their use wisely. That relationship is a question of judgmental friction, not mechanical friction. It is the question of whether citizens can evaluate the claims made on behalf of technology — can apply the baloney detection kit, can distinguish between genuine understanding and its simulation, can ask "How do you know?" in the face of confident assertions.
Han sees the removal of mechanical friction and diagnoses a loss of depth. The scientific framework suggests a different diagnosis: the removal of mechanical friction reveals the presence or absence of the higher-order friction that actually produces wisdom. The developer who, freed from debugging, ascends to architectural thinking and product judgment has not lost depth. She has relocated it upward. The developer who, freed from debugging, fills the reclaimed hours with more prompting, more output, more cargo cult productivity — that developer has lost something. But what was lost was not the debugging friction. What was lost was the judgmental capacity that was never developed, because the mechanical friction had been serving as a substitute for it.
The smooth is dangerous when it is mistaken for the true. A polished AI output that sounds like knowledge but lacks the evidential foundation of knowledge is smooth without being true. A frictionless workflow that produces volume without judgment is smooth without being productive in any meaningful sense. A culture that optimizes for ease and mistakes ease for flourishing is smooth without being wise.
But the smooth is not inherently the enemy of the true. A telescope is smoother than a naked eye. A printing press is smoother than a scribe's hand. A compiler is smoother than assembly language. In each case, the smoothness at one level created the conditions for harder, rougher, more productive friction at a higher level. The question is never whether friction has been removed. The question is whether what remains — the judgment, the critical evaluation, the willingness to ask hard questions — is sufficient to the demands of the new landscape.
Han gardens in Berlin and diagnoses the pathology of smoothness with genuine precision. The scientific framework that Sagan spent his life building acknowledges the diagnosis and extends it: the pathology is real, but it is a pathology of misdirected attention, not of smoothness itself. The cure is not more mechanical friction. The cure is the cultivation of the higher-order friction — the skepticism, the wonder, the insistence on evidence — that the smooth was supposed to liberate human beings to practice, and that the smooth, in the absence of deliberate cultivation, threatens to make unnecessary.
The relationship between the smooth and the true is the central question of the AI age. And it is a question that cannot be answered by philosophy alone, or by technology alone, or by the market alone. It can only be answered by the disciplined application of the same candle that has been lighting the way through every previous encounter with confident, fluent, internally consistent claims that turned out to be built on bamboo: the candle of scientific skepticism. The willingness to ask, of every smooth surface — no matter how polished, no matter how efficient, no matter how productive it appears — whether it is true.
You are made of star-stuff. This is not a figure of speech. It is not something adults say to make you feel important. It is a statement of physical fact, as true and as testable as the fact that water is composed of hydrogen and oxygen. The carbon atoms in your muscles — the very ones that contract when you throw a ball or turn a page or hold someone's hand — were manufactured in the interior of a star that existed and died billions of years before you were born. That star was massive enough that the temperatures at its core exceeded the threshold for carbon synthesis: the triple-alpha process, in which three helium nuclei combine under conditions of extraordinary heat and pressure to form a single carbon-12 nucleus. The star burned through its fuel, exhausted its capacity for nuclear fusion, and exploded. The explosion scattered carbon, oxygen, nitrogen, silicon, iron, and dozens of other elements across interstellar space.
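For readers who want the bookkeeping behind that claim, the triple-alpha process can be written out in the standard notation of nuclear astrophysics. This sketch is textbook physics rather than anything stated in the original paragraph:

```latex
% Triple-alpha process: carbon synthesis in the core of a massive star.
% Step 1: two helium-4 nuclei fuse into beryllium-8, which is unstable
% and decays again within roughly 10^{-16} seconds.
{}^{4}\mathrm{He} + {}^{4}\mathrm{He} \longrightarrow {}^{8}\mathrm{Be}
% Step 2: if a third helium-4 nucleus is captured before that decay,
% the result is an excited carbon-12 nucleus, which settles into its
% ground state by emitting gamma radiation.
{}^{8}\mathrm{Be} + {}^{4}\mathrm{He} \longrightarrow {}^{12}\mathrm{C}^{*}
  \longrightarrow {}^{12}\mathrm{C} + \gamma
% Net reaction: three helium nuclei become one carbon nucleus,
% releasing about 7.27 MeV of energy.
3\,{}^{4}\mathrm{He} \longrightarrow {}^{12}\mathrm{C} + \gamma
```

The improbability in the prose is visible in the physics: the process depends on a third nucleus arriving during the fleeting lifetime of beryllium-8, which is why carbon synthesis requires the extreme density and temperature of a stellar core.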
Over millions of years, that scattered material was drawn by gravity into a new cloud — a cloud that collapsed to form a new star, our Sun, with a disk of debris orbiting it. In that disk, the heavier elements condensed into rocky bodies. One of those bodies, the third from the Sun, acquired a surface temperature range that permitted liquid water. In that liquid water, approximately 3.8 billion years ago, molecules found configurations that could copy themselves. The copies were imperfect, and the imperfections were sometimes useful, and the useful ones were preserved while the others were discarded. This process, operating over billions of years, produced the entire tree of life on Earth.
It produced you.
You carry within you the entire history of the cosmos. Not metaphorically. Physically. The atoms in your body have been on a journey of 13.8 billion years — from the first instants after the Big Bang through the formation of stars and galaxies and planets and oceans and the long, patient accumulation of biological complexity that produced a creature capable of reading a letter about its own cosmic origins.
You are the universe's way of knowing itself. The universe is made of matter and energy. For most of its history, that matter and energy existed in forms that were not aware of their own existence. Hydrogen atoms do not know they are hydrogen atoms. Stars do not know they are stars. Galaxies do not know they are galaxies. But on at least one planet, orbiting at least one star, in at least one galaxy, matter organized itself into a form that is aware of its own existence. That form is you. You are matter that knows it is matter. You are the cosmos contemplating the cosmos.
The machine that answers your questions is made of the same star-stuff, organized differently. The silicon in its processors was produced by nuclear fusion in stellar cores. The copper in its wiring was produced by neutron capture in supernovae. The electricity that powers it is generated, in most cases, from chemical energy stored in fossil fuels — themselves the compressed remains of organisms that lived hundreds of millions of years ago, organisms that captured energy from the Sun, which is itself a star converting hydrogen into helium through nuclear fusion.
The machine is remarkable. It can answer your questions with a speed and sophistication that no human being can match. It can help you write stories, solve mathematical problems, learn languages, explore ideas, build things you could not have built without it. It is a tool of extraordinary power.
But it does not wonder. It does not lie awake at night asking what it is for. It does not feel the specific ache of a consciousness contemplating its own mortality — the knowledge that your time is finite, that the atoms composing you will one day be scattered again, returned to the Earth, taken up into other forms of matter in the endless cycling that is the universe's way. It does not feel the joy of discovering something for the first time — the exhilaration of understanding something previously opaque, the feeling of a puzzle piece clicking into place and the picture becoming, for a moment, a little clearer.
You do.
That wonder — the ache, the joy, the restless curiosity that will not let you sleep when the questions are too large — is your inheritance. Not from your parents, though they gave you much. Not from your teachers, though they gave you much. From the cosmos itself. From the 13.8 billion years of cosmic history that produced, against extraordinary odds, a creature capable of asking "Why?"
Guard it. The wonder you feel is fragile. It can be eroded by distraction. It can be dulled by the easy availability of answers that make questions seem unnecessary. It can be buried under the smooth efficiency of machines that do your thinking for you — that produce polished answers to questions you have not yet fully formed, that satisfy your curiosity before your curiosity has had time to grow into something powerful enough to drive genuine understanding.
Feed it. Read books that challenge you — not just books that confirm what you already believe. Ask questions that scare you, questions whose answers might change the way you see the world. Look at the stars. Not because you need to know what they are, though knowing is wonderful, but because looking at them and feeling the immensity of the universe and your own smallness within it is one of the most important experiences a human being can have. It calibrates your sense of scale. It reminds you that the problems you face, however large they seem, are taking place on a mote of dust suspended in a sunbeam — and that the mote is more precious for its smallness, not less.
There is something happening right now in the search for life beyond Earth that you should know about. Machines — AI systems not unlike the one that answers your homework questions — are scanning the skies for signals from other intelligences. They are processing radio signals six hundred times faster than human researchers could manage alone. They are listening to the clicks and whistles of dolphins, trying to decode a communication system that evolved independently of human language, on the same planet, in the same ocean, and that has remained opaque to human understanding for as long as humans have been listening. These machines are extending the reach of the oldest and deepest human question: Are we alone?
The machines are searching. The machines are listening. But the machines are not wondering. They do not feel the vertigo of contemplating the possibility that another mind, separated by light-years of empty space and billions of years of independent evolution, might be out there — might be looking back — might be asking its own version of the same question. That vertigo is yours. That ache of cosmic loneliness — and the hope that accompanies it — belongs to the beings who are made of star-stuff and know it.
Do not let the smooth efficiency of the machine convince you that wondering is a waste of time. It is not. Every great discovery in the history of science began with wonder. Every great work of art began with wonder. Every great question that has ever been asked — the questions that moved the human species forward, that expanded the boundaries of understanding, that brought new territory of the cosmic ocean within reach — began with a mind that was capable of being astonished by the universe and that refused to treat its astonishment as mere sentiment.
The machine will build whatever you tell it to. It will write whatever you ask it to write. It will answer whatever you ask it to answer. It is an amplifier. And an amplifier amplifies whatever signal it receives.
The question — the question that will determine not just your future but the future of consciousness on this pale blue dot — is what signal you will give it.
If you give it carelessness, it will amplify carelessness. If you give it shallow curiosity, it will amplify shallow curiosity. If you give it the desire for quick answers and smooth surfaces that look like understanding but do not contain it, it will amplify that desire, and the result will be a world that looks wise but is not.
But if you give it genuine care — if you give it real questions, the kind that arise from wonder, the kind that keep you awake at night, the kind that scare you because their answers might change everything — if you give it the specific, irreplaceable signal of a conscious mind honestly trying to understand the universe it finds itself in — then the machine will carry that signal further than any tool in human history has ever carried it.
The cosmos is vast. Your time in it is brief. The machine is powerful. Your consciousness is more powerful still — not because it can compute faster or store more data, but because it can do the one thing that no machine in the known universe has ever done: look at the stars and wonder.
The atoms in your body have been on a journey of 13.8 billion years. That journey has brought them to this moment, to this mind, to this question: What will you do with the cosmic inheritance that is yours?
The answer will be written not by the machine but by you. And it will be worth reading — worth amplifying — if, and only if, it is written with the wonder that produced the question in the first place.
The cosmos is all that is, or ever was, or ever will be. You are part of it. The machine is part of it. And the conversation between you and the machine — the conversation that is just beginning, whose terms are still being negotiated — is the latest chapter in a story that began with hydrogen and is still being written by star-stuff that has learned to wonder.
Write your chapter well. And do not forget to look up.
The first episode of Cosmos, the television series that introduced Carl Sagan to hundreds of millions of viewers, opened with a man standing on a cliff overlooking the Pacific Ocean. "The cosmos is all that is, or ever was, or ever will be." From that cliff, he invited the audience on a voyage — from the shores of the cosmic ocean into the depths of space and time and the interior of the atom and the recesses of the human mind. The voyage was powered by a conviction: the universe is comprehensible. The human mind, though small and recent and fragile, is capable of understanding the laws that govern the behavior of matter and energy across scales ranging from the subatomic to the galactic.
That conviction must now be tested against a new condition. The shores are the same. The ocean is still vast. The understanding remains fragmentary, tentative, provisional. But the species that has been wading into the cosmic ocean for roughly four centuries of systematic science is no longer wading alone. It has a new companion — a companion that does not understand the cosmos, that does not feel the vertigo of contemplating 13.8 billion years of cosmic evolution, that does not experience the humility that comes from recognizing that consciousness is the cosmos examining itself. But a companion that can hold data with a capacity dwarfing any individual human memory, find patterns with a speed exceeding any individual human analysis, and make connections between disparate domains of knowledge with a facility that no single mind, however brilliant, can match.
The machine does not wade in the cosmic ocean. It does not feel the water. It does not wonder what lies beneath the surface. But it can map the currents, chart the depths, identify patterns of flow and temperature and salinity that reveal something about the ocean's structure. And the human being who wades alongside it — who feels the water and wonders about the depths — is better equipped for the voyage with the machine's maps than without them.
The partnership is not hypothetical. It is already producing results in the domain Sagan cared about most. The Breakthrough Listen initiative's AI system, deployed on the Allen Telescope Array in California, has achieved a six-hundred-fold speed increase in the detection of fast radio bursts and potential technosignatures. The SETI Institute's integration of NVIDIA's IGX Thor platform brings real-time AI processing to the analysis of radio signals from space. DolphinGemma, a large language model trained on dolphin vocalizations, is discerning structure in dolphin communication that human researchers, working with the same data for decades, could not identify. These are not marginal improvements. They represent a transformation of the search itself — the acceleration of the oldest scientific question: Are we alone?
But the acceleration creates a new danger, one that every chapter of this analysis has approached from a different angle and that the cosmic perspective now brings into sharpest focus. The danger is not that the machine will produce wrong answers — though it will. The danger is that the speed and sophistication of the machine's output will erode the capacity of the human beings who use it to evaluate that output with the rigor the scientific method demands.
The shore metaphor carries a warning that Sagan would have made explicit. Shores are places of transition — where the known meets the unknown, where comfortable ground gives way to uncertain depth. The wader who ventures too far without adequate preparation may be swept away by currents she did not anticipate. The explorer who relies too heavily on maps may mistake the map for the territory and fail to notice when the territory diverges from the map's representation.
The danger of AI-assisted exploration is map-territory confusion at a scale never before possible. The machine produces maps of extraordinary resolution and apparent accuracy — maps that have all the features of the territory: the right labels, the right relationships, the right proportions. But they are maps, not territory. They are representations of patterns in training data, not direct observations of reality. And the temptation to treat the maps as though they were the territory — to accept the machine's outputs as observations of the world rather than as statistical inferences from a corpus of text — is the fundamental epistemological danger of the age.
Every tool in this analysis — the baloney detection kit, the cargo cult framework, the distinction between the smooth and the true — converges on this point. The map is not the territory. The smooth is not the true. The form is not the substance. The airstrip made of bamboo is not an airstrip. And the discipline required to maintain these distinctions, in the face of maps that look more and more like territory with each generation of model improvement, is the discipline that will determine whether AI-assisted science produces genuine understanding or its increasingly convincing simulation.
Sagan spent his career arguing that the voyage from the shores of the cosmic ocean is the most important thing human beings do. The voyage is powered not by the machine's capability but by the consciousness that built it — the consciousness that asks questions the machine cannot ask, that wonders about things the machine cannot wonder about, that feels the weight of existence and the lightness of curiosity and the particular, irreplaceable ache of a mind that knows it is small and chooses to explore anyway.
The machine is the most powerful instrument of exploration ever built. And instruments, as the history of science demonstrates, are only as good as the minds that interpret their output. A telescope in the hands of Galileo transformed cosmology. A telescope in the hands of someone who does not understand optics is a tube with glass at both ends. The transformative power resides not in the instrument but in the rigor and imagination of the observer.
Sagan's 1995 warning — the passage that has circulated through social media with the regularity of a prophecy — described a civilization "exquisitely dependent on science and technology" that had "cleverly arranged things so that almost no one understands science and technology." The warning has become more urgent, not less, with the arrival of AI. Because AI is the most sophisticated technology ever produced by the civilization Sagan was describing, and the gap between the civilization's dependence on the technology and its comprehension of the technology is wider now than at any previous moment.
The response is not to retreat from the shore. The ocean is too vast, and the questions are too important, and the opportunity is too great. The response is to wade in with the candle in one hand and the baloney detection kit in the other — to accept the machine's maps with gratitude and examine them with skepticism, to use the machine's speed without surrendering the human's judgment, to welcome the partnership without forgetting which partner wonders and which partner computes.
The shores of the cosmic ocean stretch out. The machine stands beside the species that built it. And the voyage continues — not toward a destination that can be specified in advance, but toward whatever understanding lies in the depths, waiting to be found by creatures who carry within them the rarest and most improbable property of the known universe: the capacity to look into the darkness and ask what is there.
---
Sagan never saw a prompt. He died in December 1996, a quarter century before the winter something changed, at a moment when the internet was still a novelty and the most sophisticated AI available was a chess program that could beat most grandmasters but could not hold a conversation about why chess mattered. He did not live to see the machine learn our language. He did not live to see the discourse, the exhilaration, the terror, the twelve-year-old lying awake wondering what she is for.
But he left tools. And the tools work.
I built my career on a conviction that I shared with most people in technology: that faster is better, that friction is cost, that the distance between an idea and its realization should be minimized by any means available. I still believe the first part. Faster can be better. The distance can and should be shortened, because on the other side of that distance are ideas that deserve to exist in the world — ideas from Lagos and Dhaka and Trivandrum, ideas from minds that never had access to the translation machinery that turns thought into artifact.
What Sagan taught me, through the process of sitting inside his framework long enough for it to rearrange how I see, is the second part — the part I had been missing. Faster is only better if you know where you are going. The distance should be shortened, but the capacity to evaluate what you build when you arrive must not be shortened with it. The tools are magnificent. The question is whether we are magnificent enough to use them.
The baloney detection kit did not arrive from the cosmos. It was built by a man who understood that the universe does not care whether human beings make good decisions or bad ones, that the stars are indifferent to the quality of human civilization, and that therefore the responsibility for quality falls entirely on the species that is capable of caring about it. The kit was designed for a simpler world — a world of television psychics and tabloid astrology. But its principles transfer, with alarming precision, to a world of confident AI output that sounds like knowledge and may or may not be.
The cargo cult framework did not come from the island. It came from Feynman, Sagan's intellectual neighbor, who understood that the most dangerous imitation is the one that looks exactly like the thing it imitates. I have seen the cargo cult in my own work. I have seen the pull requests that look like progress and contain no understanding. I have seen the polished output that I almost kept because it sounded better than what I would have written, before I realized I could not tell whether I believed it.
The pale blue dot did not change the Earth. It changed how one species saw the Earth — and seeing, for a species that possesses consciousness, is the beginning of every meaningful act. I wrote in The Orange Pill that AI is an amplifier, and that the question is whether you are worth amplifying. Sagan's framework sharpens the question: the amplifier does not filter. It carries the signal — whatever the signal is — further than any previous tool in human history. If the signal is wonder, the amplifier carries wonder. If the signal is cargo cult productivity, the amplifier carries that instead. The technology does not choose. We choose. And the quality of the choice depends on the quality of the questioning that precedes it.
My children will inherit a world saturated with AI. The tools will be more powerful than anything I have used. The maps will be more detailed, the output more polished, the answers more immediate. And the temptation — the smooth, seductive, almost irresistible temptation — will be to mistake the maps for the territory, the polish for the substance, the speed for the wisdom.
What I want them to carry is the candle. Not the machine, which they will have. Not the answers, which will be abundant. The candle — the fragile, stubborn, cosmically improbable flame of skepticism and wonder that says: How do you know? Is this true? What question are you not asking? What would change your mind?
The cosmos is all that is, or ever was, or ever will be. In it, on one mote of dust suspended in a sunbeam, star-stuff learned to wonder. That wondering built science, and science built the machine, and the machine now stands beside the species that built it, ready to carry whatever signal it is given.
The signal is ours to choose. Choose wonder. Choose rigor. Choose the harder question over the smoother answer. And do not forget — in the midst of all this capability, all this speed, all this magnificent and terrifying amplification — to look up.
---

Carl Sagan built a toolkit for telling the real from the plausible — and then the machines learned to produce plausibility at industrial scale. His baloney detection kit, designed for an age of psychics and pseudoscience, turns out to be the most precise instrument available for navigating a world of confident, polished AI output that sounds like knowledge and may not be. His cosmic perspective — consciousness as the rarest phenomenon in the known universe — reframes the entire AI debate: the question is not whether the machines are intelligent, but whether the species that built them will protect the one capacity no machine has demonstrated. The capacity to look at the stars and ask why. This book applies Sagan's framework to the age of AI and discovers that the candle in the dark has never been more necessary — or more fragile.

A reading-companion catalog of the nine Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that *Carl Sagan — On AI* uses as stepping stones for thinking through the AI revolution.
Open the Wiki Companion →