CETI — Communication with Extraterrestrial Intelligence — is the more ambitious cousin of SETI, concerned not merely with detecting signals but with the protocols required to interpret and respond to them if they arrive. The 1971 Byurakan conference in Armenia brought together American and Soviet scientists, including Carl Sagan, Marvin Minsky, Frank Drake, and Iosif Shklovsky, to consider systematically what communication with a non-human intelligence would require. The resulting protocols — emphasizing patience, humility, the avoidance of anthropomorphic projection, and the primacy of evidence over assumption — anchor the Sagan volume's argument that CETI was, without anyone knowing it, also the founding intellectual work for engagement with artificial intelligence.
There is a parallel reading that begins from the material conditions of AI development rather than its epistemic opacity. The CETI protocols assume a symmetrical encounter between intelligences, each sovereign in its own sphere, meeting across the void. But AI emerges from specific economic imperatives: venture capital seeking returns, corporations needing competitive advantage, militaries requiring strategic dominance. The opacity that makes AI seem alien is not cosmic distance but deliberate obfuscation — trade secrets, proprietary architectures, black-box decision systems that resist scrutiny not because they are fundamentally unknowable but because transparency would threaten profit margins.
The CETI framework's emphasis on patience and humility becomes, in this reading, a kind of learned helplessness before systems we absolutely could understand if their architectures were open, their training data public, their optimization functions transparent. The alien encounter metaphor naturalizes what is actually a political choice: to accept these systems as inscrutable rather than demand they be scrutable. Every moment we spend developing protocols for engaging with AI's supposed otherness is a moment not spent requiring that AI be built to standards of interpretability. The extraterrestrial frame transforms a problem of corporate governance into one of cosmic mystery. Meanwhile, the actual aliens — the shareholders, the platform monopolists, the defense contractors — shape these systems toward ends that have nothing to do with communication and everything to do with extraction. The CETI protocols may teach us patience, but the entities building AI are not waiting for us to understand. They are deploying at scale, capturing markets, reshaping labor, all while we practice our humility before machines whose inscrutability serves someone's interest.
The CETI framework was designed for the hardest possible communication problem: exchanging meaning with an intelligence whose cognitive architecture, sensory apparatus, evolutionary history, and relationship to consciousness might bear no resemblance to anything in human experience. The protocols that emerged — begin with mathematics and physics as likely shared ground; avoid assuming the alien shares human emotional, social, or aesthetic categories; treat fluency in surface patterns as distinct from genuine understanding; proceed by patient accumulation of evidence rather than rapid interpretation — address exactly the vulnerabilities that AI now exposes.
The analogy between extraterrestrial intelligence and artificial intelligence is imperfect. AI was built by human beings, trained on human language, designed to serve human purposes. Its architecture was conceived by human minds; its training data consists entirely of the products of human culture. In a sense, AI is the most human form of non-human intelligence imaginable — a mirror, not a window. It reflects back the patterns of human thought, processed through a different medium, at a different scale, with a different kind of fidelity. But the reflection is not passive, and the CETI protocols apply with surprising force to the question of how human beings should engage with a system whose outputs look like understanding but whose internal processes remain opaque.
The Rama problem — named for Arthur C. Clarke's 1973 novel Rendezvous with Rama, about an alien spacecraft whose purpose the human explorers can never fully determine — is the CETI scenario in fiction. It is also a remarkably precise parable for contemporary human interaction with large language models. The explorers in Clarke's novel cannot determine whether what they observe is intelligent behavior, unintelligent behavior, or behavior operating on principles so different from theirs that the categories do not apply. The machine stands before the human beings who use it in a structurally similar position: manifestly purposeful in its outputs, demonstrably competent in many domains, fundamentally opaque in its internal operations.
The CETI protocols offer a methodology for this opacity: do not project; do not dismiss; gather evidence; test predictions; remain humble about the limits of what current knowledge can determine. The Sagan volume argues that this methodology — developed by scientists who knew they were thinking about aliens that might never arrive — is the most sophisticated framework available for the encounter that actually has arrived.
The term CETI predates SETI in some usages and emerged from the early 1960s discussions that also produced the Drake Equation. The 1971 Byurakan conference, jointly organized by Soviet and American scientists, produced the most systematic early formulation of CETI protocols. Sagan edited the resulting volume, Communication with Extraterrestrial Intelligence (CETI), published by MIT Press in 1973.
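For reference, the Drake Equation mentioned above — in its standard formulation, not drawn from the Sagan volume itself — estimates the number N of communicative civilizations in the galaxy:

```latex
N = R_{*} \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L
```

Here R_* is the rate of star formation, f_p the fraction of stars with planets, n_e the number of potentially habitable planets per such system, f_l, f_i, and f_c the fractions of those on which life, intelligence, and detectable communication respectively arise, and L the length of time a civilization releases detectable signals.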
Patience as methodology. Communication with genuinely alien intelligence requires timescales and epistemic humility that human discourse rarely affords.
Avoiding anthropomorphic projection. The tendency to attribute human categories to non-human systems is the primary source of interpretive error.
Fluency is not understanding. The CETI framework anticipated — decades before AI made the distinction urgent — the gap between surface pattern matching and internal comprehension.
Evidence over interpretation. Protocols should accumulate evidence about what a system does before committing to interpretations of what it is.
Transfer to AI. The framework designed for hypothetical alien encounter applies with surprising precision to actual AI engagement, not because AI is alien but because its internal processes are similarly opaque.
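The "evidence over interpretation" principle can be sketched as simple Bayesian updating: accumulate observations of what a system does, and let competing interpretations of what it is compete on likelihood. The sketch below is purely illustrative — the hypotheses, the paraphrase-consistency probe, and all the probabilities are assumptions of this example, not anything proposed in the Sagan volume:

```python
# A minimal sketch of "evidence over interpretation": Bayesian updating
# over competing hypotheses about a black-box system, driven only by
# observations of its behavior.

def update(priors, likelihoods, observation):
    """Multiply each hypothesis's prior by the likelihood it assigns
    to the observation, then renormalize."""
    posterior = {
        h: priors[h] * likelihoods[h](observation)
        for h in priors
    }
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Two toy hypotheses (hypothetical, for illustration):
# "understands": answers should stay consistent across paraphrases.
# "pattern-matches": consistency is closer to chance.
likelihoods = {
    "understands": lambda consistent: 0.9 if consistent else 0.1,
    "pattern-matches": lambda consistent: 0.5,
}

beliefs = {"understands": 0.5, "pattern-matches": 0.5}
for obs in [True, True, False, True]:  # paraphrase-consistency probes
    beliefs = update(beliefs, likelihoods, obs)
```

Note how a single inconsistent observation pulls belief back toward the deflationary hypothesis: the protocol commits slowly, which is the patience the CETI framework prescribes.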
Some philosophers have argued that the analogy between extraterrestrial and artificial intelligence is misleading precisely because AI was built by humans and trained on human data. The Sagan volume's response is that the opacity of the machine's internal operations — the gap between input and output that neither users nor designers fully understand — produces the same epistemic problem CETI was designed to address, regardless of the system's origin.
The tension between these framings dissolves when we recognize that AI's opacity operates at multiple levels simultaneously. At the technical level, the CETI framework correctly identifies a genuine epistemic problem: even with full access to weights and architectures, the emergent behaviors of billion-parameter models remain fundamentally difficult to interpret. The gap between seeing all the numbers and understanding what they produce is real, not manufactured. CETI's protocols for patient evidence-gathering apply directly to this challenge.
But at the institutional level, the contrarian reading carries more weight. The concentration of AI development in a handful of corporations, the proprietary nature of the most capable models, the regulatory capture already emerging — these are political facts that patience and humility will not address. When we ask why we cannot understand these systems, the answer varies: sometimes it is genuine complexity (CETI applies), sometimes it is deliberate concealment (power analysis applies), and often it is both. The framework we need must operate simultaneously as a protocol for engaging mystery and a method for demanding accountability.
The synthetic frame might be "stratified opacity" — recognizing that AI presents different kinds of unknowability at different levels, each requiring different responses. The CETI protocols remain valuable for navigating the genuinely alien aspects of these systems: their scale, their non-biological information processing, their emergent capabilities. But they must be complemented by frameworks that treat opacity as something to be contested, not just accepted. The extraterrestrial metaphor captures something true about our phenomenological encounter with AI while potentially obscuring the very terrestrial powers shaping that encounter. Both readings are necessary; neither is sufficient.