For the entire history of computing, using a computer meant translation. You had an idea, and you compressed it into a language the machine could parse. Each decade the translation got easier, but it never disappeared. You still met the machine on its terms.
In 2025, the machine learned to meet you on yours.
Cyborg. Illustration by Edo Segal. Ink, 1985. Part of a book of poems I authored while working on Expert Systems
Previous interface transitions moved the human closer to the machine. The graphical user interface, or GUI, made the machine's operations visible. The touchscreen made them tactile. But in every case, the human was the one doing the adapting – learning metaphors, thinking in shapes the machine determined, reformulating intentions into structures the software could process.
The command line was a foreign language you studied for years. The GUI was a simplified dialect. The Internet changed our relationship with information and media, but the direction of adaptation never reversed: you got better at the machine's way of thinking. We adapted to the tools, never the other way around.
The large language model reversed that relationship entirely.
For the first time, you could describe what you wanted in the same language you'd use with a brilliant colleague. Not simplified language. Not structured language. Your language, with all its mess and half-finished sentences and implications and the thing you meant but didn't quite say. The machine understood well enough to respond with something useful, something that demonstrated not just comprehension of your words but interpretation of your intent.
Speed is quantitative, and quantitative improvements are easy to celebrate and easy to dismiss. This was qualitative: The difference between sending a text and having a conversation. You do not use a text and a conversation for the same purposes, and the things you can accomplish through conversation – the exploration, the impromptu back-and-forth, the gradual refinement of a half-formed thought into something precise – are categorically unavailable to someone limited to text, no matter how fast their thumbs move.
I had been building with AI tools for years by the time this breakthrough emerged. I knew what they could do and where they broke. I was not naive. And yet, in those weeks around the turn of the year, something changed that I was not prepared for. The tools crossed a line. Speed was part of it, and accuracy had improved, too, but the line crossed in December 2025 had to do with the quality of the conversation itself. I would describe a problem to Claude straight from the messiness of my mind, and Claude would respond not with a literal translation of my words, but with an interpretation. A reading. An inference about what I was actually trying to do, informed by everything I had said before and everything it had been trained on.
I felt met. Not by a person. Not by a consciousness. But by an intelligence that could hold my intention in one hand and the technical implications in the other and show me a path between them I had not seen.
The interface did not merely improve. It was a step change.
There is a moment I keep returning to. We were thirty days from CES, and Napster Station, an AI-powered concierge kiosk built to serve customers in high-volume environments, did not exist outside of my brain yet. No software, no hardware, no industrial design, no optics, no audio routing, and no conversational AI model that would let the device hold the live conversations I envisioned with hundreds of strangers on a show floor. Thirty days later, it was doing just that, delivering unique AI-generated music tracks to people across a wide variety of requests, contexts, and languages.
Under normal circumstances, a product like this takes quarters. Multiple teams, sequential handoffs, spec documents that lose fidelity at every stage. The breadth AI provides, combined with the depth of expertise and dedication on our team, and the interdisciplinary skill to tie it all together, made it real.
During those thirty days, I was building a component for Station that needed to detect the user's face and recognize when they were speaking. I knew what I wanted, but I could not have written the implementation myself – not in the time available, maybe not at all. In the old world, I would have written a spec, handed it to an engineer, waited for questions, answered the questions, reviewed the result, requested changes. The cycle would have taken weeks or months, and the spec itself would have been a translation exercise: compressing what I could see in my head into a format that a developer could execute. Half of what I meant would have been lost in that compression. The other half would have arrived distorted by the gap between my vocabulary and theirs.
With Claude, I described the problem in plain English. My plain English. I said what the thing needed to do, what the user would experience, what failure would look like. Claude came back with an implementation that wasn't perfect but was close enough that fifteen minutes of conversation got it the rest of the way. The whole interaction took less than an hour. What struck me was not the speed, though the speed was absurd. It was that I never had to leave my own way of thinking. I never had to translate. I never had to compress what I meant into a format that would survive the journey to someone else's understanding.
The most time-consuming part of the journey just disappeared.
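To make the shape of that component concrete: here is a minimal, hypothetical sketch – not Station's actual code – of a "user is speaking" detector. It assumes some upstream face detector supplies a face_present flag each frame, and it combines that flag with a simple RMS-energy voice-activity check, adding hysteresis so brief pauses do not flicker the state.

```python
import math


def rms(frame):
    """Root-mean-square energy of one audio frame (float samples in [-1, 1])."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))


class SpeakingDetector:
    """Combines face presence with audio energy.

    Hysteresis (hang_frames) keeps the 'speaking' state on through short
    pauses, so natural gaps between words don't toggle the UI.
    """

    def __init__(self, threshold=0.02, hang_frames=5):
        self.threshold = threshold      # RMS energy above which a frame counts as loud
        self.hang_frames = hang_frames  # quiet frames tolerated before state drops
        self._silent = 0
        self.speaking = False

    def update(self, face_present, audio_frame):
        """Feed one video-frame-aligned audio chunk; returns current state."""
        loud = rms(audio_frame) > self.threshold
        if face_present and loud:
            self.speaking = True
            self._silent = 0
        elif self.speaking:
            self._silent += 1
            if self._silent > self.hang_frames or not face_present:
                self.speaking = False
        return self.speaking
```

In a real deployment the threshold and hang time would be tuned against microphone data from the actual environment, and face_present would come from a vision model rather than a boolean handed in by the caller.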
The revolution we’re witnessing is not just about what the machine can do. It is about what the machine stops asking you to do in order to use it. Every previous tool required you to reshape your thinking into a form the tool could accept. Now, the cognitive overhead of translation, the tax that every interface has levied on every user since the first command line, has been abolished. And when you abolish a tax that has been in place for fifty years, you discover that the economy it was suppressing is larger than anyone imagined.
Benjamin Lee Whorf proposed that the language you speak shapes the thoughts you can think. The strong version of the hypothesis has been discredited, but the weak version – that the categories available in your language make certain concepts easier to think and others harder – has mostly held up under decades of research. Russian speakers, who have separate words for light blue and dark blue, distinguish between those shades faster than English speakers do. The tools of thought shape the thoughts that can be had.
If this is true of natural languages, it is emphatically true of programming languages.
If you coded in C, you thought about memory allocation, because C forced you to. If you used a spreadsheet, you thought in rows and columns. Each tool was a cognitive environment, and every cognitive environment has walls. The walls were useful: They gave structure, enforced discipline, and made certain levels of rigor unavoidable.
But they also constrained what you could attempt. Part of your cognitive bandwidth was always consumed by translation overhead: thinking about how to express your idea in the tool's language rather than thinking about the idea itself. Just a few months ago, when the interface became natural language, the walls dissolved.
You were no longer thinking code-shaped thoughts or spreadsheet-shaped thoughts. You were thinking human-shaped thoughts.
The implications took months to become visible, because most people initially used the new tool to do old things faster. Write boilerplate. Debug existing code. Generate documentation. This is what happens with every interface revolution: The first films were photographed stage plays. The first websites were digitized brochures. The first radio broadcasts were people reading newspapers aloud.
Even when the medium changes, the imagination takes time to catch up. The river of possibilities is clogged up by old notions about input and output.
The real shift came when people stopped trying to do old things faster and started attempting things they would never have tried before. I watched it happen on my own team. An engineer who had spent years working exclusively on backend systems started building user interfaces, not because she had learned frontend development, but because the conversation with Claude let her describe what the interface should feel like in human terms, and the tool handled the translation into code she'd never written. The boundary between what she could imagine and what she could build had moved so far that her job description changed in a week.
She was not doing her old work faster. She was doing different work. Work she had always wanted to do but could never reach because the implementation consumed her bandwidth.
The actual problem, the one that had drawn her to engineering in the first place, had been buried under layers of translation for her entire career. The tool did not make her faster. It made her free. Free to work on the thing that mattered, rather than the infrastructure required to reach it.
What had tied her hands was not a lack of intelligence. It was the translation barrier: the gap between the systems-level, procedural way she thought and what the end user would want to see.
I watched this happen with a designer on the Napster team, too, who had spent his career in visual interfaces. He had never touched backend code. He thought in shapes, in colors, in the feel of a user interaction. Within two weeks of working with Claude, he was building complete features – not just designing them, but implementing them, end to end.
In both cases, AI was expanding the space of what a single person could attempt, because the translation cost that had previously gated ambition had collapsed. It was like removing scaffolding from a building to reveal the architecture beneath. The scaffolding had been necessary to build it. But it was never the building.
I heard the shift toward “different work” described in a hundred different vocabularies by every builder I’ve spoken to in recent months. In many cases, it was strategic work. Judgment-based work. The work of deciding what should exist in the world.
That work had always been there; it was just buried under layers of translation and implementation. Now, with the bones dug up and laid out in front of us, we see work that’s both more interesting than we could’ve imagined and more demanding than we’d anticipated.
Some people run from that mixed reality. Some embrace it. Some wait and hope what’s next will make the choice a little clearer.
Fight or flight. Excitement and fear. Terror and awe. Each pair exists here, side by side, at the new frontier of human capability.
The set of configurations reachable in one step from the current state — evolution explores this space incrementally, never leaping, constrained by what already exists.
Gibson's load-bearing concept: the possibilities for action an environment offers a particular organism — real, relational, value-laden, and present whether or not anyone perceives them.
The deliberate construction of AI-assisted practice environments that reverse the default — using the tool to generate difficulty rather than eliminate it, to widen the gap between attempt and…
The flow state produced specifically by sustained AI collaboration — maintained by the interface rather than by the individual, with neurological consequences that traditional flow research did not…
The class of software produced when a developer describes intent in natural language and a language model returns working implementation across the full technology stack — the most powerful…
Engelbart's foundational distinction: automation removes the human from the loop, augmentation redesigns the loop so the human's participation becomes more powerful. The most consequential design…
The difference between two responses to the same level of sympathetic arousal — determined not by the magnitude of arousal but by whether vagal engagement accompanies it, shaped by social,…
The compound emotional state of witnessing something magnificent that is also destroying something beloved — accommodation that succeeds cognitively while extracting irreducible emotional cost.
The 2023 thought experiment by Fung and Lessig: an AI system designed to maximize electoral victory through personalized, adaptive persuasion — hypothetical in form, technically trivial in practice.
Engelbart's assumption that the human and the tool would evolve together at approximately balanced rates — and the structural diagnosis of what happens when the tool accelerates beyond the human's…
Juma's claim that technologies and institutions shape each other simultaneously — not a linear sequence in which society catches up to technology, but a mutual influence that determines what the…
Hutchins's framework for the total web of mutual dependencies among cognitive elements — the insistence that cognition cannot be understood by examining agents in isolation from the environments that…
Egan's technical term for the specific mental capacities — narrative, metaphor, binary opposition, the sense of wonder, systematic generalization, reflexive examination — that each kind of…
The compression of multi-actor translation chains — designer → spec → developer → code → product — into AI-mediated exchanges, removing signal loss and eliminating the boundary encounters where…
Egan's irreducible core of education — the specific quality of interaction in which an adult's more sophisticated understanding meets a child's developing understanding to produce cognitive…
The contextual work of rendering insight from one community intelligible to another — the irreducibly human bridging function that AI does not perform.
The Orange Pill claim — that AI tools lower the floor of who can build — submitted to Sen's framework, which asks the harder question: does formal access convert into substantive capability expansion?
The expansion of who can produce software via AI tools — read through Dijkstra's framework not as empowerment but as the distribution of a new and particularly dangerous form of ignorance: the…
Alan Kay's term for the trajectory by which the personal computer — designed as a medium for active creative engagement — became a medium for passive consumption, and the pattern the AI moment…
Not conversation but the encounter between conscious subjects committed to joint investigation of shared reality — the practice of freedom itself.
The critical reframing of the AI productivity gain as a widening of the domains each person can address — not a volumetric increase in output — with consequences that invert the logic of headcount…
The AI-era reversal by which guilt flips its direction — from 'I should stop working' to 'I should stop being present' — dismantling the internal mechanism that once preserved the domestic boundary.
March's foundational 1991 distinction between the refinement of existing capabilities and the search for new ones — two activities that compete for the same finite resources, with the competition…
The two adaptive responses to acute threat — commit to engagement or retreat to safer ground — that the AI transition reveals as both inadequate to a disruption that does not resolve into a finite…
Licklider's category for the cognitive work that happens before a problem has been specified — the messy, associative, exploratory process of figuring out what the question actually is.
Eno's term — coined in 1995 — for music produced by systems rather than composed note by note, in which the creator designs conditions for emergence rather than determining the output, and which…
Buber's category for the mode of conversation in which each party is genuinely open to the other, responsive to what the other brings, and changed by the encounter — distinguished from technical…
Engelbart's formalization of the augmented system: Humans using Language, Artifacts, Methodology, and Training. Every component shapes every other, and improving one in isolation as likely degrades…
The integration of human consciousness and artificial intelligence into a cognitive partnership that produces emergent capabilities neither system possesses alone — the contemporary fulfillment of…
Segal's term for the gap between what a person can conceive and what they can produce — which AI collapsed to approximately the length of a conversation, and which Gopnik's framework reveals to be an…
The process by which an artificial tool becomes so naturalized it is experienced as self rather than other—writing as thought, the car as mobility.
Deeply ingrained assumptions shaping perception and action—Senge's second discipline, the fishbowl water that must be surfaced before organizations can navigate change.
The emerging class of AI systems that accept sketches, gestures, and spatial manipulation alongside natural language — the logical continuation of the interface revolution Tversky's framework…
The information-theoretic analysis of natural language as the highest-bandwidth encoding system humans possess — near-optimal for propositional content, lossy below the entropy rate for embodied,…
The 2020s interface paradigm in which the user describes desired outcomes in natural language and receives executable code — the ultimate abstraction layer in Dijkstra's sense, concealing not merely…
The physicist's concept for discontinuous system reorganization — water to ice, coordination to judgment — that the Goldratt simulation uses to describe the AI moment's character.
Skenazy's operational alternative to both prohibition and permissiveness — providing structure without providing control, granting access with adult-supported reflection rather than adult…
Wood, Bruner, and Ross's 1976 concept for the responsive support that enables a learner to accomplish what exceeds independent capability — structured so that every function exists to be withdrawn.
The critical design distinction — borrowed from developmental psychology and pressed into service for AI — between tools that support cognitive effort and tools that eliminate it, determining whether…
Morozov's term for the reflexive conversion of every human experience into a technical problem awaiting its fix — the ideology that depoliticizes inherently political questions by recasting them as…
The first and most fundamental of the directional affordances: the possibility, unprecedented in computing history, of describing desired outcomes in natural language and receiving implementations —…
Austin and Searle's framework that language performs actions—promises, requests, declarations—rather than merely transmitting information, revealing what AI's linguistic competence lacks.
The capacity for evaluative judgment under conditions of abundance — distinguishing the excellent from the adequate when competent creative output is cheap, fast, and universally accessible.
Acemoglu's proposal to rebalance the tax code — which currently subsidizes capital and taxes labor — so that firms choosing between automating and hiring face prices that reflect social rather than…
Mintzberg's insistence that management is not a science or a profession but a craft — tacit knowledge built through practice, irreducible to rules, and therefore unreplicable by any machine that…
The role whose contribution—aesthetic vision, taste-driven specification, curation of machine outputs—becomes the highest-leverage input when AI commoditizes execution.
The hybrid writer constituted by the entanglement between a human and an AI — an entity whose output cannot be cleanly decomposed into human and machine contributions, and whose existence dissolves…
Kelly's name for the observable pattern that the technium produces more options, more capabilities, more connections, and more new problems across every period for which records exist — the…
The structural feature of computing from 1960 to 2024 that every interface innovation — command line, GUI, touchscreen, voice — narrowed but could not eliminate: the cognitive cost of translating…
Daniel Dennett's strategy of treating a system as if it had beliefs, desires, and rationality — a pragmatic alternative to metaphysical debates about what "really" has a mind.
Tegmark's diagnosis of the winter-2025 phase transition as primarily an interface revolution—the machine learning human language—rather than a capability revolution, which explains the discontinuous…
Brooks's meditation, in the opening of The Mythical Man-Month, on the specific satisfactions of building software — joys that AI has intensified without changing in kind, and whose intensification…
Argyris's model of the rapid, invisible inferential steps by which practitioners move from observable data to confident conclusions — and the diagnostic instrument for why fluent AI output produces…
Kay's insistence that the purpose of a computing medium is to transform the user's thinking, not to maximize production — and his charge that the AI industry has confused the two.
McLuhan's 1964 axiom that the form of a medium — not its content — produces its deepest effects, restructuring perception and social organization beneath the level of awareness.
The unmeasured cognitive cost of evaluating AI-generated outputs across multiple projects — a tax paid in degraded judgment quality invisible to productivity metrics.
McLuhan's diagnostic for the structural tendency to understand new media through the categories of the media they replace — driving into the future while looking backward.
Bush's memex as intimate collaborator in thinking—not a servant executing commands but a partner holding ideas in relationship, surfacing connections, responding to half-formed inquiry. LLMs realize…
The tax every previous computer interface levied on every user — the cognitive overhead of converting human intention into machine-acceptable form. The tax natural language interfaces have abolished.
The operation by which one actant speaks for, stands in for, or represents another — always transformation, never neutral transmission. The central mechanism by which networks are built and the…
The Italian proverb traduttore, traditore — translator, traitor — encoded as Hofstadter's diagnostic for every act of representational conversion, including the conversion of human intention into…
Benjamin Lee Whorf's proposal that the language a person speaks shapes the thoughts that person can think — whose weaker empirically supported version underwrites Murray's claim that expression is…
Vygotsky's most cited and most widely misused concept — the dynamic, relational space between what a learner can accomplish independently and what becomes possible with calibrated guidance, and the…
Anthropic's command-line coding agent — the specific product through which the coordination constraint shattered in the winter of 2025, reaching $2.5B run-rate revenue within months.
Neural networks trained on internet-scale text that have, since 2020, demonstrated emergent linguistic and reasoning capabilities — in Whitehead's vocabulary, computational systems whose prehensions…
Licklider's 1960 paper in the IRE Transactions on Human Factors in Electronics — ten pages that specified, with uncanny precision, the architecture of the partnership that would not arrive for…
Lakoff and Johnson's 1980 book that overturned the view of metaphor as ornament and established it as a structural feature of thought — one of the most cited works in the cognitive sciences.
The AI-powered conversational concierge kiosk that Edo Segal's team at Napster built in thirty days for CES 2026 — the Orange Pill's central case of AI-accelerated specific-purpose design, read…
Grace Hopper's 1952 program that translated human-readable mathematical notation into binary machine code — the first compiler, the founding demonstration that the machine could meet the human…
Dijkstra's 1972 Turing Award lecture — the fullest statement of his conception of programming as a branch of applied mathematics requiring the specific intellectual virtue of knowing the limits of…
Winograd and Flores's 1986 manifesto arguing that contrary to current belief, one cannot construct machines that exhibit intelligent behavior—a Heideggerian bomb dropped into the AI establishment.
Dreyfus's 1972 landmark, revised in 1992, arguing that human intelligence is fundamentally embodied, situated, and rooted in practical engagement with the world—and therefore cannot be replicated by…