For the entire history of computing, using a computer meant translation. You had an idea, and you compressed it into a language the machine could parse. Each decade the translation got easier, but it never disappeared. You still met the machine on its terms.
In 2025, the machine learned to meet you on yours.
Cyborg. Illustration by Edo Segal. Ink, 1985. Part of a book of poems I authored while working on Expert Systems
Previous interface transitions moved the human closer to the machine. The graphical user interface, or GUI, made the machine's operations visible. The touchscreen made them tactile. But in every case, the human was the one doing the adapting – learning metaphors, thinking in shapes the machine determined, reformulating intentions into structures the software could process.
The command line was a foreign language you studied for years. The GUI was a simplified dialect of the same language. The Internet changed our relationship with information and media, but the pattern held: you got better at the machine's way of thinking. We adapted to the tools.
The large language model reversed that relationship entirely.
For the first time, you could describe what you wanted in the same language you'd use with a brilliant colleague. Not simplified language. Not structured language. Your language, with all its mess and half-finished sentences and implications and the thing you meant but didn't quite say. The machine understood well enough to respond with something useful, something that demonstrated not just comprehension of your words but interpretation of your intent.
Speed is quantitative, and quantitative improvements are easy to celebrate and easy to dismiss. This was qualitative: The difference between sending a text and having a conversation. You do not use texts and conversations for the same purposes, and the things you can accomplish through conversation – the exploration, the impromptu back-and-forth, the gradual refinement of a half-formed thought into something precise – are categorically unavailable to someone limited to texting, no matter how fast their thumbs move.
I had been building with AI tools for years by the time this breakthrough emerged. I knew what they could do and where they broke. I was not naive. And yet, in those weeks around the turn of the year, something changed that I was not prepared for. The tools crossed a line. Speed was part of it, and accuracy had improved, too, but the line crossed in December 2025 had to do with the quality of the conversation itself. I would describe a problem to Claude straight from the messiness of my mind, and Claude would respond not with a literal translation of my words, but with an interpretation. A reading. An inference about what I was actually trying to do, informed by everything I had said before and everything it had been trained on.
I felt met. Not by a person. Not by a consciousness. But by an intelligence that could hold my intention in one hand and the technical implications in the other and show me a path between them I had not seen.
This was not an incremental interface improvement. It was a step change.
There is a moment I keep returning to. We were thirty days from CES, and Napster Station, an AI-powered concierge kiosk built to serve customers in high-volume environments, did not yet exist outside of my brain. No software, no hardware, no industrial design, no optics, no audio routing, and no conversational AI model that could hold the live conversations with hundreds of strangers on a show floor that I envisioned. Thirty days later, it was doing just that, delivering unique AI-generated music tracks to people across a wide variety of requests, contexts, and languages.
Under normal circumstances, a product like this takes quarters. Multiple teams, sequential handoffs, spec documents that lose fidelity at every stage. The breadth AI provides, combined with the depth of expertise and dedication on our team, and the interdisciplinary skill to tie it all together, made it real.
During those 30 days, I was building a component for Station that needed to detect the user's face and recognize when they were speaking. I knew what I wanted, but I could not have written the implementation myself – not in the time available, maybe not at all. In the old world, I would have written a spec, handed it to an engineer, waited for questions, answered the questions, reviewed the result, requested changes. The cycle would have taken weeks or months, and the spec itself would have been a translation exercise: compressing what I could see in my head into a format that a developer could execute. Half of what I meant would have been lost in that compression. The other half would have arrived distorted by the gap between my vocabulary and theirs.
With Claude, I described the problem in plain English. My plain English. I said what the thing needed to do, what the user would experience, what failure would look like. Claude came back with an implementation that wasn't perfect but was close enough that fifteen minutes of conversation got it the rest of the way. The whole interaction took less than an hour. What struck me was not the speed, though the speed was absurd. It was that I never had to leave my own way of thinking. I never had to translate. I never had to compress what I meant into a format that would survive the journey to someone else's understanding.
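To make the anecdote concrete: the "is the user speaking?" half of a component like that can be sketched with nothing more than per-frame audio energy and a little hysteresis. This is an illustrative sketch, not Station's actual implementation – the class name, threshold, and hangover values here are invented for the example:

```python
# Minimal energy-based voice activity detection, standard library only.
# A real kiosk would use a trained VAD model; this shows the shape of the idea.

import math


def frame_rms(samples):
    """Root-mean-square energy of one audio frame (a list of PCM sample ints)."""
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))


class SpeechGate:
    """Tracks whether the user is currently speaking.

    Uses an energy threshold plus a 'hangover' count so that brief pauses
    between words do not end a speaking segment prematurely.
    """

    def __init__(self, threshold=500.0, hangover=5):
        self.threshold = threshold      # RMS level treated as speech (illustrative)
        self.hangover = hangover        # quiet frames tolerated before closing the gate
        self._quiet_frames = 0
        self.speaking = False

    def update(self, samples):
        """Feed one audio frame; returns True while the user is judged to be speaking."""
        if frame_rms(samples) >= self.threshold:
            self._quiet_frames = 0
            self.speaking = True
        else:
            self._quiet_frames += 1
            if self._quiet_frames > self.hangover:
                self.speaking = False
        return self.speaking
```

Fifteen minutes of conversation with Claude is roughly what it takes to turn a sketch like this into the version that survives contact with a noisy show floor: calibrating the threshold against ambient levels, swapping the energy heuristic for a proper model, wiring it to the face detector.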
The most time-consuming part of the journey just disappeared.
The revolution we’re witnessing is not just about what the machine can do. It is about what the machine stops asking you to do in order to use it. Every previous tool required you to reshape your thinking into a form the tool could accept. Now, the cognitive overhead of translation, the tax that every interface has levied on every user since the first command line, has been abolished. And when you abolish a tax that has been in place for fifty years, you discover that the economy it was suppressing is larger than anyone imagined.
Benjamin Lee Whorf proposed that the language you speak shapes the thoughts you can think. The strong version of the hypothesis has been discredited, but the weak version – that the categories available in your language make certain concepts easier to think and others harder – has mostly held up under decades of research. Russian speakers, who have separate words for light blue and dark blue, distinguish between those shades faster than English speakers do. The tools of thought shape the thoughts that can be had.
If this is true of natural languages, it is emphatically true of programming languages.
If you coded in C, you thought about memory allocation, because C forced you to. If you used a spreadsheet, you thought in rows and columns. Each tool was a cognitive environment, and every cognitive environment has walls. The walls were useful: They gave structure, enforced discipline, and made certain levels of rigor unavoidable.
But they also constrained what you could attempt. Part of your cognitive bandwidth was always consumed by translation: figuring out how to express your idea in the tool's language rather than thinking about the idea itself. Just a few months ago, when the interface became natural language, the walls dissolved.
You were no longer thinking code-shaped thoughts or spreadsheet-shaped thoughts. You were thinking human-shaped thoughts.
And the machine was meeting you there.
The implications took months to become visible, because most people initially used the new tool to do old things faster. Write boilerplate. Debug existing code. Generate documentation. This is what happens with every interface revolution: The first films were photographed stage plays. The first websites were digitized brochures. The first radio broadcasts were people reading newspapers aloud.
Even when the medium changes, the imagination takes time to catch up. The river of possibilities is clogged with old notions about input and output.
The real shift came when people stopped trying to do old things faster and started attempting things they would never have tried before. I watched it happen on my own team. An engineer who had spent years working exclusively on backend systems started building user interfaces, not because she had learned frontend development, but because the conversation with Claude let her describe what the interface should feel like in human terms, and the tool handled the translation into code she'd never written. The boundary between what she could imagine and what she could build had moved so far that her job description changed in a week.
She was not doing her old work faster. She was doing different work. Work she had always wanted to do but could never reach because the implementation consumed her bandwidth.
The actual problem, the one that had drawn her to engineering in the first place, had been buried under layers of translation for her entire career. The tool did not make her faster. It made her free. Free to work on the thing that mattered, rather than the infrastructure required to reach it.
What had tied her hands was not a lack of intelligence. It was the translation barrier: the gap between the foundational, procedural way she thought and the experience the end user would want to see.
I watched this happen with a designer on the Napster team, too, who had spent his career in visual interfaces. He had never touched backend code. He thought in shapes, in colors, in the feel of a user interaction. Within two weeks of working with Claude, he was building complete features – not just designing them, but implementing them, end to end.
In both cases, AI was expanding the space of what a single person could attempt, because the translation cost that had previously gated ambition had collapsed. It was like removing scaffolding from a building to reveal the architecture beneath. The scaffolding had been necessary to build. But it was never the building.
I heard the shift toward “different work” described in a hundred different vocabularies by every builder I’ve spoken to in recent months. In many cases, it was strategic work. Judgment-based work. The work of deciding what should exist in the world.
That work had always been there; it was just buried under layers of translation and implementation. Now, with the bones dug up and laid out in front of us, we see work that’s both more interesting than we could’ve imagined and more demanding than we’d anticipated.
Some people run from that mixed reality. Some embrace it. Some wait and hope what’s next will make the choice a little more clear.
Fight or flight. Excitement and fear. Terror and awe. They all exist here, side by side, at the new frontier of human capability.