For the entire history of computing before 2022, using a computer meant translation. A user had an idea, and she compressed it into a language the machine could parse — a command, a menu selection, a typed query structured to match the machine's expected formats. Each decade the translation got easier, but it never disappeared. The command line was a foreign language studied for years; the GUI was a simplified dialect of it; the touchscreen, simpler still. In every case, the human did the adapting — learning the machine's metaphors, thinking in the shapes the machine permitted, reformulating intentions into structures the software could process.
The translation cost has been, throughout the history of computing, the single largest tax on what humans can accomplish with machines. It has excluded populations that could not or would not learn the machine's vocabulary. It has compressed complex intentions into simplified forms the machine could handle, losing information at every stage. And it has consumed a substantial fraction of every user's cognitive bandwidth, leaving that much less for the actual work the tools were supposed to serve.
The large language model abolishes this tax. Not reduces it. Abolishes it. A user can describe what she wants in the same language she would use with a skilled human collaborator — her language, with its mess and implications and half-finished sentences. The machine understands well enough to respond with something useful, something that demonstrates not just comprehension of her words but interpretation of her intent. The cognitive overhead of translation, which had been in place for fifty years, is gone.
The consequences are larger than they first appear. When you abolish a tax that has suppressed an activity for fifty years, you discover that the suppressed activity is larger than anyone imagined. The range of what humans can now attempt with machines expands not merely in proportion to the tools' increased capability, but in proportion to that capability multiplied by the removal of the translation tax. This is why the orange pill threshold feels like a qualitative shift rather than a quantitative one: it is not just that the tools are better, but that the tax is gone.
In Smith's framing, the translation cost was a form of friction that the division of labour had partly addressed by creating specialized translators — programmers, technical writers, IT support staff, trainers. The abolition of the translation cost collapses the market for these intermediary roles. The engineer no longer needs the documentation writer to translate user needs into technical requirements, because the AI can hold the user's language and the machine's operations in view simultaneously. These intermediary specialists now face the same challenge every intermediated profession faces when the intermediation becomes unnecessary.
The concept appears throughout The Orange Pill, with the fullest development in Chapter 3 (pp. 38-46), "When the Machine Learned Our Language."
The underlying observation — that interface design has historically required users to adapt to machines — is a staple of human-computer interaction research, associated with researchers including Donald Norman and Ben Shneiderman.
Fifty-year tax. Every computer interface before 2022 required users to translate intention into machine-acceptable form, at substantial cognitive cost.
Abolition, not reduction. Natural language interfaces do not reduce the translation cost; they eliminate it for a significant class of interactions.
Suppressed activity revealed. Removing a long-standing tax reveals that the activity it suppressed was larger than anyone imagined — the explanation for the AI transition's speed.
Intermediary collapse. Specialized translator roles — documentation, technical writing, prompt engineering — face the market contraction every intermediated profession faces when intermediation becomes unnecessary.