Question engineering is the discipline of formulating a question such that a capable answering system produces a useful answer. When answers are expensive, the bottleneck is producing them. When answers are cheap (as in the LLM era), the bottleneck is formulating questions that yield answers worth having. Isaac Asimov's Multivac stories are among the earliest fictional treatments of the pattern; contemporary prompt engineering is the applied discipline.
This is probably the single most undervalued practical skill of the AI era. The person who knows how to ask gets useful outputs from a language model; the person who does not gets generic ones. The gap between the two is widening, and it maps onto existing educational and professional divides in ways that are only beginning to be studied.
The Multivac stories — "Franchise" (1955), "The Last Question" (1956), "The Machine That Won the War" (1961) — all turn on the same insight: a sufficiently capable answering machine is bottlenecked by the human who asks it. "The Machine That Won the War" goes further: the human users are not even aware they are the bottleneck, and they attribute to the machine decisions that they themselves made. The Asimovian warning is that answer-rich systems do not reduce the importance of human judgment; they concentrate it in the moment of question-asking.
Contemporary prompt engineering has developed a small vocabulary: chain-of-thought prompting (ask the model to reason step by step), few-shot prompting (give examples of the desired output format), role prompting (ask the model to adopt a persona), context loading (provide documents or background), tool use (let the model call external functions), and iterative refinement (one prompt to plan, a second to execute, a third to review). The skills are real and teachable; they do not require AI expertise, but they do require patience and care.
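Three of the named patterns can be sketched as plain string templates. Everything below — the persona, the example pairs, the tasks — is invented for illustration; the resulting string is simply what a hypothetical LLM client would receive:

```python
# Minimal sketches of role, few-shot, and chain-of-thought prompting
# as string-building functions. All wording is illustrative.

def role_prompt(task: str) -> str:
    """Role prompting: ask the model to adopt a persona."""
    return f"You are a senior copy editor. {task}"

def few_shot_prompt(task: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot prompting: show input/output pairs, then the real input."""
    shots = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
    return f"{shots}\nInput: {task}\nOutput:"

def chain_of_thought(task: str) -> str:
    """Chain-of-thought prompting: request step-by-step reasoning."""
    return f"{task}\nThink step by step, then state the final answer."

# Patterns compose: a few-shot prompt wrapped in a reasoning request.
prompt = chain_of_thought(
    few_shot_prompt("Summarize: the meeting ran long.",
                    [("Summarize: sales rose 4%.", "Sales up 4%.")]))
print(prompt)
```

The composition at the end illustrates why practitioners treat these as building blocks rather than alternatives: they stack within a single prompt.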
The deeper skill, the one that most practitioners struggle to articulate, is knowing what questions to ask in the first place. Any student can be taught to refine a prompt. The more valuable skill — knowing that a particular problem has an AI-answerable form and what that form is — is the one that separates effective AI-era workers from ineffective ones. This skill is not new; it is the classical skill of the good researcher, consultant, or teacher. AI systems have made it unusually lucrative.
The insight that the quality of the question determines the quality of the answer is older than computing. Claude Shannon famously set meaning aside as irrelevant to the engineering problem of communication; it was Warren Weaver, introducing Shannon's theory to general readers, who stressed that what a message accomplishes depends as much on the receiver as on the signal. The modern discipline of prompt engineering emerged in 2020–2022 as practitioners accumulated folklore about GPT-3 and later models; the first systematic treatments appeared in 2022–2024 and remain a moving target.
Specificity without over-constraint. A good question is specific enough to admit only useful answers and broad enough that the best answer is not the one you already had in mind.
Context as half the prompt. Modern AI systems respond to what the prompt lets them infer about the requester's situation. Loading relevant context (documents, examples, role) often matters more than wording the direct question cleverly.
Iteration, not one-shot. The question-answer-refine loop, not the single perfect prompt. Most effective uses of AI involve multiple exchanges, not one omniscient query.
Chain-of-thought prompting. Explicitly asking the model to reason step by step often produces dramatically better results than asking for the final answer directly.
The Multivac lesson. Asimov's stories repeatedly dramatize that a powerful answering system makes the quality of the asker more important, not less.
Prompts as documents. Serious AI workflows increasingly treat prompts as artifacts worth versioning, testing, and evolving — closer to code than to conversation.
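The "prompts as documents" idea can be made concrete with a small sketch: a versioned prompt template plus a regression test that the template still renders completely. The names here (PROMPT_V2, render, the wording of the template) are all hypothetical:

```python
# Sketch of a prompt treated as a tested, versioned artifact rather
# than an ad-hoc string. The template and names are illustrative.
import string

PROMPT_V2 = string.Template(
    "You are reviewing a $doc_type.\n"
    "Context:\n$context\n"
    "Task: list the three most serious problems, most severe first."
)

def render(doc_type: str, context: str) -> str:
    # substitute() raises KeyError if a placeholder is left unfilled,
    # which is exactly the failure a regression test should catch.
    return PROMPT_V2.substitute(doc_type=doc_type, context=context)

def test_template_renders_all_fields():
    out = render("pull request", "diff goes here")
    assert "pull request" in out and "diff goes here" in out
    assert "$" not in out  # no placeholder survived substitution

test_template_renders_all_fields()
```

Checking a prompt into version control and running a test like this on every edit is the "closer to code than to conversation" posture the principle describes: prompt changes become reviewable diffs, and breakage is caught before a model ever sees the string.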