Useful imprecision is the discipline of operating with explicit, defensible, revisable estimates rather than either false precision (treating uncertain numbers as exact) or false humility (refusing to estimate at all). Damodaran's pedagogical insistence is that every valuation is wrong: the growth rate will not be what was projected, the margins will diverge from the projected trajectory, the discount rate will not capture the risks that actually materialize, and the terminal value is a guess dressed in a formula. Accepting this liberates the analyst from the pursuit of certainty and redirects effort toward the more achievable goal of being less wrong than the market and the competing analysts. The discipline requires explicit assumptions, sensitivity analysis around them, and a willingness to revise when evidence changes. It is the antidote both to the spreadsheet-as-truth fallacy and to the analysis paralysis that uncertainty produces in less disciplined practitioners.
The concept is closely tied to the narrative-to-numbers bridge. Useful imprecision is what the bridge produces when honestly executed: a specific intrinsic value estimate based on specific narrative assumptions, with the assumptions surfaced clearly enough that the analyst can revise them when reality intrudes. The opposite — false precision — is what the bridge produces when dishonestly executed: numbers that look authoritative but rest on assumptions the analyst has not stress-tested.
The discipline matters most during narrative transitions, because that is when uncertainty is greatest and the temptation toward both false precision and false humility is strongest. False precision says: "My DCF model produces an intrinsic value of $245.32 per share." False humility says: "There is too much uncertainty to value this company, so I will not bother." Useful imprecision says: "Under these explicit assumptions about ecosystem share, growth trajectory, and terminal margins, intrinsic value is approximately $200-250 per share, with the largest sensitivity to the ecosystem-share assumption; here is how I would revise the estimate if specific evidence changes."
The standard is operational, not aspirational. It produces specific actions: build the model with explicit assumptions; vary each assumption to test sensitivity; weight scenarios by probability; report the range and the dominant sensitivities; revise when evidence accumulates. The discipline is procedural enough that it can be taught, practiced, and audited.
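The procedure above can be sketched in code. This is a minimal illustrative toy, not Damodaran's model: the DCF function, the scenario probabilities, and every number in it are hypothetical, chosen only to show the mechanics of explicit assumptions, scenario weighting, and one-at-a-time sensitivity.

```python
# Toy DCF with explicit assumptions, scenario weighting, and a
# sensitivity sweep -- illustrative numbers only, not a real valuation.

def dcf_value(growth, terminal_margin, discount_rate,
              revenue=100.0, years=10, terminal_growth=0.02):
    """Discount free cash flows (margin * revenue) over an explicit
    horizon, plus a terminal value; every assumption is a parameter."""
    value = 0.0
    rev = revenue
    for t in range(1, years + 1):
        rev *= 1 + growth
        value += rev * terminal_margin / (1 + discount_rate) ** t
    terminal = rev * terminal_margin * (1 + terminal_growth) / (
        discount_rate - terminal_growth)
    return value + terminal / (1 + discount_rate) ** years

base = dict(growth=0.10, terminal_margin=0.20, discount_rate=0.09)

# Weight scenarios by probability; report a range, not a point.
scenarios = [  # (probability, assumption overrides) -- hypothetical
    (0.25, dict(base, growth=0.05)),   # bear: narrative partly fails
    (0.50, base),                      # base: narrative holds
    (0.25, dict(base, growth=0.15)),   # bull: narrative exceeds
]
values = [dcf_value(**kw) for _, kw in scenarios]
expected = sum(p * v for (p, _), v in zip(scenarios, values))
print(f"range {min(values):.0f}-{max(values):.0f}, weighted {expected:.0f}")

# Vary each assumption +/-20% to surface the dominant sensitivity.
for name in base:
    lo = dcf_value(**{**base, name: base[name] * 0.8})
    hi = dcf_value(**{**base, name: base[name] * 1.2})
    print(f"{name}: swing {abs(hi - lo):.0f}")
```

The point of the sketch is procedural: the output is a range plus a ranked list of sensitivities, which is exactly what the discipline asks an analyst to report and revise.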
The connection to AI-era valuation is direct. AI tools generate spreadsheets that look exact — the numbers are precise to the penny because spreadsheets compute precisely. The precision is misleading because it conceals the assumptions underneath. Useful imprecision requires reading every AI-generated valuation as a hypothesis: under what assumptions did the model produce this number, and how sensitive is the conclusion to revision of those assumptions? The discipline survives the AI tool; the false precision the tool enables does not.
Damodaran has articulated the concept across decades of teaching, with the clearest formulations in Investment Valuation, The Dark Side of Valuation, and his Musings on Markets blog. The intellectual ancestry runs through the observation, often attributed to Keynes, that it is better to be roughly right than precisely wrong.
Every valuation is wrong. Accept this and redirect effort from the pursuit of certainty to the pursuit of being less wrong.
Explicit assumptions enable revision. Surface what you assumed so you can update when evidence changes.
Sensitivity analysis is the test. Vary each assumption to identify which ones the conclusion depends on.
The standard is operational. Useful imprecision produces specific procedures, not vague disclaimers.