The You On AI Encyclopedia
CONCEPT

Velocity Metrics Critique

The Goldratt simulation's diagnosis of story points, sprint velocity, and related Agile metrics as measurements of the non-constraint — locally rational, systemically misleading in the AI era.
Velocity Metrics Critique is the Goldratt simulation's application of constraint theory to the measurement frameworks dominating contemporary software development. Story points, sprint velocity, feature counts, deployment frequency — the standard metrics of Agile and DevOps culture — measure the rate at which engineering produces output. When coordination was the constraint, these metrics had some validity as indirect proxies for system throughput: engineering output was limited by coordination, so easing that limit genuinely raised throughput. In the AI era, the metrics have become measurements of the non-constraint, and their celebration produces the exact pattern Goldratt spent his career diagnosing: locally improving metrics, systemically degrading outcomes.

The critique applies with specific force to sprint velocity tracking. Sprint velocity measures how many story points a team completes in a sprint. Teams that increase their velocity are celebrated as improving; teams whose velocity declines are examined for dysfunction. The measurement assumes that completed story points correspond to system value — that more points completed means more value delivered. This assumption holds only if the features represented by the points have been evaluated, validated, and found worth building. In AI-augmented teams, where generation capacity vastly exceeds evaluation capacity, the assumption breaks. Velocity can increase while system throughput — the rate at which genuine value reaches users — stagnates or declines.

The specific mechanism is the one Goldratt diagnosed repeatedly in manufacturing: local optimization of a non-constraint produces inventory, not throughput. AI-augmented engineers can generate features faster than product managers can evaluate them, QA can test them, and users can absorb them. The features ship — velocity increases — but the downstream capacity to validate their value has not scaled. The system accumulates features of uncertain value, creating cognitive inventory and product incoherence that will eventually manifest as maintenance burden, user confusion, and competitive vulnerability.
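The mechanism can be sketched numerically. A minimal simulation, with assumed generation and evaluation rates (the rates, function name, and point values are illustrative, not from the entry): velocity reports everything generated, while real throughput is capped by the evaluation constraint, and the difference accumulates as inventory.

```python
# Illustrative sketch: a team whose generation rate exceeds its
# evaluation rate. Throughput is capped by the judgment constraint;
# everything generated but not yet validated piles up as inventory.

GENERATION_RATE = 12  # story points "completed" per sprint (reported velocity)
EVALUATION_RATE = 5   # points that can actually be validated per sprint

def simulate(sprints: int) -> tuple[int, int]:
    """Return (validated_throughput, unevaluated_inventory) after `sprints`."""
    inventory = 0   # features of uncertain value, awaiting evaluation
    throughput = 0  # features actually validated as valuable
    for _ in range(sprints):
        inventory += GENERATION_RATE              # velocity metric rises
        evaluated = min(inventory, EVALUATION_RATE)
        inventory -= evaluated
        throughput += evaluated                   # real system throughput
    return throughput, inventory

if __name__ == "__main__":
    throughput, inventory = simulate(10)
    print(f"velocity reported:    {GENERATION_RATE * 10} points")  # 120
    print(f"validated throughput: {throughput} points")            # 50
    print(f"cognitive inventory:  {inventory} points")             # 70
```

Over ten sprints the team reports 120 points of velocity, but only 50 points clear the evaluation constraint; the other 70 sit as the cognitive and product inventory the entry describes.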

Local Optima Trap

The critique extends to deployment frequency, pull request counts, code review throughput, and virtually every quantitative metric of engineering activity. Each measures the rate of a non-constraint and implicitly assumes the constraint is elsewhere. The measurement frameworks were designed for an era when they were approximately correct. In the AI era, they are systematically misleading. Organizations celebrating them are celebrating what Goldratt would immediately recognize as the wrong thing.

The alternative the simulation proposes is explicit measurement of the judgment constraint: decision quality, evaluation depth, system coherence, product-user fit. These metrics are harder to measure precisely, which is why organizations default to the easier proxies. But Goldratt's framework is clear: measure the wrong thing and you will optimize for the wrong thing. The difficulty of measuring judgment does not justify measuring velocity; it justifies the harder work of building measurement systems adequate to the actual constraint. An organization that measures judgment imperfectly is in better shape than an organization that measures velocity precisely — because the first is aiming at the right target, however badly, while the second is aiming at the wrong target, however accurately.
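The closing claim — that a noisy measure of the right target beats a precise measure of the wrong one — can be made concrete with a toy selection experiment. All of the following is assumed for illustration (the backlog, the noise level, and the deliberately anti-correlated points-vs-value data are not from the entry):

```python
import random

random.seed(42)  # reproducible illustration

# Toy backlog of (true_value, story_points) pairs. Points are deliberately
# anti-correlated with value to make the contrast visible.
backlog = [(value, 10 - value) for value in range(11)]

def noisy_judgment(feature):
    """Imperfect estimate of the right quantity: true value plus noise."""
    return feature[0] + random.gauss(0, 2)

def precise_velocity(feature):
    """Exact measurement of the wrong quantity: story points."""
    return feature[1]

TRIALS = 2000
# Average true value of the feature each selector prioritizes.
judged = sum(max(backlog, key=noisy_judgment)[0] for _ in range(TRIALS)) / TRIALS
veloc = max(backlog, key=precise_velocity)[0]  # deterministic: picks most points

print(f"avg value picked by noisy judgment: {judged:.1f}")
print(f"value picked by precise velocity:   {veloc:.1f}")
```

Under these assumptions the noisy judgment selector lands near the high-value features on average, while the precise velocity selector reliably picks the highest-point, lowest-value feature: aiming at the right target badly, versus the wrong target accurately.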

Origin

The critique synthesizes Goldratt's long-standing attack on local-optimization metrics with the specific constraint migration produced by AI. It draws on the Berkeley study's documentation of task seepage and the broader empirical record of AI-augmented work intensification.

Key Ideas

Velocity measures the non-constraint. Story points, sprint velocity, and feature counts measure engineering output, which is no longer the system's binding constraint.

Cost Accounting Critique

Local optimization of velocity produces inventory. Features generated faster than they can be evaluated accumulate as cognitive and product inventory — liabilities masquerading as assets.

The measurement framework was built for a prior era. Agile metrics were approximately right when coordination was the constraint; they are systematically wrong now that judgment is.

Judgment metrics are harder but necessary. Decision quality, evaluation depth, and product coherence are difficult to measure, but measuring them badly is superior to measuring velocity precisely.

Organizational culture resists the critique. Velocity metrics are embedded in performance reviews, compensation, and professional identity — making their replacement a political project as much as a technical one.

Further Reading

  1. Eliyahu M. Goldratt, The Haystack Syndrome (North River Press, 1990)
  2. Ryan Singer, Shape Up (Basecamp, 2019) — alternative to sprint-velocity culture
  3. Marty Cagan, Inspired: How to Create Tech Products Customers Love (Wiley, 2017)