The dependency audit is the diagnostic intervention that Bjork's framework logically requires: periodically removing the AI tool and measuring what the user can do without it. The procedure is simple—administer a performance assessment under conditions where AI assistance is unavailable and compare the result to the user's baseline capability (before AI adoption) or to a matched peer's performance (with and without AI). The comparison reveals the magnitude of dependency: the gap between AI-augmented performance and independent capability. A small gap indicates the tool is augmenting a growing foundation; a large or widening gap indicates the tool is substituting for capability that is not developing. The audit is not punitive but diagnostic—like measuring a patient's blood pressure off medication to determine whether the drug is treating the condition or merely masking it. It answers the question that no performance metric captures: what has this person actually learned during the period of AI-assisted work?
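The comparison described above can be sketched as a small computation. This is a minimal illustration, not an instrument from the source: the function names, the 0–100 score scale, and the 15-point threshold are all illustrative assumptions.

```python
# Sketch of the core audit comparison: the dependency gap is the difference
# between AI-augmented performance and independent (unassisted) performance.
# Scores are on an assumed 0-100 scale; the threshold is illustrative.

def dependency_gap(assisted_score: float, unassisted_score: float) -> float:
    """Gap between AI-augmented and independent performance on the same task."""
    return assisted_score - unassisted_score

def interpret_gap(gap: float, threshold: float = 15.0) -> str:
    """Small gap: tool augments a growing foundation; large gap: substitution."""
    return "substitution risk" if gap > threshold else "augmentation"

gap = dependency_gap(assisted_score=88.0, unassisted_score=60.0)
print(gap, interpret_gap(gap))  # 28.0 substitution risk
```

In practice the two scores would come from equivalent assessments administered with and without the tool; the threshold would be set against a reference population rather than fixed in advance.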
The audit addresses a failure in how AI adoption is currently evaluated. Organizations measure output increases, time-to-completion reductions, and user satisfaction—all performance metrics. None of these captures learning. A developer who produces twice as much code with Claude may be learning half as much, because the tool is performing the cognitive operations through which debugging expertise develops. The performance data are real and favorable; the learning trajectory is unknown because unmeasured. The dependency audit measures the unmeasured dimension, revealing whether six months of AI-assisted work produced a developer who is more capable or merely more productive. The distinction matters because productivity is a joint product of human and tool, while capability is the human component that persists when the coupling dissolves.
The audit's practical implementation involves three elements: baseline measurement of independent capability before AI adoption or at career entry; periodic reassessment at defined intervals (monthly, quarterly, annually) under equivalent unassisted conditions; and trajectory analysis comparing the rate of independent capability development to a reference standard (pre-AI cohorts, non-AI-using peers, theoretical learning curves). The trajectory, not the absolute level, is diagnostic. An advanced practitioner may have high independent capability after months of AI use; the question is whether that capability is higher than it would have been with less AI assistance, lower, or unchanged. The counterfactual is unknowable for individuals but estimable across populations through comparison to matched controls.
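The trajectory analysis in the third element can be sketched as a slope comparison: fit the user's periodic unassisted scores against audit index and compare the rate of growth to a reference standard. A minimal sketch follows; the quarterly data points, the cohort figures, and the `slope` helper are illustrative assumptions, not measurements from the source.

```python
# Sketch of trajectory analysis: compare the slope of a user's unassisted
# scores across periodic audits to the slope of a matched reference cohort.
# All data points below are illustrative assumptions.

def slope(scores: list[float]) -> float:
    """Least-squares slope of scores against audit index (0, 1, 2, ...)."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

user = [60.0, 62.0, 63.0, 63.5]       # quarterly unassisted scores
reference = [60.0, 65.0, 69.0, 73.0]  # matched non-AI-using cohort mean

lag = slope(reference) - slope(user)
print(f"trajectory lag: {lag:.2f} points per quarter")  # 3.15
```

The diagnostic signal is the lag, not the absolute scores: a user whose unassisted slope tracks the reference is developing normally even if current scores differ, while a flattening slope against a rising reference is the signature of substitution.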
The psychological resistance to dependency audits is substantial and predictable. Professionals resist assessments that reveal their performance may exceed their learning, because the revelation threatens capability-based identity. The fluency of AI-assisted work produces confidence that the audit may disconfirm. Organizations resist audits that impose administrative overhead and risk revealing that aggressive AI adoption has degraded the independent capability of their workforce—a finding that would demand costly retraining or an admission that the productivity gains were purchased at a hidden price. The resistance is understandable and must be met with evidence: the audit does not create the dependency; it reveals dependency that already exists. Knowing the magnitude of dependency is the first step toward managing it.
The audit serves three functions beyond its primary diagnostic role. Metacognitive calibration: users who see the gap between their predicted and actual independent performance develop more accurate self-monitoring, correcting the overconfidence that fluency illusions produce. Intervention targeting: audits revealing declining independent capability signal the need for increased unassisted practice, forced delays, or other difficulty-preserving protocols. Institutional accountability: aggregated audit data reveal whether an organization's AI deployment strategy is producing a workforce growing in expertise or merely in throughput. The three functions make the audit not just a measure but an intervention—one that provides the feedback through which both individuals and institutions can adjust their relationship to the most powerful cognitive tool in history.
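The first function, metacognitive calibration, lends itself to a simple sketch: before each unassisted assessment the user predicts their score, and the gap between prediction and result measures overconfidence. The function name and the example figures below are illustrative assumptions.

```python
# Sketch of metacognitive calibration: the calibration error is how far a
# user's predicted unassisted score overshoots the measured one.
# Example figures are illustrative assumptions.

def calibration_error(predicted: float, actual: float) -> float:
    """Positive values indicate overconfidence; negative, underconfidence."""
    return predicted - actual

# A user fluent with AI assistance predicts 85 but scores 62 unassisted:
print(calibration_error(predicted=85.0, actual=62.0))  # 23.0 -> overconfident
```

Tracked across successive audits, a shrinking error indicates the self-monitoring correction the text describes: the user's predictions converge on their actual independent capability.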
The dependency audit is not a term Bjork coined but an intervention his framework necessitates. The concept of measuring independent performance to distinguish learning from performance has been standard in educational research for decades—the transfer test, the delayed test, the far-transfer assessment are all versions of the same principle. The novelty in the AI context is the need to measure independent capability explicitly because the tool's seamless compensation makes dependency invisible in normal operational metrics. The audit makes visible what the performance dashboard conceals.
Measures learning, not performance. The audit reveals what the user can do without the tool—the only dimension that captures whether AI use is building independent capability or substituting for capability that is not developing.
Diagnostic, not punitive. The purpose is not to judge users but to assess whether the system (tool configuration, usage patterns, organizational support) is producing the learning outcomes that long-term success requires.
Reveals borrowing versus owning. The gap between AI-augmented performance and independent capability is the measure of how much competence is borrowed from the tool versus owned by the user—a distinction invisible in normal productivity metrics.
Calibrates metacognitive accuracy. Comparing predicted independent performance with actual independent performance corrects the overconfidence that fluency illusions produce, improving the user's ability to monitor their own learning.
Institutional resistance is predictable. Organizations resist measurements that may reveal AI adoption has degraded independent workforce capability, yet the measurement is the only way to manage the trade-off between short-term productivity and long-term expertise development.