Engelbart's framework is unfinished in a specific and consequential sense. What he completed — the distinction between augmentation and automation, the formalization of the human-tool system as the unit of analysis, the bootstrapping principle, the capability hierarchy, the H-LAM/T decomposition — constitutes the intellectual foundation. What he left unfinished falls into five categories: augmentation culture (organizational norms that value depth over speed), augmentation pedagogy (educational institutions that develop direction skills rather than execution skills), augmentation governance (regulatory frameworks addressing the demand side rather than only the supply side), augmentation measurement (metrics capturing qualitative outcomes), and augmentation philosophy (the articulation of why augmentation should be preferred over automation). The tools are ready. The structures that would ensure they serve augmentation have barely begun to be built.
Engelbart understood that augmentation is a cultural phenomenon as much as a technological one. The technology provides capability; the culture determines whether that capability is used for genuine augmentation or channeled by structural forces into automation. A culture that values depth over speed, judgment over output, and development over production supports augmentation. The dominant culture of the technology industry values the reverse, and the reverse undermines augmentation regardless of how the tools are designed.
The pedagogical gap is structural, not additive. It is not a matter of adding AI literacy to existing curricula. The entire orientation of professional education, now pointed at execution competence and mastery of specific tools, must shift toward direction competence: the capacity to evaluate outputs, to choose tools wisely, and to ask the questions that determine whether a capability is used well.
The governance gap is asymmetric. The regulatory frameworks currently being developed address the supply side: what AI companies may build, what disclosures they must make. The demand side (what citizens, workers, students, and organizations need to navigate the transition) remains almost entirely unaddressed. Regulatory attention is focused on preventing harms; the governance attention needed to promote augmentation is absent from the institutional agenda.
The measurement gap is values-based. An organization that measures throughput values throughput. An organization that measures capability development values capability development. The metrics are not neutral instruments; they are active forces that shape behavior. The choice of metrics is a choice of trajectory, and the trajectory choice is being made now, during initial AI deployment, when the metrics adopted will harden into institutional habits that become progressively resistant to change.
Engelbart acknowledged the incompleteness explicitly in his later years. His remark that the tangible parts of his system had "evolved so spectacularly" while the intangible parts had "still not evolved" — reported by Howard Rheingold — is the canonical statement of what the industry adopted and what it ignored.
The foundation is sound. The augmentation/automation distinction, the human-tool system as the unit of analysis, the bootstrapping principle, and the capability hierarchy are complete and robust.
Culture remains unbuilt. The organizational norms that would support augmentation have not developed at scale.
Pedagogy is misaligned. Educational institutions train for an economy that is being automated away.
Governance is asymmetric. Regulation addresses supply-side harms while ignoring demand-side capability development.
Measurement is the steering wheel. What organizations measure determines what they optimize, and current metrics systematically underweight augmentation outcomes.
The philosophical case must be made. Augmentation over automation is a values choice that must be argued, not assumed.