The affordance audit is the operational first step of ecological environmental analysis. Rather than evaluating a technology by its features or intended use, the audit maps the full field of offerings the environment structures — salient and hidden, intended and unintended, constructive and destructive. For an AI-augmented workspace, the audit identifies affordances for continued prompting, immediate iteration, breadth of output, the perception of competence, and the absence of natural stopping points. It also identifies the hidden affordances — available but not salient — for pausing, scrutinizing output, engaging directly with material, and disengaging entirely. The audit treats the technological environment the way a field ecologist treats a habitat: as a structured space whose properties determine which organisms can thrive there.
The audit operationalizes Gibson's ecological framework for practical intervention. Without it, discussions of AI tools drift toward abstraction — 'Is AI good or bad?' — that cannot be answered because the relevant facts are specific and ecological. With it, analysis becomes tractable: this specific affordance, in this specific environment, for this specific organism, produces these specific behavioral patterns.
An affordance audit of a typical AI workspace reveals asymmetries. Affordances for production are salient, coupled to primary interface elements, enacted with minimal effort. Affordances for reflection require overriding the salient ones — closing the interface, moving to a different environment, actively declining the continued-engagement offerings the tool presents. The asymmetry is not accidental; it reflects the commercial logic of tools optimized for engagement metrics.
Auditing extends beyond individual tools to aggregate cognitive environments. The contemporary knowledge worker inhabits a stacked affordance landscape — email, chat, AI assistants, notifications, meetings, dashboards — each structuring its own offerings and each interacting with the others. The cumulative affordance structure is often invisible to the workers who inhabit it and to the designers of its individual components. Ecological analysis requires stepping back to map the whole.
The affordance audit as a practical methodology emerges from the intersection of Gibsonian perception theory, usability research (particularly Norman's extensions), and the growing field of attentional ecology. Its systematic application to AI tools is recent, emerging in parallel with empirical work like the Berkeley Study on AI adoption in organizations.
Four components. A full audit examines the affordance inventory (the environment's offerings), the organism profile (the perceptual skills users bring to the environment), the interaction analysis (what actually happens in use), and the redesign intervention (structural changes to the affordance landscape).
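The four components can be sketched as a minimal data model. This is a hypothetical illustration, not an implementation from the audit literature; all names (`Affordance`, `AffordanceAudit`, the example entries) are invented for clarity.

```python
# Hypothetical sketch: the four audit components as a minimal data model.
# All names and example values are illustrative, not from the source.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Affordance:
    action: str    # what the environment offers, e.g. "continue prompting"
    salient: bool  # is the offering perceptually foregrounded?
    effort: int    # rough cost to enact (1 = one click; higher = more friction)

@dataclass
class AffordanceAudit:
    inventory: List[Affordance]   # component 1: the environment's offerings
    organism_profile: List[str]   # component 2: perceptual skills users bring
    interaction_notes: List[str]  # component 3: what actually happens in use
    redesign_candidates: List[Affordance] = field(default_factory=list)  # component 4

    def hidden(self) -> List[Affordance]:
        """Affordances that are available but not salient -- audit output
        that feeds the redesign-intervention component."""
        return [a for a in self.inventory if not a.salient]

audit = AffordanceAudit(
    inventory=[
        Affordance("continue prompting", salient=True, effort=1),
        Affordance("scrutinize output", salient=False, effort=3),
    ],
    organism_profile=["novice prompt user"],
    interaction_notes=["sessions lengthen; no natural stopping points observed"],
)
print([a.action for a in audit.hidden()])  # prints ['scrutinize output']
```

The point of the sketch is structural: hidden affordances are not a separate data source but a query over the inventory, which is why the audit can surface them as redesign candidates.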
Salience hierarchies matter. Two environments can offer the same actions in principle while producing very different behaviors because the salience ordering differs.
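The salience-ordering claim can be made concrete with a toy model. The prediction rule below (enacted action = most salient, lowest-effort offering) is an assumption introduced for illustration, not a rule stated in the source; the workspace entries are likewise hypothetical.

```python
# Illustrative toy model: the same action set under two salience orderings
# yields different predicted default behaviors. The selection rule (most
# salient, lowest-effort offering wins) is an assumption for this sketch.
def predicted_default(affordances):
    salient = [a for a in affordances if a["salient"]]
    pool = salient or affordances  # fall back to all offerings if none is salient
    return min(pool, key=lambda a: a["effort"])["action"]

# Same two actions in both workspaces; only the salience ordering differs.
workspace_a = [
    {"action": "continue prompting", "salient": True, "effort": 1},
    {"action": "pause and review", "salient": False, "effort": 3},
]
workspace_b = [
    {"action": "continue prompting", "salient": False, "effort": 3},
    {"action": "pause and review", "salient": True, "effort": 1},
]

print(predicted_default(workspace_a))  # prints continue prompting
print(predicted_default(workspace_b))  # prints pause and review
```

Under this toy rule, identical action sets produce opposite defaults, which is the behavioral asymmetry the audit is designed to detect.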
Hidden affordances are findable. An audit can surface affordances that are available but not saliently specified, often as candidates for redesign to increase their perceivability.
Aggregate landscapes matter. Individual tools are components of larger cognitive environments; auditing only the tool misses effects that emerge from stack composition.
Outcomes are measurable. Well-conducted audits produce testable predictions about behavioral patterns in specified populations, making the analysis empirical rather than merely critical.
A debate concerns auditor positioning. Internal audits (conducted by designers or platform owners) risk blind spots and motivated reasoning. External audits face information-access problems. Participatory audits involving users produce richer data but raise methodological questions about sample bias.