Ethical Know-How: Action, Wisdom, and Cognition was Varela's most sustained articulation of the ethical implications of his cognitive framework. The argument runs: ethical judgment is not rule-based. It does not follow from the application of moral principles to specific cases, the way a calculator applies mathematical rules to inputs. Ethical situations rarely cooperate with rules — they are messy, ambiguous, context-dependent, and unrepeatable. The ethical actor must respond to the specific situation with wisdom that cannot be algorithmic, because the situation is not a case that falls under a rule but a unique configuration demanding a unique response.
Varela's framework draws heavily on the Buddhist distinction between conventional ethics (the explicit precepts that structure novice practice) and wisdom-based ethics (the spontaneous appropriate response that emerges from sustained cultivation). The Buddhist tradition treats the former as scaffolding for the latter — rules are useful for beginners precisely because they bypass the need for judgment, but they become a limit on ethical maturity. The fully cultivated practitioner responds to the situation with a wisdom that the rule captures only partially.
This framework has immediate implications for AI ethics. If ethical judgment is a property of autonomous systems — systems that specify their own laws through their own organizational activity — then delegating judgment to allopoietic machines is not a transfer of ethical capacity but its elimination. The machine can enforce rules, apply policies, and flag violations of predefined criteria. It cannot exercise judgment, because judgment requires the kind of autonomy that only self-making systems possess: the capacity to respond to the specific, concrete, unrepeatable situation with a wisdom that emerges from being such a system in a world of significance.
This does not mean AI cannot contribute to ethical decision-making. It can provide information, surface patterns, identify inconsistencies, generate options. These contributions are valuable. But the moment of ethical judgment — the moment when a human being decides what to do in a situation resisting algorithmic resolution — is a moment of autopoietic autonomy. It is the living system specifying its own laws in the face of a perturbation that no external system can resolve on its behalf.
Varela's framework contrasts sharply with contemporary approaches to AI ethics that treat ethical principles as specifications to be programmed. Fairness, accountability, transparency, explainability — these can be rendered as rules, and a machine can be trained to follow them. But rule-following is precisely what Varela's framework distinguishes from ethical know-how. The machine's rule-following may be useful, but it does not constitute the embodied wisdom that ethical action requires.
The wisdom itself is developed through practice: through the accumulated history of engaging with situations that demand judgment, through attending to consequences, through the disciplined awareness Varela's neurophenomenological method was designed to cultivate. It cannot be taught as principles. It cannot be implemented as algorithms. It can only be grown, in a body, through time, through the autopoietic process of self-making that constitutes the life of a living mind.
The book Ethical Know-How originated as the Italian Lectures, delivered in 1991. The framework drew on Varela's long engagement with Buddhist philosophy, his laboratory work on embodied cognition, and his participation in the Mind and Life dialogues. The title plays on Gilbert Ryle's distinction between knowing-that (propositional knowledge) and knowing-how (skilled capacity), relocating ethics firmly in the domain of the second.
Ethics as skill, not rule-following. Moral judgment is a trained capacity for appropriate response, not the application of principles to cases.
Embodied wisdom, developed through practice. Ethical know-how grows through the accumulated history of engaging with situations that demand judgment, not through study of principles.
AI cannot exercise judgment. Machine rule-following is not ethical action. The moment of ethical judgment requires the autopoietic autonomy that allopoietic systems lack.
AI can support ethical action. Surfacing information, generating options, identifying inconsistencies — these are genuine contributions to ethical decision-making, but not substitutes for it.
The situation exceeds the rule. Ethical situations are unrepeatable configurations that require unique responses. Rules capture only what recurs, and the recurring is rarely where the ethical weight lies.
Contemporary AI ethics is divided between principlist approaches (drawing on medical ethics frameworks of fairness, autonomy, beneficence, non-maleficence) and virtue-theoretic or phronesis-based approaches (drawing on Aristotelian or Buddhist traditions). Varela's framework clearly sits in the second camp, though his cognitive-scientific grounding distinguishes it from purely philosophical accounts. The question of whether principlist approaches can be adequate for genuine ethical AI deployment remains contested.