The question appeared quietly in Salk's later work and has outlived almost everything else he wrote after the vaccine. It asks anyone making a consequential decision to imagine the judgment of those who will inherit the consequences — not a divine judge, not a moral philosopher, but their grandchildren. Salk arrived at the question through biology rather than philosophy, observing that every organism persisting over evolutionary time does so because it serves not only its own survival but the survival of the system within which it is embedded. A generation optimizing only for itself, without regard for the generations to follow, behaves like a cancer within the body of the species — not out of malice but out of a failure of imagination. The question has become one of the most widely quoted ethical instruments for evaluating technology's long-term impact.
The question operates at a different timescale than the one governing most human decision-making. Markets operate on quarters. Elections operate on cycles of two to four years. Corporate strategy operates on five-year plans. Salk's question asks for evaluation across generations — across fifty, a hundred, two hundred years. It asks the decision-maker to take seriously the interests of people who do not yet exist and cannot advocate for themselves.
The human brain is poorly equipped for intergenerational thinking. Its reward circuits respond to immediate outcomes; its threat-detection systems are calibrated for proximate dangers; its social instincts extend reliably to kin and tribe, unreliably to strangers, and hardly at all to humans not yet born. Asking the brain to optimize for future generations is asking it to perform an Epoch B function using Epoch A hardware.
Applied to AI, the question produces specific and uncomfortable challenges. The educational systems being redesigned around AI will shape the cognitive capacities of children making decisions in 2060 and 2080. The economic structures being rebuilt around AI will determine the distribution of resources, opportunity, and power for generations. The cultural norms being established around AI will become the water in which future generations swim, as invisible and inescapable as the water in any fishbowl.
The question converges with contemporary work on responsibility for the not-yet-born (Jonas), seventh-generation principles (Haudenosaunee), and civilizational intelligence frameworks. What distinguishes Salk's formulation is its brevity: five words that a child can understand and that no philosopher can fully answer.
The phrase appeared in Salk's writings and talks during the 1980s and was delivered most memorably in his October 1985 testimony before a U.S. Senate subcommittee on biomedical research. Expected to discuss funding and institutional mechanisms, Salk departed from the script to deliver the question that would outlive his more technical work: Are we being good ancestors?
The question has since been adopted by environmentalists, ethicists, indigenous leaders, sustainability theorists, and, increasingly, technologists grappling with AI's long-term implications. It has outlived the books in which it appeared because it does something no technical argument can do: it reorients the entire frame of evaluation from the present to the future.
The question reframes without prescribing. It does not specify what to do — it specifies from whose perspective to evaluate what is being done.
Biological rather than philosophical grounding. Salk arrived at the question through observation of how living systems persist, not through ethical theorizing.
The cancer parallel. A generation optimizing only for itself is, in the precise biological sense, behaving like a malignancy within the body of the species.
The constituency cannot speak. Future generations have no representatives, no votes, no lobbyists — their interests exist only to the extent the present generation chooses to imagine them.
The question applies to tools, not only to decisions. What kind of ancestors we are is determined in part by what kind of amplifiers we build and how we deploy them.
Some contemporary philosophers have criticized the good-ancestor framework as a recipe for paralysis: if every decision must be evaluated against consequences extending centuries forward, no decision can ever be made. Defenders respond that the question is a corrective rather than a replacement — it does not eliminate short-term evaluation but adds a necessary long-term dimension that the present generation's cognitive architecture would otherwise suppress. Others have raised the opposite concern: that the question is too easily co-opted by corporate communications as rhetoric divorced from structural reform.