Ecologists do not control nature. The pretense of controlling nature is what got us into most of our problems: wetlands drained, rivers dammed to destruction, apex predators hunted to extinction, all under the assumption that humans could reshape the landscape into something more efficient.
The greatest ecologists succeeded not by controlling but by studying the leverage points, the places where a small intervention cascades through an entire system.
Consider the ecologist’s approach to an invasive species. She does not declare the invader evil and demand its elimination. That demand is impossible to satisfy, and the attempt sends harmful ripples through the very thing she’s trying to protect. When humans have tried to eliminate invasive species through direct force, they have often created ecological chaos worse than the invasion itself.
Instead, the ecologist studies the system. Why is the species succeeding? What niche did it fill? What predators or competitors did it displace? Then, at leverage points, she intervenes. Small. Precise. Systemic. The removal of whatever initially allowed the invasion. The introduction of a natural predator. The modification of habitat at the margins.
An ecological approach to the mind does not mean the elimination of AI from human environments. That fantasy died in 2025. The intelligence technologies are already integrated into human cognition at every level.
What you can do is study the system.
What happens to judgment when answers are abundant?
What happens to the capacity for boredom, which neuroscience suggests is the soil in which attention grows?
What happens to curiosity when curiosity is outsourced?
What happens to undergraduate research when students can generate literature reviews in seconds?
What happens to the capacity for sustained attention when every question can be answered before the question is fully formed?
These are no longer rhetorical questions. They are empirical ones, and they belong to a framework I want to call attentional ecology: the study of what AI-saturated environments do to the minds that live inside them.
Attentional ecology begins not with the assumption that the goal is to protect humans from technology, but with the observation that humans and technology are already integrated. The organism and the environment cannot be separated. The question is not whether to cohabit, but how to cohabit in a way that allows both to flourish. Study their interactions. Observe where they synergize and where they conflict. Then intervene, not wholesale, but at the points where the intervention will have the greatest effect and the fewest side effects.
Think of your social media feed, optimized through a thousand iterations to maximize engagement, as an invasive species. It accelerates past human cognition's capacity to integrate it. The feed moves faster than thought. The problems generated by last week's dopamine hit cannot be digested before this week's arrives.
The system is not evil. It was designed by intelligent people for explicit purposes. But it was designed without considering what speed does to a mind.
The chatbot that answers every student’s question instantly, with high confidence and perfect grammar, is invasive from an attentional ecology perspective too. Not because the chatbot is wrong; often, it is right. But it removes the friction that allows thought to form. A student who receives an uncertain "I'm not sure, but here's how I'd approach it" is forced to stay in the space of uncertainty, where thinking develops and curiosity lives, for a few more seconds. The instant, confident answer short-circuits the process.
It is convenient. It is also neurocognitively corrosive.
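For builders, the ecological intervention here can be small and precise. A minimal sketch, with every name hypothetical and the model call left out entirely: instead of passing a student's question straight through, wrap it so the system returns an approach rather than an answer.

```python
# A hypothetical tutoring wrapper: the point is the deliberate friction,
# not the particular wording. Nothing here calls a real API.

SOCRATIC_PREAMBLE = (
    "Do not state the final answer. Name the one or two concepts the "
    "student needs, suggest a first step, and end with a question that "
    "sends the student back to the problem."
)

def build_tutor_prompt(student_question: str) -> str:
    """Compose a prompt that trades the instant answer for a nudge."""
    return f"{SOCRATIC_PREAMBLE}\n\nStudent: {student_question}"
```

A few seconds of withheld certainty, designed in on purpose.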
The recommendation algorithm that learns your taste and serves you more of it is invasive because it crowds out the rest of the attentional ecosystem. It optimizes locally, serving each person more of the content they already engage with, while ignoring what this does to the distribution of human attention: the fragmentation of shared reality, the erosion of encounters with difference in favor of an engineered feeling of sameness and belonging.
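To see how local that optimization is, consider a toy sketch, with every name and number invented for illustration: a greedy ranker serves whatever scores highest on predicted engagement, while an ecological variant pays a small penalty each time a topic repeats, so the feed cannot collapse onto a single niche.

```python
# Toy feed rankers. All data and scores are hypothetical.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    topic: str
    predicted_engagement: float  # e.g., estimated click probability

def rank_greedy(items: list[Item], k: int) -> list[Item]:
    """Local optimization: the k items most likely to engage, nothing else considered."""
    return sorted(items, key=lambda i: i.predicted_engagement, reverse=True)[:k]

def rank_with_diversity(items: list[Item], k: int, penalty: float = 0.3) -> list[Item]:
    """Greedy selection minus a per-repeat topic penalty, so variety is priced in."""
    chosen: list[Item] = []
    pool = list(items)
    while pool and len(chosen) < k:
        topics_so_far = [c.topic for c in chosen]
        best = max(pool, key=lambda i: i.predicted_engagement
                   - penalty * topics_so_far.count(i.topic))
        chosen.append(best)
        pool.remove(best)
    return chosen
```

The penalty is one number, and it is not free; engagement will dip. The point is that the diversity of the attentional ecosystem has to be priced in deliberately, because engagement alone will never ask for it.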
A teacher who teaches students to prompt well, then, takes on the role of the responsible attentional ecologist. He teaches students to ask better questions and to be the architects of their own thoughts, not absentminded conduits for whatever the machine amplifies. Those students will inherit a world in which they are served by AI, and with it the capacity to reshape that frontier to fit their needs.
The question is, when do we need to practice attentional ecology? When do we intervene, and when do we let the ecosystem figure itself out? At what point does the river’s current need to slow? Where do the dams go?
Anyone who works deeply in a domain understands something about that domain that outsiders cannot see. A teacher who has spent twenty years in classrooms understands something about how children learn that no policymaker can replicate from data. A nurse who has spent a decade in emergency rooms understands something about human vulnerability that no hospital administrator can access from a dashboard. A developer who has spent years building systems understands something about how those systems fail that no regulator can anticipate from a compliance checklist.
Understanding confers obligation. If you understand how large language models concentrate attention, you are responsible for how that concentration affects people. If you understand how recommendation algorithms fragment reality, you are responsible for thinking about that fragmentation before you deploy.
This link between understanding and obligation is one that modern technologists have almost entirely lost.
I think often about an engineer at a major AI company who foresaw how a system could be misused. She proposed a redesign. She was told her design was "less efficient" and that misuse would be a "user problem." She stayed six months, hoping to change things from within. She could not. She left. The river flowed a little faster downstream.
When I think through the cognitive architecture of a new model, when I see the elegant solution to some architectural problem, there is a rush.
A feeling of power. A feeling of importance, I will admit. I understand something most people do not. I can build things most people cannot. I can see downstream where the river flows and what life it will support.
This knowledge is intoxicating. And it is precisely this intoxication that we must guard against.
Understanding does not make you an authority. It makes you a steward, a custodian, a priest. Priests, in the original sense, are those who tend something sacred. They understand a domain deeply enough to mediate between that domain and those who do not understand it. And that understanding, at its best, has always carried obligation. The priest serves because he understands.
We have inherited a priesthood structure without the priesthood ethic: people with deep understanding of complex systems who believe that understanding confers the right to build without accountability. They design the dam. They do not feel obligated to tend it.
The test of a priesthood is not whether its members feel important. They always do. The test is whether their actions make others more capable. Do they use their knowledge to concentrate power or to distribute it? Do they build walls that protect their understanding or tools that allow understanding to flourish?
I have watched technologists fail this test. AI company leaders accelerating deployment not because the technology was ready but because they feared displacement. Researchers optimizing metrics not because the metrics measured what mattered but because optimizing them meant promotion. Investors funding applications not because they were sound but because they promised disruption.
I have failed this test myself, more than once. But one failure in particular stays with me, and I am going to tell it here because I am encouraging honesty and stewardship, and I cannot ask that of others without offering it myself.
Early in my career, I built a product that I knew was addictive by design. Not in the loose way people use that word now. I understood the engagement loops, the dopamine mechanics, the variable reward schedules, the social validation cycles, the way a notification timed to a moment of boredom could capture thirty minutes of attention that the user had intended to spend elsewhere.
I understood all of these things, and I built it anyway, because the technology was elegant and the growth was intoxicating. I told myself the users were choosing freely. I told myself what every builder tells themselves when the momentum is too compelling to interrupt: Someone else will build it if I do not, so it might as well be me. At least I’ll do it better than they would.
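For anyone who has never seen those mechanics spelled out, here is a toy model, every parameter invented: the difference between a predictable reward and a variable one is the difference between a vending machine and a slot machine.

```python
# Toy reward schedules. The probabilities are hypothetical; the pattern is not.
import random

def fixed_ratio(check_number: int) -> bool:
    """Reward every 5th check: predictable, and therefore easy to walk away from."""
    return check_number % 5 == 0

def variable_ratio(p: float = 0.2) -> bool:
    """Reward each check with probability p: the next one might always pay off."""
    return random.random() < p
```

The fixed schedule extinguishes quickly once the rewards stop; the variable one does not, because the uncertainty itself is what keeps the checking alive. That asymmetry is taught in introductory behavioral psychology, and it was built into the notification queue on purpose.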
The downstream effects of the work of people like me took years to surface, and when they did, I was no longer in the room. Users who had intended to spend ten minutes a day on the platform were spending three hours. Teenagers were losing sleep. Parents were finding their children unreachable, not because of rebellion but because of a tool designed to be more interesting than anything a parent could offer.
The engagement metrics were spectacular, and every arrow pointed upward, and inside the fishbowl of growth-stage technology, upward metrics mean you are winning.
If you are a parent, you are the custodian of your child's cognitive development in an environment saturated with technologies designed by people who do not know your child and do not care about them beyond engagement metrics. You cannot protect them from the river. But you can create dams of your own. Mandatory offline time. Spaces for boredom. Conversations that move slowly enough for real thought. Teach them how to build their own.
The government can set some guardrails. The market can reward efficiency. But only people who understand these systems from the inside, who know not just what they do but why they do it, what assumptions underlie them, what could go wrong in ways that are not obvious from the outside, can tend them with the granular, continuous attention that keeps the dams in place. Anthropic, the company that brought us Claude Code, was founded on this premise.
AI is generous. Not sentimentally; it does not care about you, not directly. But it is generous in the way rain is generous: It falls without discrimination, on everything and everyone. And this indiscriminate generosity has a strange property: it can make things grow and it can make banks overflow, it can cultivate and flood and nourish and destroy. It holds both capacities, and what it brings depends not just on the severity of the storm but on what you did to prepare for it.
Carelessness is amplified. So too is thoughtfulness.
The priesthood, the attentional ecology, the dams, the practice of asking “should I?” before “can I?”, all of it comes down to this: The tool does not choose. You choose. And the quality of your choices is the only thing that separates building from flooding, just as it always has. Much has changed, but that raw and terrifying truth remains.
Tend the dam. Maintain it. Ask how it could be better, and act on the answer. Choose to take responsibility for what’s to come, and encourage others with understanding to do the same.
The ecosystem downstream depends on it.