By Edo Segal
The thing nobody prepared me for was not the tool. It was the stranger in the mirror.
I have described the week in Trivandrum many times now — the twenty-fold productivity multiplier, the engineers reaching across domain boundaries they had never crossed, the backend specialist who built her first user interface in two days. The metrics are real. The transformation was measurable. But the metric I keep returning to is the one I could not measure: the look on the senior engineer's face when he realized that the thing he had spent twenty years becoming was no longer the thing the world most needed him to be.
That look was not about skills. He had the skills. He could learn Claude Code in a week. The look was about something deeper — the slow-dawning recognition that his professional identity, the story he told himself about who he was and why his work mattered, had cracked open. Not shattered. Cracked. And through the crack, he could see a possible self he had never considered, one that was arguably more valuable, but was not the self he had built.
I recognized that look because I had been wearing it myself. The months of building with Claude — the thirty days before CES, the flight over the Atlantic where I could not close the laptop — those were not just productivity stories. They were identity experiments. Each one tested a version of me that did not exist before the tools arrived. And the hardest part was never the technology. It was the internal negotiation between who I had been and who I might become.
Every framework in this series offers a lens. Herminia Ibarra offers the one I wish I had found first. Not because her ideas are more important than the others, but because her work addresses the layer of the transition that every other framework assumes has already been resolved: the question of who you understand yourself to be when the ground shifts.
Skills can be retrained in a sprint. Identity cannot. It follows a slower, messier, more human timeline — one governed by experimentation, reflection, and the willingness to sit in the discomfort of not yet knowing who you are becoming. Ibarra mapped that timeline with clinical precision, and her map is the most practically useful thing I can hand to anyone standing where that engineer stood: staring at a tool that offers a promotion they never applied for, wondering whether to accept it or run.
The pages that follow translate her framework into the language of this moment. They will not make the transition comfortable. They will make it legible. And legibility, when the ground is moving, is the difference between vertigo and navigation.
-- Edo Segal × Opus 4.6
1963–present
Herminia Ibarra (born 1963) is a Cuban-American organizational behavior scholar and professor at London Business School, where she holds the Charles Handy Chair in Organizational Behaviour. She previously held a chaired professorship at INSEAD. Ibarra received her PhD from Yale University and has built a body of research centered on professional identity, career transitions, and leadership development. Her most influential books include *Working Identity: Unconventional Strategies for Reinventing Your Career* (2003) and *Act Like a Leader, Think Like a Leader* (2015), both of which argue that identity change proceeds through action and experimentation rather than introspection and planning — a reversal of the conventional career-counseling wisdom. Her key concepts include "possible selves" as testable identity hypotheses, "outsight" as the knowledge gained from new experiences rather than self-reflection, the "competency trap" that locks experts into obsolescing identities, and "provisional identities" as the temporary selves people try on during transitions. Her work has appeared in the *Harvard Business Review*, *Administrative Science Quarterly*, and the *Academy of Management Review*, and she is consistently ranked among the most influential management thinkers in the world by Thinkers50.
In February 2026, a senior engineer sat in a room in Trivandrum, India, and watched a tool do in minutes what had taken him weeks. The tool was Claude Code. The work was backend infrastructure — dependency management, configuration, the connective tissue of software systems that he had spent twenty years learning to build by hand. He had apprenticed in this craft. He had failed at it thousands of times. He had developed, through those failures, an intuition so refined that colleagues described it as architectural instinct — the ability to feel when a system was wrong before he could articulate why.
The tool did not have his intuition. It did not need it. It produced working code, quickly, accurately, and without the years of formative struggle that had deposited his expertise layer by layer into his professional identity. By Wednesday of that training week, the engineer had stopped oscillating between excitement and terror and arrived at something harder to name. Not fear, exactly. Not grief. Something closer to the disorientation of waking up in a familiar room and finding the furniture rearranged.
The conventional reading of this moment is a skills story. The engineer needed to learn a new tool. He needed to update his competencies, retrain his workflow, adapt his methods to a new technological paradigm. This reading is not wrong, but it is shallow in a way that misses the actual difficulty of what was happening to him. Herminia Ibarra's three decades of research on professional reinvention suggest a fundamentally different diagnosis: the engineer was not experiencing a skills crisis. He was experiencing an identity crisis. And identity crises follow a logic that no tool can accelerate, because the resistance they encounter is not cognitive but existential.
Ibarra's central contribution to the study of career transitions is the distinction between what people do and who they understand themselves to be. A job description is a list of tasks. A professional identity is a story — the narrative a person constructs, over years and through thousands of small experiences, about why their work matters, how their expertise was earned, and what it means to be the kind of professional they have become. The job description can change overnight. The identity resists change with a stubbornness that no amount of retraining can overcome, because the identity is not a skill to be updated. It is a self to be mourned, dismantled, and reconstructed.
The senior engineer in Trivandrum had not merely learned backend infrastructure. He had become a backend infrastructure person. The expertise was woven into how he introduced himself at conferences, how he mentored junior colleagues, how he evaluated his own worth on difficult days. When he looked in the mirror — professionally speaking — the reflection was shaped by twenty years of solving problems that AI could now solve without him. The question that confronted him was not the practical one ("What do I do now?") but the existential one ("Who am I now?"). And that question, as Ibarra's research demonstrates with uncomfortable consistency, cannot be answered through reflection alone.
This is the first and most important distinction this chapter needs to establish: the difference between a skills transition and an identity transition. Skills transitions are amenable to training. You learn the new tool, you practice with it, you develop proficiency. The timeline is weeks or months, and the process, while demanding, follows a legible trajectory. You know where you are going. You can measure your progress. The destination is visible from the starting point.
Identity transitions are different in kind, not merely in degree. Ibarra's research, drawn from hundreds of in-depth case studies of professionals undergoing career changes, reveals that identity transitions do not follow a linear path from old self to new self. They follow a messy, recursive, emotionally taxing path through a landscape she calls liminal space — the territory between who you were and who you might become, where neither identity is fully operative and the ground underfoot is genuinely uncertain. The skills can be updated in a sprint. The identity requires something more like a slow migration, and the migration cannot be compressed by the same tool that compressed the implementation work.
Consider what the engineer had actually lost. Not his job — Edo Segal, who ran the training, was explicit that the team would be kept and grown. Not his technical knowledge — his architectural instincts remained as sharp as ever, and in fact they mattered more now that AI handled the mechanical work, because someone still needed to evaluate whether the AI's output was architecturally sound. What he had lost was the daily practice through which his identity was confirmed. Every hour of implementation work — debugging, configuring, managing dependencies — was simultaneously an hour of identity maintenance. Each problem solved said, silently but powerfully: you are the person who can do this. This is what you are for.
When the tool took over the implementation, the identity-maintenance mechanism broke. The expertise remained. The daily confirmation of the expertise vanished. And without that confirmation, the identity began to float, untethered from the practice that had anchored it for two decades.
Ibarra would recognize this pattern instantly, because it is the same pattern she has documented in investment bankers who become nonprofit leaders, in corporate lawyers who become entrepreneurs, in academics who leave the university for the private sector. In every case, the hardest part of the transition is not learning the new work. It is letting go of the old identity — the identity that was not merely a label but a lived experience, reinforced thousands of times through the daily practice of being a particular kind of professional.
The technology industry has responded to the AI disruption almost entirely as a skills problem. The solution, according to the dominant narrative, is retraining: learn prompt engineering, learn to collaborate with AI, learn the new tools and frameworks. Ibarra's work suggests this response is necessary but radically insufficient, because it addresses the surface layer of the transition while leaving the deeper layer — the identity layer — unacknowledged and therefore unmanaged.
A skills training program teaches people what to do differently. An identity transition requires people to understand themselves differently. The difference is not semantic. It is structural, and it shows up in observable behavior. The engineer who has been retrained in AI tools but has not undergone an identity transition will use the tools reluctantly, defensively, as an unwelcome addition to a workflow he still understands in the old terms. He will check the AI's work not with the productive skepticism of a judgment-layer professional but with the anxious suspicion of someone who believes the tool is an interloper in his domain. He will, in Ibarra's terminology, be carrying out the new tasks with the old identity — and the mismatch will produce friction, resentment, and a quality of work that falls well short of what the tools make possible.
The engineer who has undergone an identity transition — who has reconstructed the story of who he is from "the person who writes the infrastructure" to "the person who designs the system and evaluates whether the infrastructure serves it" — will use the same tools with a fundamentally different relationship to the work. The tools become extensions of his judgment rather than competitors for his role. The relationship shifts from adversarial to collaborative, and the shift is visible not in the metrics but in the posture: the way he leans toward the screen rather than away from it, the way his questions become generative rather than defensive.
Ibarra's research predicts, and the Trivandrum experience confirms, that the timeline for this shift is not the timeline of skills acquisition. Skills can be acquired in a week of intensive training. Identity transitions, in Ibarra's studies, take months to years — and they do not proceed in a straight line. They proceed through what she calls identity experiments: small, tentative forays into a possible new self, each one providing data about whether the new identity fits, each one followed by a period of reflection and adjustment.
The twenty-fold productivity multiplier measured in Trivandrum was a skills metric. It measured how much more the engineers could produce with AI tools. It did not measure how much their sense of professional self had shifted, or whether the shift was deep enough to sustain the new way of working once the exhilaration of the training week faded and the engineers returned to the daily reality of their jobs.
Ibarra's framework offers a prediction: the engineers who were most productive during the training week will not necessarily be the ones who thrive in the long term. The ones who thrive will be the ones who use the training as the beginning of an identity experiment rather than its conclusion — who continue to test the new self, who tolerate the discomfort of not yet knowing who they are becoming, and who build, over months, a new working identity that integrates their old expertise with their new capabilities.
The prediction carries a corollary that is harder to hear: some of the engineers will not make the transition. Not because they lack talent or intelligence. Because the identity they have built is too load-bearing to dismantle without crisis. They have too much invested in being the person who writes the code by hand. The prospect of becoming a person who directs a tool to write the code — even though the new role is, by any objective measure, higher-level and more strategically valuable — feels like a demotion rather than a promotion, because the old identity valued the craft of writing, and the new role does not require it.
This is not a failure of the individual. It is a structural feature of how identity works. Ibarra's research shows that the depth of investment in an existing identity is the single strongest predictor of how difficult the transition will be. The more years of practice, the more hard-won the expertise, the more deeply the identity is embedded in daily habits and professional relationships, the more the transition feels like a loss rather than a gain — regardless of the objective merits of the new position.
The Luddites of Nottinghamshire, whom *The Orange Pill* describes with sympathy and precision, were not technophobic. They were identity-trapped. Their working identities were so thoroughly constructed around the specific, skilled, embodied practice of hand-weaving that the arrival of the power loom was not merely an economic threat but an existential one. The machine did not just take their jobs. It made their selves illegible. The thing they were, the thing they had spent years becoming, no longer had a place in the world's vocabulary.
Two centuries later, the senior engineer in Trivandrum faces a structurally identical challenge, rendered in silicon instead of cotton. The specific skills are different. The identity dynamics are the same. And the resolution, as Ibarra's work makes clear, will not come from retraining alone. It will come from the slow, experimental, often painful process of constructing a new working identity from the materials of the old one — a process that cannot be outsourced, cannot be accelerated by the very tool that triggered it, and cannot be skipped without consequences that show up months or years later as disengagement, burnout, or quiet departure.
The industry's failure to recognize this distinction — between the skills problem it is addressing and the identity problem it is ignoring — is, in Ibarra's framework, entirely predictable. Organizations default to skills language because skills are legible, trainable, and measurable. Identity is none of these things. You cannot write a training curriculum for "reconstruct your sense of professional self." You cannot measure progress on a dashboard. The work happens in the interior — in the stories people tell themselves about who they are and why they matter — and it is invisible to every metric the organization tracks.
But invisible is not the same as absent. The identity transition is happening whether organizations acknowledge it or not. It is happening in the senior engineer's hesitation before delegating a task to AI. In the junior developer's quiet confusion about whether her AI-assisted accomplishments "count." In the team lead's struggle to evaluate performance when the unit of contribution has shifted from code written to judgment exercised. In the parent's inability to answer a child's question about whether their homework still matters.
Every one of these moments is an identity moment. Every one is a crack in the working self, a fissure through which a new identity might emerge — or through which the old identity might collapse without replacement.
The rest of this book is about how to navigate the transition. Not the skills transition — that is well-served by existing resources. The identity transition. The one that matters more, takes longer, and has almost no institutional support.
Ibarra's research provides a map. Not a plan — plans, as the next chapter will argue, are precisely the wrong tool for identity transitions. A map. A description of the terrain, drawn from decades of watching people cross it. The terrain is difficult. The crossing is disorienting. And the person who emerges on the other side is not the person who entered — which is, simultaneously, the hardest part and the entire point.
In 1986, the psychologists Hazel Markus and Paula Nurius introduced a concept that would quietly reshape the study of human motivation. They called it "possible selves" — the cognitive representations of who a person might become in the future. Not fantasies. Not daydreams. Working hypotheses about future identity, grounded enough to influence present behavior. The possible self who runs a company. The possible self who speaks fluent Mandarin. The possible self who writes novels on weekends. Each one exerts a gravitational pull on current decisions, shaping what risks feel worth taking and what efforts feel worth sustaining.
Ibarra adopted this concept and gave it operational teeth. In her framework, possible selves are not abstract motivational structures. They are testable propositions about identity. The backend engineer who imagines building user interfaces holds a possible self. That possible self influences whether she signs up for a frontend workshop, whether she volunteers for a cross-functional project, whether she spends a Saturday afternoon experimenting with CSS. The possible self creates the conditions for an identity experiment — a small, reversible foray into a future version of the professional self.
Before artificial intelligence, the distance between a possible self and a testable provisional identity was measured in years. The backend engineer could imagine building interfaces. Getting there required months of study, practice, the accumulation of a second body of technical knowledge layered on top of the first. The imagination-to-artifact ratio — the distance between an idea and its realization — was large enough that most possible selves remained precisely that: possible but untested. The gravitational pull existed, but the orbital distance was too great for contact.
AI collapsed the orbit. The backend engineer who, with Claude Code, builds a complete user-facing feature in two days has not merely learned a new technical skill. She has made physical contact with a possible self that was, until that moment, theoretical. She has worn the identity of "someone who builds interfaces." She has experienced what it feels like to think about users, to make aesthetic choices, to solve problems that require a different kind of attention than the systems work she knows. The possible self has become, however briefly, a provisional self — an identity inhabited rather than merely imagined.
This is new. Not the concept of possible selves — Markus and Nurius described it four decades ago. Not the concept of identity experiments — Ibarra documented them across hundreds of career transitions. What is new is the speed at which the experiment can be conducted. The distance between imagining a possible self and testing it has shrunk to the length of a conversation with an AI tool. The gravitational field has intensified to the point where possible selves are no longer distant orbiting objects but colliding particles, each one generating data about fit, interest, capability, and the shape of a future professional self.
The acceleration is genuinely generative. Ibarra's research has consistently shown that the primary obstacle to successful career transition is not the absence of good options but the inability to test them. People get stuck not because they lack imagination but because the cost of experimenting — in time, in money, in social risk — is too high. A corporate lawyer who wonders whether she might be happier as an entrepreneur faces a formidable experimental barrier: she cannot easily start a company on a Tuesday afternoon to see whether the identity fits. She would need to quit her job, sacrifice income, disrupt her family, endure the raised eyebrows of colleagues — all before she has any data about whether the possible self is viable.
AI has not eliminated all these barriers. It has, however, eliminated the most fundamental one: the barrier of capability. The lawyer can now prototype her business idea over a weekend. She can build a landing page, test a value proposition, create a financial model, generate the minimal viable version of the product she has been thinking about for years. Before AI assistance, none of this was possible unless she already possessed the technical skills — or the capital to hire someone who did. The experiment that would have cost months of preparation and thousands of dollars now costs a weekend and a subscription.
The implications for the distribution of identity experimentation are profound. Before AI, the space of testable possible selves was gated by resources. The professionals who could afford to experiment — who had savings, networks, institutional support, proximity to capital — were disproportionately privileged. The developer in Lagos, whom *The Orange Pill* describes, had ideas and intelligence and ambition. What she lacked was the infrastructure to test her possible selves. The distance between imagination and artifact was not a cognitive gap. It was an access gap. AI has narrowed it, not to zero — connectivity, hardware, language barriers remain real — but enough that the population of people who can conduct meaningful identity experiments has expanded dramatically.
This is the democratization argument as identity theory. Ibarra's framework reveals that what is being democratized is not merely capability — the ability to build things — but the ability to explore who you might become. The possible self who builds products, who designs systems, who analyzes markets — these were, until recently, possible selves available only to people with the training or resources to test them. Now they are available to anyone with access to the tool and the willingness to experiment. The expansion of who gets to try is also an expansion of who gets to discover, and the discovery, in Ibarra's framework, is the discovery of identity: who you actually are when the barriers between you and your possible selves come down.
But Ibarra's research carries a warning that is as important as the promise, and the warning intensifies in direct proportion to the speed of experimentation. The warning is about integration.
In every career transition Ibarra has studied, the identity experiment is only half the developmental process. The other half is integration — the slow, reflective work of incorporating the experiment's results into a coherent, evolving sense of self. The experiment generates data. Integration generates identity. The two processes operate on different timescales and require different resources. The experiment requires action, energy, the willingness to try. Integration requires something harder to produce and harder to measure: time to sit with what happened, to evaluate not just whether the experiment succeeded in practical terms but whether the identity it tested fits — whether it feels like a version of yourself you want to develop further or one you want to set aside.
Before AI, the pace of experimentation naturally allowed for integration. The months of study required to acquire a new skill provided, inadvertently, months of reflection time. The slow deposition of understanding that *The Orange Pill* describes as a geological process — each hour of struggle adding a thin layer to the foundation of expertise — was simultaneously a slow deposition of identity. Each layer of understanding was also a layer of self-knowledge. By the time the engineer had learned to build interfaces through months of practice, she had also developed an increasingly refined sense of whether "interface builder" was an identity she wanted to inhabit. The skill and the identity developed in parallel, at the same pace, through the same process.
AI has decoupled these two timelines. The skill can now be acquired — or at least approximated — in hours. The identity cannot. The backend engineer who builds a feature in two days has produced a working artifact, but she has not had time to process what the experience means for her sense of professional self. She has visited a possible self. Whether she has begun to develop that possible self into something durable depends on what happens in the days and weeks that follow — whether she returns to the experiment, whether she reflects on what she learned, whether she integrates the experience into the ongoing narrative of who she is becoming.
Ibarra's research suggests that without this integration, the experiment produces what might be called a false positive — the sense that a new identity has been established when it has merely been sampled. The engineer feels the excitement of having built something new. The excitement is genuine. But excitement is not identity. Identity is forged through repeated engagement, through the accumulation of experiences that collectively confirm: this is who I am. A single experiment, no matter how successful, does not produce this confirmation. It produces a data point. Confirmation requires a pattern.
The risk of the AI age, from an identity-development perspective, is that the abundance of data points substitutes for the pattern. The professional who can try on a new possible self every week — designer on Monday, analyst on Tuesday, product strategist on Wednesday — accumulates experiences at a rate that outpaces the reflective capacity to evaluate them. Each experience is real. Each one generates genuine information about a possible self. But the information is never integrated, because the next experiment begins before the last one has been processed.
This produces a condition Ibarra's framework would describe as identity diffusion — a state of having many possible selves in play and no provisional self in development. The person is rich in experience and poor in identity. The breadth of experimentation has expanded enormously, but the depth of any single experiment has not, because depth requires repetition, and repetition requires choosing to return to the same possible self when a dozen new ones are beckoning.
The comparison to travel is imperfect but illuminating. A person who visits twelve countries in twelve months has seen a great deal. A person who lives in one foreign country for a year has understood something. The visitor accumulates experiences. The resident develops identity — the specific, hard-won understanding that comes from staying long enough for the novelty to wear off and the reality to set in.
AI makes visiting extraordinarily easy. Living still requires the commitment that no tool can provide.
Ibarra's prescription for this condition is not to slow down the experimentation — the abundance of possible selves is genuinely valuable, and the democratization of access to them is genuinely important. The prescription is to build structures that support integration alongside experimentation. Structured reflection after experiments. Deliberate return to possible selves that showed promise, rather than constant movement to new ones. Conversations with trusted others about what the experiments revealed — not just about whether the output was good, but about whether the experience felt like a version of the self worth developing.
These structures do not arise naturally in an environment optimized for speed. The AI ecosystem rewards iteration, output, visible productivity. It does not naturally reward the quiet, interior work of asking: "Of all the things I could be, which ones do I actually want to become?" That question requires a different kind of attention than the attention the tool demands. It requires the willingness to stop experimenting long enough to know what the experiments meant.
The possible selves that AI has made accessible are an extraordinary gift. Never in the history of professional life have so many people been able to test so many visions of who they might become. The question that Ibarra's framework poses with clinical precision is whether the testing will be accompanied by the reflection that turns experiments into identity — or whether the speed of the testing will overwhelm the capacity for reflection, producing a generation of professionals who have tried everything and become nothing in particular.
The answer is not determined by the tool. It is determined by the person, and by the structures — organizational, social, personal — that either support or undermine the reflective capacity that genuine identity development requires. The next chapter examines why the most common advice for navigating this transition — plan first, then act — is precisely the wrong approach, and why Ibarra's reversal of the sequence is more urgent now than at any point since she first proposed it.
The career counselor's office, whether physical or virtual, has operated on a stable set of assumptions for half a century. The process goes like this: First, reflect. Take a battery of assessments. Identify your values, your strengths, your personality type. Map the results against the landscape of available careers. Locate the intersection of who you are and what the world needs. Define a target. Build a plan. Execute the plan.
Reflect, then plan, then act. The sequence sounds responsible. It sounds rational. It appeals to the part of us that believes important decisions should be made deliberately, with full information, after careful analysis. The career counselor sells certainty: Follow this process and you will arrive at the right answer. The right career. The right identity.
Ibarra's research, accumulated across decades and hundreds of case studies, has demonstrated that this sequence is not merely suboptimal. It is backwards. And the AI revolution has made its backwardness not just intellectually interesting but practically dangerous, because the people who follow the conventional advice — who try to figure out who they want to become before they begin experimenting — will find themselves paralyzed at precisely the moment when paralysis is most costly.
The failure of the plan-then-act model is rooted in a philosophical error about the nature of identity. The model assumes that somewhere inside you, waiting to be discovered, is a true self — a fixed set of preferences, values, and aptitudes that, if properly identified, will point toward the right career like a compass needle pointing north. The job of reflection is to strip away the noise and find the signal. The assumption is that the signal exists prior to the search.
Ibarra's research shows that it does not. The self you are trying to discover through introspection does not yet exist. It must be constructed — through action, through experimentation, through the lived experience of doing different work and being a different kind of professional. Identity is not a treasure buried in the yard of your psyche, waiting for the right shovel. It is a building that must be assembled from materials you have not yet gathered, using tools you have not yet acquired, according to a blueprint that will only become legible as the construction proceeds.
This is not an argument against reflection. Reflection matters. But Ibarra's consistent finding is that reflection is productive only when it has something to reflect on — and that something must be generated through action. The person who reflects before acting is reflecting on outdated data: memories of past experiences, assessments of past preferences, evaluations of past strengths. The data describes who she was. It does not describe who she could become. The gap between those two descriptions is the space where the actual career transition happens, and it cannot be bridged by analysis. It can only be bridged by experiment.
Ibarra published this argument in Working Identity in 2003, then extended it in Act Like a Leader, Think Like a Leader in 2015. In 2025, writing with Michael Jacobides in the Harvard Business Review, she applied the same logic to leadership in the AI age, arguing that "AI will not deliver value simply because firms spend money on tools and infrastructure. It will deliver value when leaders develop the new competencies needed to transform their firms and teams." The development she described was not a planning exercise. It was an experimental one — what she and Jacobides framed as five critical skills, each requiring leaders to act their way into new capabilities rather than analyzing their way into new strategies.
The parallel to individual career transition is exact. The leader who waits until she has a complete theory of AI's implications before changing how she leads will wait indefinitely, because the theory can only emerge from the experience of leading differently. The engineer who waits until he has a complete understanding of how AI changes his role before beginning to experiment with AI tools will remain stuck in the old identity, not because he lacks intelligence but because the intelligence is being directed at the wrong target — at planning instead of acting, at understanding instead of trying.
In the AI age, the failure of plan-then-act is amplified by the speed of change. Even if the model worked — even if it were possible to discover a true self through reflection and then build a career around it — the career landscape is shifting too fast for any static plan to remain valid. A professional who completed a thorough self-assessment in January 2025 and built a five-year career plan based on the results would have found that plan obsolete by March, when the capabilities of AI tools crossed a threshold that reorganized the value of every skill in the assessment.
This is not hyperbole. The technology described in The Orange Pill — Claude Code enabling a non-technical person to build working software through conversation, a twenty-fold productivity multiplier measured in a single week of training — invalidated assumptions about what skills are scarce, what expertise is valuable, and what career trajectories are viable at a pace that no planning model can accommodate. A plan built on the assumption that frontend development requires years of training is nullified by a tool that enables a backend engineer to build interfaces in days. A plan built on the assumption that technical depth is the highest-value career asset is complicated by a market that is beginning to reward integrative judgment more than any single technical skill.
The plan-then-act model requires a stable landscape. The landscape is not stable. It is not going to become stable. The tools are improving faster than the plans can be updated. And the people who are waiting for stability before they act — waiting for the dust to settle, waiting to see how things shake out, waiting for a clearer picture before they commit — are making a bet that Ibarra's research suggests will not pay off. The dust does not settle. It shifts. And the people who navigate shifting terrain most successfully are not the ones with the best maps but the ones who have developed the habit of testing the ground with each step.
Ibarra coined the term "outsight" to distinguish her approach from the conventional emphasis on insight. Insight is the knowledge that comes from looking inward — from reflection, introspection, self-analysis. Outsight is the knowledge that comes from looking outward — from trying new activities, interacting with new people, inhabiting new roles. Her research shows that outsight precedes insight in every successful career transition she has studied. The person first does something different. Then she understands something new about herself. The understanding follows the action. It does not precede it.
Applied to the AI transition, the outsight principle suggests a specific sequence. Do not begin by asking, "Who am I in a world of AI?" Begin by using AI tools. Build something. Attempt a project outside your domain. Experience what it feels like to direct a tool rather than execute manually. The experience will generate data that no amount of reflection could produce — data about what excites you, what bores you, what challenges feel invigorating versus what challenges feel merely tedious, what kind of professional you become when the constraints you have always worked within are suddenly removed.
This data is irreplaceable because it is experiential. Ibarra's research has identified a specific failure mode of excessive introspection: the person becomes trapped in what she calls "the old self's analysis of the new situation." The corporate lawyer who wonders whether she should become an entrepreneur cannot evaluate the question from inside her lawyer identity, because the lawyer identity has its own preferences, biases, and risk calculations. It will generate reasons to stay. Not dishonest reasons — genuine, well-reasoned arguments about the value of stability, the risk of change, the uncertainty of the new path. But the arguments are generated by the identity that is threatened by the change, and they are therefore systematically biased toward the status quo.
The only escape from this trap is action. Not dramatic, irrevocable action — not quitting the firm and founding a startup. Small action. Identity experiments. A weekend spent prototyping with Claude Code. A conversation with someone who has already made the transition. A side project that tests, in a low-stakes environment, whether the possible self has any traction in reality.
The discourse categories that The Orange Pill identifies — the triumphalists, the elegists, the Luddites, the silent middle — can be reread through Ibarra's framework as different relationships to the plan-then-act fallacy. The triumphalists have abandoned planning entirely and are acting with an intensity that borders on compulsion. The elegists are planning to preserve — analyzing the loss, developing elaborate arguments for why the old way was better, building intellectual structures to justify remaining where they are. The Luddites are planning to resist — constructing careful cases for why the new tools are inferior, dangerous, or morally compromising. Each group has mistaken its strategy for a position.
The silent middle — the largest and most important group — is doing something different, something that looks from the outside like indecision but that Ibarra's framework would recognize as precisely the right starting posture for a genuine identity transition. The silent middle is acting without a plan. Its members are using AI tools on Tuesday and feeling ambivalent about them on Wednesday. They are building something new in the morning and mourning something old in the evening. They are experimenting without a thesis, trying without committing, inhabiting provisional identities without declaring that the provisional identity is the final one.
This posture — active, experimental, uncommitted — is what Ibarra's research identifies as the hallmark of successful transition. It looks messy because it is messy. It looks indecisive because the decision is not yet ripe. The silent middle is not confused. It is doing the work that the discourse — with its demand for clean narratives and firm positions — cannot accommodate.
Ibarra herself has increasingly applied this framework to the AI moment. Speaking at Davos and writing in the Harvard Business Review, she has argued that the leaders who will succeed in the AI age are not the ones with the best AI strategy but the ones who model personal experimentation — who use AI tools visibly, who share what they learn, who demonstrate through their own behavior that not knowing is a legitimate and temporary condition rather than a permanent disqualification. The leader who experiments in public gives permission to the organization to experiment. The leader who demands a strategy before allowing experimentation guarantees that the organization will be stuck in plan-then-act at precisely the moment when action is the only source of the information the strategy requires.
The conventional wisdom says: decide who you want to be, then become it. Ibarra says: become, provisionally, several possible versions of yourself, and let the becoming produce the knowing. In a world where the ground shifts faster than any plan can track, the second approach is not just more effective. It is the only one that works.
The question that follows — and it is the question that the next chapter addresses — is what happens when the becoming happens at the speed of a conversation with an AI tool, when the interval between one identity experiment and the next shrinks from months to hours, and when the pace of trying outpaces the capacity to learn from what was tried.
An identity experiment, in Ibarra's framework, has a specific structure. It is not a thought experiment. It is not a fantasy about a possible future. It is a concrete, embodied foray into a provisional self — an action taken in the real world that generates real information about whether a possible identity fits. The person does not merely imagine being a different kind of professional. She briefly becomes one, in a limited and reversible way, and the becoming generates data that no amount of imagining could produce.
The traditional identity experiment is small, deliberate, and slow. A management consultant who wonders whether she might be a better fit for the nonprofit sector does not quit McKinsey and apply to the Red Cross. She volunteers on weekends. She attends a conference. She takes a pro bono engagement that exposes her to the work and the people and the rhythms of a different professional world. Each foray is a data point. Does this feel right? Does this activate something that the consulting work does not? Am I energized by this, or am I merely attracted to the idea of being energized by it?
The slowness is not incidental. It is functionally necessary. Each experiment generates a complex signal — not a simple yes-or-no but a nuanced, multidimensional response that includes cognitive elements (Is this interesting? Am I good at it?), emotional elements (Does this feel like me? Am I anxious or excited?), and social elements (Do these people feel like my people? Can I see myself in this community?). Processing this signal takes time. Ibarra's research shows that the most valuable reflection often happens between experiments, in the quiet intervals when the person is not trying anything new but is digesting what the last experiment revealed. The interval between experiments is where the identity data gets integrated into the working self.
AI has collapsed the interval.
Consider the designer at Napster who, within two weeks of working with Claude Code, was building complete features end to end — not designing them and handing them to an engineer, but implementing them, deploying them, watching them work. Each feature was an identity experiment. Not consciously, not deliberately, but structurally: each one tested a possible self ("I am someone who builds, not just designs") and generated data about whether that self was viable.
In the traditional timeline, a designer who wanted to test the possible self of "builder" would need months — to learn a programming language, to practice with frameworks, to accumulate enough skill to produce something functional. Each month of learning was also a month of identity processing. The identity developed in tandem with the skill, at the same pace, through the same friction. By the time the designer could actually build a complete feature, she had also developed a refined sense of whether "builder" was an identity she wanted to inhabit — because the months of effort had given her hundreds of small data points about her relationship to the work.
The designer at Napster ran the experiment in days. The output was real — functional code, working features, deployed products. The data about capability was unambiguous: she could do this. But the data about identity was far less clear, because the experiment had been compressed into a timeframe that did not allow for the slow processing of the signal. She knew she could build. She did not yet know whether she wanted to be a builder — whether the identity of builder would sustain her through the inevitable difficulties that follow the initial exhilaration, whether it would feel authentic six months later or whether it would feel like a role she had been cast in by the tool's capabilities rather than chosen from a genuine sense of self.
This distinction — between knowing you can and knowing you want to — is the crux of what Ibarra's framework reveals about identity experiments at the speed of inference. The can question is answered by the tool. The want question is answered only by the person, and it requires a kind of processing that operates on a fundamentally different timeline than the tool's output.
The speed of AI-enabled experimentation creates what might be called a data-integration mismatch. The experiments generate identity data faster than the human identity system can absorb it. Each experiment produces a signal. The signals accumulate. But the integration mechanism — the slow, reflective, often unconscious process by which a person incorporates new experiences into a coherent sense of self — cannot accelerate to match the pace of the input. The result is a buffer overflow: more identity data than the system can process, more possible selves in play than the reflective capacity can evaluate, more experiences than can be woven into a single coherent narrative.
Ibarra's earlier research did not need to address this problem, because the pace of traditional identity experiments naturally regulated the flow. The friction of acquiring a new skill — the months of study, the setbacks, the slow accumulation of competence — served as a natural throttle on experimentation. You could not try on possible selves faster than you could develop the capabilities to test them. The throttle ensured that each experiment was preceded and followed by sufficient time for reflection, evaluation, and the gradual adjustment of the working self.
AI has removed the throttle. The capabilities arrive instantly. The experiments can be run back to back without pause. And the question becomes: what happens to identity development when the pacing mechanism that regulated it for all of prior professional history is suddenly gone?
The answer, extrapolated from Ibarra's framework and observable in the behavior patterns documented in The Orange Pill, has two faces.
The first face is exhilarating. The removal of the capability throttle means that possible selves previously gated by years of training are now accessible to anyone with access to the tool. A junior developer ships in a weekend what her senior colleague quoted six months for. A non-technical founder prototypes a product in days. An engineer in Trivandrum discovers she can build user interfaces. Each of these is an identity experiment that would have been impossible — or prohibitively expensive in time and effort — without AI. The space of testable possible selves has expanded from a narrow corridor to an open field, and the people who benefit most are those who were previously excluded from the corridor entirely: the developers in Lagos and Dhaka, the career changers without capital, the professionals whose ideas were trapped behind barriers of implementation cost.
The second face is diagnostic, and it is where Ibarra's framework does its most important work. When experiments are easy to run, the incentive shifts from depth to breadth. The professional who can test a new possible self every week has a natural tendency to keep testing — to sample the next identity before fully processing the last one. The tendency is reinforced by the tool's responsiveness (the next experiment is always a conversation away), by the cultural emphasis on output and iteration (the discourse rewards those who ship, not those who reflect), and by the dopamine architecture of small wins (each successful experiment delivers a burst of confirmation that feels like progress).
Ibarra's research on rapid transitions provides a caution here. In cases where professionals changed careers quickly — driven by opportunity, necessity, or the kind of intense, compressed timeframe that mirrors the AI-era training context — the transitions that succeeded were not the fastest ones. They were the ones where the person paused long enough, at key junctures, to evaluate whether the direction was right. The pause did not need to be long. It needed to be genuine — a moment of honest assessment rather than a brief interruption before the next sprint. The transitions that failed were the ones where speed substituted for evaluation, where the person moved so quickly from one possible self to the next that none of them had time to develop roots.
The concept of "small wins," which Ibarra draws from Karl Weick's organizational theory, illuminates the dynamic. Small wins are visible, concrete accomplishments that confirm the viability of a direction. Each feature the Napster designer built was a small win. Each product the solo developer shipped was a small win. Small wins are the fuel of identity transition — they provide the evidence that a new self is real, that the possible self has survived contact with reality.
AI is the most powerful small-win generator in the history of professional development. The collapse of the imagination-to-artifact gap means that the distance between conceiving a new self and producing evidence of that self has nearly vanished. The evidence arrives immediately, concretely, indisputably. You built a working product. You designed a system. You analyzed a dataset. The wins are real.
But Ibarra's research reveals that small wins have a paradoxical property: too many of them, too quickly, can produce a form of confidence that is structurally hollow. The person has evidence that they can operate in the new identity. They do not have the deeper confidence that comes from having tested the identity against difficulty — against the moments when the work is tedious rather than thrilling, when the output fails rather than succeeds, when the provisional self is challenged by reality rather than confirmed by it.
Genuine identity development requires not just wins but losses — not catastrophic losses, but the small, instructive failures that teach a person where the limits of a possible self actually lie. The geological metaphor from The Orange Pill applies: each hour of debugging, each failed approach, each moment of genuine struggle deposits a thin layer of understanding. The layers are laid down slowly, and their value is invisible in any single session. But over months and years, they accumulate into something a person can stand on — a foundation of self-knowledge that no small win, however concrete, can substitute for.
Ibarra's research on professionals who transitioned into roles that appeared to be a natural fit — where the early signals were uniformly positive, where every experiment succeeded — reveals a disturbing pattern. These professionals were often the most likely to experience identity crises later, when the inevitable difficulties of the new role arrived and they discovered that their identity was built on a foundation of easy wins rather than tested resilience. The wins had confirmed the identity but had not stress-tested it. The first genuine setback in the new role produced a crisis disproportionate to its severity, because the identity had no practice absorbing difficulty.
The implications for AI-era identity development are direct. The tool's capacity to produce instant, reliable small wins is a gift. It accelerates experimentation, democratizes access, and generates the evidence that possible selves are viable. But the same capacity can produce an identity built on a foundation of frictionless success — an identity that has never been tested by the specific, uncomfortable, developmental experience of being stuck.
What Ibarra's framework prescribes is not a reduction in the speed of experimentation but a deliberate investment in what she calls the reflective infrastructure of transition. This infrastructure includes intentional pauses between experiments — not long pauses, but genuine ones, where the question is not "What should I try next?" but "What did the last experiment teach me about who I am becoming?" It includes trusted interlocutors — people who will ask the hard questions that Claude will not, who will challenge the provisional identity rather than confirming it, who will say "Are you sure?" when the tool says "Here you go." It includes return visits to possible selves that showed early promise — the discipline of going back to the same experiment rather than moving on to the next one, of choosing depth over breadth when every incentive in the environment rewards breadth.
The speed of AI-enabled identity experimentation is not the problem. The absence of structures to match that speed with integration is the problem. And the structures, unlike the experiments, cannot be automated. They require human judgment, human relationships, and human time — the resources that become most scarce precisely when the tool makes everything else abundant.
Ibarra's framework does not resolve this tension. It names it with precision. The experiments are faster. The integration is not. The gap between them is where identity development either happens or fails to happen, and no tool, no matter how capable, can close it on your behalf.
The anthropologist Victor Turner borrowed a word from Arnold van Gennep to describe the middle phase of a rite of passage — the phase between separation from the old identity and incorporation into the new one. The word was "liminal," from the Latin limen, meaning threshold. The person in liminal space is standing on the threshold. She has left one room but has not entered the next. She is betwixt and between, belonging fully to neither the world she departed nor the world she has not yet reached.
Turner studied liminality in tribal initiation rites — ceremonies where adolescents were separated from their childhood identities, subjected to trials and ordeals in a period of structured ambiguity, and then reincorporated into the community as adults. The liminal period was not an accident of ritual design. It was the mechanism of transformation. The adolescent could not become an adult without passing through a phase where he was neither — a phase characterized by confusion, vulnerability, the dissolution of the old self's certainties, and the gradual, often painful emergence of something new.
Ibarra recognized the same structure in professional career transitions, stripped of its ceremonial trappings but operating by the same psychological logic. The corporate lawyer who is leaving law but has not yet become an entrepreneur is in liminal space. The academic who has resigned her professorship but has not yet found her footing in the private sector is in liminal space. The engineer whose implementation work has been absorbed by AI but whose new role has not yet crystallized is in liminal space. In every case, the person experiences the same constellation of symptoms: anxiety, loss of confidence, a destabilizing uncertainty about who she is, a painful awareness that the old answers no longer apply and the new ones have not yet formed.
The builders described in The Orange Pill who experience what Edo Segal calls vertigo — the ground moving under their feet while the view gets better — are experiencing liminality. The metaphor is almost too precise: vertigo is the sensation of being between stable positions, of having lost the ground without having found the next footing. The view may be better. The sensation is nauseating. And the instinct, which every person in liminal space feels, is to end the discomfort as quickly as possible — either by rushing forward into premature commitment or by retreating backward into the familiar.
Ibarra's research shows that both responses are pathological. The person who rushes forward — who seizes the first available new identity and commits to it before the liminal process has run its course — typically discovers, months or years later, that the new identity does not fit. She chose it not because it was right but because it was available, and because the discomfort of not choosing was intolerable. The investment banker who becomes a yoga teacher in the first flush of career dissatisfaction, without testing whether the identity of yoga teacher can sustain the parts of her that the banking career satisfied, is a familiar archetype in Ibarra's case studies. The transition looks decisive. The underlying work has not been done.
The person who retreats — who returns to the old identity because the liminal discomfort is too great — forecloses the possibility of genuine development. The return feels like relief. It is not relief. It is the substitution of a known constraint for an unknown possibility. The old identity is comfortable precisely because it is familiar, not because it is right. And the person who retreats will find, often within months, that the dissatisfaction that prompted the transition has not disappeared. It has been suppressed. It will return.
The fight-or-flight dichotomy that The Orange Pill observes in the response to AI maps onto Ibarra's framework with startling precision. The professionals running for the hills — senior engineers moving to lower their cost of living, developers questioning whether their profession has a future — are in flight from liminality. The professionals leaning in compulsively — working through the night, unable to stop building, filling every available hour with AI-assisted production — are fighting their way through liminality by converting it into action. Both responses are attempts to resolve the discomfort of the threshold. Neither succeeds, because the threshold is not a problem to be solved. It is a condition to be inhabited, for long enough that the new identity can form.
This is the hardest thing about liminality, and the thing that Ibarra's research documents most unflinchingly: the discomfort is not a side effect of the transition. It is the transition. The confusion, the loss of confidence, the inability to answer the question "What do you do?" with the clarity that used to come automatically — these are not obstacles to identity development. They are the medium through which identity development occurs. The adolescent in Turner's tribal rite does not become an adult despite the ordeal. He becomes an adult through it. The professional in career transition does not develop a new identity despite the confusion. She develops it through the confusion, because the confusion is what happens when the old self's certainties dissolve and the new self's certainties have not yet precipitated.
The AI age has produced a particular variant of liminality that Ibarra's earlier work did not anticipate, because the technology that triggers the transition also reshapes the experience of being in it. The engineer in liminal space — no longer the person who writes the code, not yet the person who directs the AI that writes it — has access to a tool that offers the seductive possibility of skipping the discomfort entirely. Claude Code does not care whether the person using it has resolved an identity crisis. It produces output regardless. The engineer can continue to produce, to ship, to demonstrate competence, even while the deeper question of who she is remains unresolved. The output masks the confusion. The productivity metrics look normal, even excellent. But the identity work is not happening, because the work requires the specific discomfort that the tool's productivity makes it possible to avoid.
Ibarra's research on professionals who maintained high performance during career transitions while failing to develop new identities reveals a pattern that is directly relevant. These were people who continued to produce at a high level in the new role but who never developed the sense of ownership, the felt authenticity, the deep alignment between self and work that characterizes a successful transition. They were performing the new identity without inhabiting it. The performance was convincing to others. It was not convincing to themselves. And the gap between external performance and internal experience produced, over time, a specific kind of exhaustion — not the exhaustion of overwork but the exhaustion of inauthenticity, the depletion that comes from performing a self you have not become.
The Berkeley study that The Orange Pill describes — documenting intensification, task seepage, the colonization of pauses by AI-assisted work — may be measuring this phenomenon at scale. The workers who reported higher output alongside lower satisfaction may not have been experiencing a simple work-life balance problem. They may have been experiencing the specific fatigue of liminality masked by productivity — the condition of being between identities while the tool makes it possible to avoid confronting the between-ness.
Ibarra identifies several conditions that support productive liminality — conditions that allow the person to tolerate the discomfort long enough for genuine development to occur rather than short-circuiting the process through premature commitment or retreat.
The first is what she calls transitional relationships — connections with people who are themselves in transition, or who have recently completed one. These relationships are valuable not because they provide advice (most advice during liminality is useless, because it is generated from the perspective of a settled identity) but because they normalize the experience. The person in liminal space who discovers that others are experiencing the same confusion, the same loss of certainty, the same nauseating vertigo, gains something that no amount of AI-assisted productivity can provide: the recognition that the discomfort is not a personal failure but a structural feature of the process.
The second condition is what Ibarra calls identity workspaces — environments in which it is safe to be between identities, to not know the answer to "What do you do?", to experiment with provisional selves without being held to the standards of a committed identity. The Trivandrum training, viewed through Ibarra's framework, was an identity workspace. Not because it was designed as one — it was designed as a skills training — but because the conditions of the week created a context in which twenty engineers were simultaneously given permission to be beginners, to not know what they were doing, to try things that might fail, in a shared environment where failure was expected and competence was not yet required. The psychological safety of the group — everyone was equally disoriented, equally new to the tool, equally uncertain about what the experience meant for their professional identity — created the conditions under which liminal exploration could happen without the social penalty that normally accompanies visible incompetence.
The third condition is temporal tolerance — the willingness to remain in liminal space without demanding that it resolve on a specific timeline. Ibarra's research shows that the duration of productive liminality varies enormously across individuals, from months to years, and that attempts to compress the timeline — whether by the individual or by the organization — typically produce premature closure rather than genuine development. The organization that demands its employees "complete" their AI transition by Q3 is imposing a timeline on a process that does not respect timelines. The result will be apparent compliance and deep resistance — people who use the tools while remaining psychologically committed to the old identity, who perform the new role without inhabiting it.
The most successful transitions in Ibarra's studies were characterized by what she describes as a gradual shift in the center of gravity of identity. The old self does not disappear overnight. The new self does not arrive fully formed. Instead, the person inhabits both simultaneously, allocating progressively more weight to the new identity as it develops strength, allowing the old identity to recede gradually rather than abandoning it all at once. The process is more like a slow dissolve in film than a hard cut — both images are visible at the same time, and the transition happens so gradually that the viewer cannot identify the precise moment when one replaced the other.
This dissolve requires tolerance for ambiguity — the capacity to hold two identities in play without insisting that one of them prevail. The tolerance is difficult under normal circumstances. It is especially difficult in the AI age, for a reason that connects liminality to the competency trap that the next chapter examines: the professionals with the deepest investment in the old identity are the ones with the least tolerance for the ambiguity that transition requires, because their identity was built through decades of progressive certainty, and certainty is the opposite of what liminal space demands.
The silent middle — the people who feel both the exhilaration and the loss, who use AI on Tuesday and mourn on Wednesday, who cannot articulate a clean narrative about where they are headed — are not confused. They are in liminal space. They are doing the difficult, necessary, uncomfortable work of standing on the threshold, and the absence of a clean narrative is not a failure of communication. It is a feature of the process. The clean narrative comes later, after the crossing. During the crossing, the only honest narrative is the one that admits: this is hard, this is disorienting, and the resolution has not arrived yet.
Ibarra's framework suggests that the most important thing anyone can do for people in this position is not to offer answers but to create conditions in which the questions can be tolerated. Not to accelerate the transition but to protect the space in which the transition is occurring. Not to demand clarity but to demonstrate that the absence of clarity is survivable.
The threshold is not a comfortable place to stand. But standing on it, for as long as the standing requires, is the only way to reach the other side.
Karl Weick argued in 1984 that the most effective strategy for addressing overwhelming social problems was not to attack them at scale but to recast them as a series of small, concrete, achievable wins. The grand strategy paralyzes. The small win mobilizes. Each win is modest enough to be achievable, concrete enough to be visible, and significant enough to build momentum for the next one. The accumulation of small wins produces, over time, a transformation that no single grand initiative could have achieved — because the wins build on each other, each one shifting the conditions slightly in favor of the next.
Ibarra adopted Weick's concept and applied it to identity transition. In her framework, small wins are not merely task accomplishments. They are identity evidence — visible, concrete demonstrations that a possible self is viable. Each small win says, implicitly but powerfully: you can be this person. The management consultant who volunteers at a nonprofit over a weekend and successfully runs a strategic planning session has not merely completed a task. She has produced evidence that the possible self of "nonprofit leader" can survive contact with reality. The evidence is more persuasive than any self-assessment, more compelling than any career counselor's recommendation, because it is experiential. She did not think about being a nonprofit leader. She was one, briefly, and the being generated knowledge that no amount of thinking could produce.
AI is the most powerful small-win generator in the history of professional development. This claim is not hyperbolic. It follows directly from the collapse of the imagination-to-artifact ratio that The Orange Pill describes. When the distance between conceiving a new professional self and producing evidence of that self shrinks to the length of a conversation, the small wins multiply at a rate that no previous professional context has made possible.
A junior developer ships in a weekend what her senior colleague quoted six months for. A non-technical founder prototypes a revenue-generating product without writing a line of code by hand. A backend engineer builds a complete user-facing feature in two days. Each of these is a small win, and each carries the same implicit identity message: you can be this person. The evidence is tangible — working code, functional interfaces, deployed products. The evidence is immediate — not separated from the experiment by months of study but arriving within hours of the attempt. And the evidence is public — visible to colleagues, to managers, to the professional community in which the person's identity is negotiated.
The acceleration of small wins has a genuinely democratizing dimension that Ibarra's earlier work could not have anticipated. Before AI, the distribution of small wins was gated by access — to training, to capital, to institutional support, to the networks in which professional experiments could be conducted. The developer in Lagos, the career changer without savings, the professional in a peripheral geography far from the centers of innovation — all faced barriers to experimentation that had nothing to do with their talent or motivation and everything to do with the infrastructure required to translate ambition into evidence. AI has not eliminated these barriers, but it has lowered them enough that the population of people who can produce small wins has expanded dramatically. The possible self that was once testable only by the privileged is now testable by anyone with access to the tool.
This expansion matters for identity development in a way that goes beyond the immediate productivity story. Ibarra's research shows that small wins have a compounding effect on identity. Each win does not merely confirm a possible self in isolation. It shifts the person's baseline expectation of what they can attempt. The backend engineer who builds one interface is more likely to attempt a second, more complex one. The success raises the aspiration. The raised aspiration produces a more ambitious experiment. The more ambitious experiment, if successful, produces a stronger identity signal. The cycle accelerates.
In environments where small wins are abundant, this compounding can produce remarkably rapid identity shifts. Ibarra documented cases where professionals who had been stuck in old identities for years suddenly underwent rapid transformation when the right confluence of circumstances — a new role, a new network, a supportive environment — made small wins easy to accumulate. The acceleration was not linear. It was exponential, each win creating the conditions for the next, each success widening the space of what felt possible.
AI produces this confluence for millions of professionals simultaneously. The tool provides the capability. The immediate feedback provides the confirmation. The visibility of the output provides the social validation. All three conditions for compounding small wins — access, feedback, recognition — are satisfied by the same technology, at the same time, for a population of unprecedented size.
But Ibarra's framework, characteristically, carries the diagnosis alongside the celebration. The same research that documents the power of small wins also documents their most dangerous failure mode: the substitution of wins for development.
Small wins confirm capability. They do not, by themselves, produce identity. The distinction is subtle but consequential. A person who accumulates a hundred small wins in a hundred different domains has demonstrated that she can operate across those domains. She has not developed the depth of engagement in any single domain that would constitute a settled professional identity. Each win is genuine. The aggregate is hollow — not because the individual wins lack substance but because the pattern lacks coherence. The wins do not build on each other in a way that produces a recognizable professional trajectory. They accumulate without compounding.
Ibarra observed this pattern in professionals she describes as "serial explorers" — people who moved from one identity experiment to the next with genuine enthusiasm and impressive accomplishments but who, years into the process, had not developed a stable working identity. Each experiment was productive. Each produced real output. But the experiments did not converge on an identity because the person was not selecting among possible selves. She was collecting them.
The AI environment is structured to reward collection over selection. The tool makes it easy to try. It makes it easy to succeed. It makes it easy to move on to the next experiment before the current one has been fully processed. The feedback loop — attempt, succeed, attempt again — operates at a speed that the reflective infrastructure cannot match. Each small win generates momentum, and the momentum carries the person forward before she has decided whether forward is the right direction.
Ibarra's research on the integration of small wins identifies a critical mediating process: narration. Small wins become identity-building only when they are woven into a story — a coherent account of who the person is becoming and how the current experiment connects to the ones that preceded it. The management consultant who runs a successful nonprofit workshop integrates the win when she can say, not just "I did this" but "This is part of a pattern — I have always been drawn to strategic work, and the nonprofit sector gives me a way to apply that instinct in a context that aligns with my values." The narration connects the win to a trajectory. The trajectory gives the win its identity meaning.
Without narration, wins accumulate without integration. The person can list everything she has done. She cannot explain why it matters, or what it reveals about who she is becoming, or how the current direction differs from the ones she has abandoned. The list is impressive. The story is missing.
AI complicates the narration process in a way that is diagnostically important. The tool can help build the narrative. Claude can assist in articulating a career story, connecting disparate experiences into a coherent arc, identifying patterns that the person herself has not noticed. This assistance is genuinely valuable — many professionals struggle to narrate their own development, and the tool's pattern-recognition capabilities can surface connections that would otherwise remain invisible.
But the narrative that AI helps construct is only as authentic as the self-knowledge that informs it. A compelling career story that does not reflect genuine identity development is not a foundation for a working identity. It is a performance. Ibarra's research on identity narratives distinguishes between stories that reflect genuine integration and stories that are constructed to appear coherent without the underlying developmental work. The first kind withstands pressure — the person can elaborate, can answer challenging questions, can adapt the narrative to new audiences without losing its essential truth. The second kind is brittle — it sounds good in a prepared setting but cracks when probed, because the narrative is draped over the experiences rather than growing from them.
The abundance of AI-generated small wins demands a corresponding investment in the slow, unglamorous work of making meaning from them. Not every win needs to be narrated. Not every experiment needs to be processed to exhaustion. But the ratio of winning to meaning-making must not become so lopsided that the wins outpace the person's capacity to understand what they signify. The developer who ships ten products in ten weeks has produced impressive output. Whether she has developed a working identity depends on whether she can explain — to herself, first, and to others, eventually — why she built what she built, what the pattern reveals about what she cares about, and where the trajectory is heading.
Ibarra's framework does not discourage winning. It insists that winning without meaning is activity, not development. The distinction matters more now than it ever has, because the tool has made winning so easy that the absence of meaning is no longer a theoretical concern. It is the lived experience of a growing population of professionals who are producing more than they have ever produced and understanding less about themselves than they did before they started.
Dorothy Leonard-Barton published a paper in 1992 that identified a phenomenon she called "core rigidities" — the tendency of an organization's greatest strengths to become, over time, its most intractable weaknesses. The capabilities that produced competitive advantage in one era become the obstacles to adaptation in the next, precisely because the organization has invested so heavily in them that abandoning them feels like self-destruction rather than evolution.
Ibarra recognized the same dynamic operating at the individual level and called it the competency trap. The concept is deceptively simple: the better you are at something, the harder it is to stop doing it, even when stopping would serve you. The trap is not cognitive. You can understand, intellectually, that the world has changed and that your skills must change with it. The trap is emotional. The competence is woven into your identity so tightly that surrendering it feels like surrendering yourself.
The senior engineer in Trivandrum embodied the competency trap with a precision that Ibarra's framework would predict down to the timeline of his oscillation. Two decades of backend expertise. Thousands of hours of the specific, patient, failure-rich practice that deposits understanding layer by layer into the body of knowledge that constitutes architectural intuition. Colleagues who sought his judgment. Junior engineers who modeled their careers on his trajectory. A professional reputation built on the particular kind of depth that only comes from sustained, difficult, unglamorous work in a single domain.
Then, in the space of a training week, a tool demonstrated that it could produce working implementations of the kind of problems he had spent his career solving. Not all problems. Not the most architecturally complex ones. But enough of the daily work — the dependency management, the configuration, the connective tissue — that the proportion of his working life devoted to the craft of implementation dropped from eighty percent to something dramatically lower.
His oscillation between excitement and terror was not indecision. It was the competency trap in real time — the simultaneous recognition that the tool was extraordinary (excitement) and that its extraordinariness threatened the foundation of his professional identity (terror). The excitement was about capability. The terror was about self.
Ibarra's research on the competency trap reveals a counterintuitive dynamic: the professionals who resist AI adoption most strongly are often the ones who would benefit from it most. The senior engineer's architectural judgment — his ability to evaluate whether a system design would hold under load, whether a proposed solution would create downstream problems, whether an elegant approach was also a fragile one — was the kind of expertise that becomes more valuable, not less, when implementation is automated. The judgment layer that The Orange Pill describes as the "remaining twenty percent" was, in objective terms, the most strategically important part of his contribution to the organization.
But the engineer could not see his own judgment as valuable in isolation from the implementation practice through which it was expressed. The judgment had always been embedded in the doing. He exercised architectural intuition while writing code, while debugging, while navigating the specific resistances of a system that did not behave as expected. The judgment and the practice were fused. Separating them felt not like an elevation but like an amputation.
This fusion of expertise with the practice through which it is expressed is the mechanism of the competency trap. The framework knitters of Nottinghamshire, whom The Orange Pill examines through the lens of economic and technological history, can be re-examined through Ibarra's identity framework with additional diagnostic precision. The knitters' expertise was not abstract knowledge about textiles. It was embodied knowledge — understanding that lived in their hands, in the specific muscular intelligence of manipulating thread under tension, in the tactile feedback that told them when the gauge was right without looking. The expertise and the embodiment were inseparable. When the power loom eliminated the need for the embodiment, the expertise did not transfer cleanly to a new context, because the expertise had never existed outside the practice that generated it.
The same structural dynamic applies to the senior engineer, translated into cognitive rather than manual terms. His architectural intuition was not a separable module that could be detached from the practice of implementation and applied independently to AI-generated output. It was generated by the practice of implementation — by the thousands of hours of encountering unexpected behavior, of debugging failures that taught him how systems actually work as opposed to how they are supposed to work, of developing the specific kind of understanding that only comes from having your hands in the machinery.
Ibarra would ask: Can this intuition survive the loss of the practice that produced it? The answer is not obvious. Ibarra's research on career changers who left one domain for another suggests that expertise does transfer, but not automatically and not completely. The transferable component is what she calls the "deep structure" of the expertise — the pattern-recognition capacities, the evaluative frameworks, the ability to distinguish signal from noise in complex information environments. The non-transferable component is the domain-specific fluency — the particular vocabulary, the specific reference cases, the embodied familiarity with the tools and materials of the craft.
The transition requires the expert to identify which components of his expertise belong to the deep structure and which belong to the domain-specific fluency — and then to invest in developing new domain-specific fluency while trusting that the deep structure will transfer. This is an act of faith as much as an act of skill, because the expert cannot know in advance whether the transfer will work. He can only discover it by trying, and the trying requires him to occupy a space of incompetence that his entire career has been organized to avoid.
Here Ibarra's framework intersects with a deeper observation about what expertise does to the capacity for learning. Experts learn differently from novices. Specifically, experts often learn new domains less effectively than novices do, because their expertise generates expectations that interfere with the intake of genuinely new information. The senior engineer approaching a new tool does not approach it with the fresh, unstructured attention of a beginner. He approaches it with twenty years of assumptions about how systems work, what matters, where the risks lie, and what constitutes quality. Some of these assumptions transfer productively. Others are invisible constraints that prevent him from seeing what the new tool actually does as opposed to what his mental model predicts it should do.
This is why the junior developer in the Trivandrum training, the one who shipped in a weekend what the senior colleague quoted six months for, experienced the tool so differently from the senior engineer. She had fewer assumptions to override. Her fishbowl, to use The Orange Pill's metaphor, was smaller and more recently constructed, which meant the cracks were easier to see through and she was less habituated to the water. The senior engineer's fishbowl was reinforced by twenty years of successful practice. The glass was thick, the water warm, and the cracks, when they appeared, were more threatening because the structure they compromised was larger.
Ibarra's prescription for the competency trap is not to dismantle the old identity but to build alongside it. Her research shows that the most successful transitions from deep expertise to new domains are not abrupt substitutions — expert identity one day, beginner identity the next — but gradual expansions. The expert adds a new identity alongside the existing one. She becomes an expert who is also a beginner. The dual identity is uncomfortable. It is also transitional — a bridge between the old self and the new.
The practical form this takes is the identity experiment conducted from a position of acknowledged expertise. The senior engineer does not pretend to be a beginner. He does not abandon his identity as an architect. He adds, tentatively and experimentally, the identity of someone who directs AI. He uses his architectural judgment to evaluate the AI's output. He discovers, through the evaluation, where his judgment adds value and where it merely adds friction. He learns, gradually and through practice rather than instruction, which components of his expertise transfer to the new context and which are artifacts of a practice that is no longer necessary.
The process is slower than the organization might wish. It is slower than the technology would permit. It requires patience from the individual, from the team, from the leadership that is managing the transition. But Ibarra's research is unequivocal on this point: the speed of identity transition cannot be set by the speed of technological change. It is set by the human capacity for self-revision, and that capacity, while real and substantial, operates on its own timeline.
The organizations that succeed in navigating the AI transition will be the ones that recognize the competency trap for what it is — not a skills deficit but an identity crisis — and that create the conditions under which experts can add new identities without abandoning the ones that gave their careers meaning. The conditions include psychological safety (permission to be incompetent without penalty), temporal tolerance (acceptance that the transition takes as long as it takes), and visible modeling (leaders who demonstrate their own willingness to be beginners, who use AI tools publicly and share their struggles openly, who show that the discomfort of the transition is universal rather than a personal failing).
Ibarra's competency trap explains what The Orange Pill's Luddite analysis describes. The framework knitters did not lack intelligence. The senior engineers do not lack adaptability. What they lack — what the trap constrains — is the permission, internal and external, to be temporarily less than what they have spent their lives becoming. The trap is sprung not by inability but by investment. The more you have put into being who you are, the more the prospect of being someone else costs you. And the cost is not measured in skills or salary. It is measured in self.
Every professional operates inside a set of assumptions so familiar they have become invisible. These assumptions define not just what the person knows but what the person considers worth knowing, not just what she can do but what she considers worth doing. They are the water she swims in. They constitute, in the terminology of The Orange Pill, a fishbowl — the enclosure of perception that shapes everything she sees without itself being seen.
Ibarra's contribution is to recognize that the fishbowl is not merely cognitive. It is identitarian. The assumptions are not just beliefs about the world. They are beliefs about the self — about who you are, what you are capable of, and what kinds of professional activities are consistent with your identity. The backend engineer's fishbowl is not just a set of technical assumptions about how systems work. It is a set of identity assumptions about who she is: a systems person, not a design person. A builder of infrastructure, not a builder of interfaces. Someone who works in the back of the house, not the front.
These identity assumptions are maintained by what psychologists call self-verification processes — the tendency to seek out experiences that confirm your existing self-concept and to avoid or reinterpret experiences that challenge it. The backend engineer does not merely happen to avoid frontend work. She avoids it because it is inconsistent with her identity, and engaging with it would produce the uncomfortable sensation of being in the wrong place, doing the wrong thing, being the wrong person. The avoidance feels like preference. It is actually identity maintenance.
The fishbowl cracks when something makes the identity assumptions visible — when an experience so clearly contradicts the self-concept that the self-concept must either expand or retreat. The backend engineer who, with Claude Code, builds a complete user-facing feature has cracked her fishbowl. The assumption that she is not a frontend person has collided with the evidence that she is, or at least can be. The crack does not destroy the fishbowl. It introduces a fissure through which something outside becomes visible — a possible self that was previously obscured by the glass.
Ibarra calls the identity that emerges through the crack a provisional identity — a temporary self, tried on for fit, inhabited experimentally rather than permanently. The provisional identity is not a commitment. It is a hypothesis. The backend engineer who builds an interface is not declaring herself a frontend developer. She is testing the proposition: What if I were someone who builds across the stack? What would that feel like? What would that require of me? Would I recognize myself in that role?
The provisional nature of the identity is essential. Ibarra's research shows that the most successful career changers hold multiple provisional identities in play simultaneously — testing several possible selves without prematurely committing to any single one. The premature commitment collapses the experimental process. The person who declares "I am now a full-stack developer" after building one feature has closed off the exploratory space that Ibarra's framework identifies as essential to genuine identity development. She has chosen before the evidence is sufficient. The commitment feels decisive. It is actually defensive — a way of ending the discomfort of uncertainty by imposing a premature resolution.
The tolerance for multiplicity — the ability to hold several provisional identities in play at once, without demanding that one of them win — is the signature capacity of successful career changers in Ibarra's studies. It is also the capacity most severely tested by the AI environment, for reasons that connect to everything the previous chapters have established about speed, small wins, and the competency trap.
AI makes it extraordinarily easy to crack the fishbowl. The backend engineer does not need to study frontend development for months before discovering that her assumptions about herself were too narrow. She can discover it in an afternoon, through the immediate, concrete experience of building something she assumed she could not build. The crack happens fast. The question is what happens after.
In Ibarra's framework, two things need to happen after the crack. First, the person needs to explore the crack — to look through it, to engage with what she sees, to try on the provisional identity that the crack has made visible. This is the identity experiment, and AI accelerates it to a speed that is genuinely enabling. The person can explore faster, produce evidence faster, test the viability of the provisional self faster than at any previous point in professional history.
Second — and this is where the framework becomes most demanding — the person needs to resist the urge to seal the crack. The fishbowl's glass is resilient. It repairs itself. The self-verification processes that maintained the old identity do not disappear because a single experience contradicted them. They push back, reinterpreting the evidence, minimizing the significance of the new experience, generating reasons why the provisional identity is not really viable. The engineer who built an interface tells herself it was a fluke, that real frontend work is harder, that she got lucky, that the AI did the real work. The glass repairs itself. The crack closes. The fishbowl is restored, and the possible self on the other side becomes invisible again.
Ibarra's research identifies the factors that determine whether a crack remains open long enough for genuine exploration. The most important is repetition — the person returns to the provisional identity multiple times, in different contexts, with different levels of challenge. Each return produces new data, and the accumulation of data makes the self-verification processes increasingly difficult to maintain. One interface built is a fluke. Five interfaces built, in different contexts, with increasing complexity, is a pattern. The pattern overwhelms the fishbowl's defenses. The glass stays cracked.
The second factor is social validation — other people who see the person in the provisional identity and respond to it as real. The colleague who says "I didn't know you could build interfaces" is providing identity data that the person cannot generate alone. The recognition from others is powerful because identity is, as Ibarra insists throughout her work, a social construction. It exists not only in the person's self-concept but in the expectations and perceptions of the people around her. When the community begins to see her as someone who builds across the stack, the provisional identity gains weight. It becomes harder to dismiss as a fluke when others are treating it as a fact.
AI's role in this process is complicated. On one hand, the tool enables the repetition that Ibarra identifies as essential — the person can return to the provisional identity easily and frequently, producing the pattern of evidence that overwhelms the fishbowl's defenses. On the other hand, the tool does not provide the social validation that Ibarra identifies as equally essential. Claude does not raise an eyebrow when the backend engineer starts building interfaces. It does not say "I didn't know you could do that." It responds with the same equanimity regardless of whether the user is operating within her established identity or outside it. The frictionless acceptance is enabling — it removes the social risk that often prevents experimentation — but it is also psychologically insufficient. The provisional identity needs to be seen by others, not just by the tool, before it can take hold.
This is where Ibarra's insistence on the relational nature of identity becomes practically urgent. The engineer who experiments with AI in isolation — who builds interfaces alone, without sharing the work or the experience with colleagues — may crack the fishbowl repeatedly without the crack persisting. The experiments produce data. The data is not socially validated. The fishbowl repairs itself between sessions. The person oscillates between the provisional identity (during AI-assisted work sessions) and the old identity (during everything else), without the oscillation converging on a new equilibrium.
The convergence requires human relationships. It requires the colleague who notices the new work and responds to it. The manager who adjusts expectations and assignments based on the expanded capability. The team that begins to rely on the person in the new capacity, creating social expectations that reinforce the provisional identity. The shift from "the backend engineer who occasionally builds interfaces" to "the person who builds across the stack" happens not when the person decides it but when the community ratifies it — when the provisional identity is recognized by others as a real and reliable feature of the person's professional self.
Ibarra's research on career changers who ultimately failed to complete their transitions — who returned to the old identity after months or years of experimentation — reveals that the most common point of failure was not the absence of successful experiments but the absence of a community that validated the new identity. The person could do the new work. The surrounding network continued to see her as the old person. The gap between internal experience and external recognition became unsustainable, and the path of least resistance was to return to the identity that the community already recognized.
The implications for organizations managing the AI transition are direct and actionable. The experiments will happen. The tools make them inevitable. People will crack their fishbowls. They will discover capabilities they did not know they had. They will try on provisional identities that expand their sense of what they can contribute. The question is whether the organization will create the conditions under which these provisional identities can take hold — the recognition, the adjusted expectations, the new assignments that reinforce the emerging self — or whether the organizational structure will continue to reinforce the old identities, pushing people back into the fishbowls they have already outgrown.
Ibarra's most successful career changers were not the most talented or the most courageous. They were the ones who found environments — organizations, communities, networks — that could see them as they were becoming rather than as they had been. The environments did not create the new identity. They allowed it to survive.
In the AI age, the fishbowls are cracking everywhere. The question is no longer whether people will see through the glass. They already are. The question is whether anyone will be on the other side, looking back, ready to recognize what they see.
A traveler arrives in Tokyo for a week. She eats ramen at a counter in Shinjuku, rides the Yamanote Line during rush hour, visits Meiji Shrine at dawn when the gravel paths are still damp. She takes photographs. She has conversations, mediated by translation apps, with shopkeepers and taxi drivers. She leaves with vivid impressions, strong opinions, and a genuine affection for the city. She has experienced Tokyo.
She has not understood it. Understanding requires the weeks after the novelty fades — when the transit system becomes routine rather than wondrous, when the language barrier becomes frustrating rather than charming, when the cultural differences stop being interesting and start being inconvenient. Understanding requires staying long enough for the city to stop performing for you and start being itself, which is also when you stop performing for the city and start being yourself within it. The tourist experiences the highlight reel. The resident encounters the full text.
Ibarra's framework draws an analogous distinction in professional identity that the AI age has made urgently consequential. The distinction is between identity tourism — the experience of visiting a possible self — and identity development — the process of becoming one. Tourism produces vivid impressions. Development produces durable change. Tourism is exciting because everything is new. Development is difficult because the newness must be sustained past the point where it stops being exciting and starts being merely real.
AI has made identity tourism extraordinarily easy, and this ease is both its greatest contribution to professional possibility and its most subtle threat to professional depth.
Consider the non-technical founder who prototypes a product over a weekend with Claude Code. The product works. It has a functional interface, backend logic, the skeletal architecture of something that could serve real users. The founder has visited the identity of a builder. She has experienced the specific satisfaction of seeing an idea become an artifact. The experience is genuine. The satisfaction is real. The prototype exists.
But the identity of builder has not been developed. Development would require returning to the prototype on Monday, and on Tuesday, and on the following weekend. It would require encountering the inevitable problems that a weekend prototype contains — the edge cases that break the logic, the design decisions that seemed right at speed but reveal their insufficiency under use, the architectural choices that constrain future expansion. Development would require the willingness to be frustrated by the thing you built, to discover that building is not a single exhilarating act but an ongoing negotiation between intention and reality, between what you imagined and what actually works.
The tourist leaves before the frustration arrives. The developer stays through it. And it is the staying, not the arriving, that produces identity.
Ibarra's research on career changers identifies a pattern she describes as the "honeymoon-hangover" cycle. The initial foray into a new professional identity is almost always positive — the novelty generates energy, the early wins generate confidence, the contrast with the old identity generates relief. This is the honeymoon phase. It feels like the transition is complete. The person has found the new self. The search is over.
The hangover follows. The new role becomes routine. The easy problems are solved, leaving the hard ones. The people in the new field turn out to be as frustrating as the people in the old one, just in different ways. The identity that felt like liberation begins to feel like another set of constraints. And the person faces a choice: push through the hangover into genuine development, or retreat — either back to the old identity or forward to the next honeymoon with the next possible self.
Ibarra's studies show that the people who push through the hangover develop durable identities. The people who retreat develop a pattern of retreat — a habit of visiting possible selves without the commitment that turns visits into residences. And the habit, once established, is self-reinforcing. Each abandoned experiment makes the next abandonment easier, because the person's tolerance for the discomfort of genuine development atrophies with disuse.
AI intensifies the honeymoon-hangover cycle by making the honeymoon extraordinarily vivid and the hangover easy to escape. When the next possible self is always a conversation away — when the backend engineer can try being a frontend developer today, a data scientist tomorrow, a product strategist the day after — the incentive to push through any single hangover is diminished. Why tolerate the frustration of deepening one identity when the exhilaration of sampling another is so readily available?
The result is what Ibarra's framework would identify as a new professional archetype — the serial tourist. The serial tourist has an impressive portfolio of weekend prototypes, hackathon projects, cross-domain experiments. Each one demonstrates capability. None demonstrates commitment. The resume is broad and the identity is thin.
Ibarra's diagnostic criteria for distinguishing tourism from development are specific and empirically grounded. Three markers separate the tourist from the developer.
The first marker is sustained engagement. Development requires returning to the same identity repeatedly, across different contexts and levels of challenge. A single successful experiment is a data point. A pattern of engagement — the same possible self tested under easy conditions and hard ones, in supportive contexts and challenging ones, when the work is exciting and when it is tedious — is evidence of development. The person who builds interfaces once has visited the identity. The person who builds interfaces for six months, who has struggled with responsive design and accessibility standards and the particular frustration of cross-browser compatibility, has begun to develop it. The struggle is not an obstacle to development. It is the development.
The second marker is integration into narrative. The tourist describes the experiment as an isolated event — something she did. The developer integrates the experiment into a story about who she is becoming — something that reveals a trajectory, a direction, a pattern of choices that connects to what came before and points toward what comes next. The narrative test is simple: Can the person explain why this experiment matters, not just what it produced? Can she connect it to the experiments that preceded it? Can she articulate what the experiment taught her about herself, not just about the domain?
The third marker is network engagement. Development involves sharing the provisional identity with others and allowing their responses to shape its evolution. The tourist experiments in private. The developer experiments in public — not for validation but for the specific developmental benefit of having others respond to the provisional self, challenge it, refine it, and create the social expectations that reinforce it. The backend engineer who builds interfaces and shows the work to colleagues, who invites critique, who allows the team to begin relying on her in the new capacity, is developing the identity. The engineer who builds interfaces alone and tells no one is touring.
These markers suggest a practical framework for anyone navigating the AI transition. The question to ask after an identity experiment is not "Did it work?" — AI will ensure that most experiments produce functional output — but "Am I willing to do this again when it is not exciting? Can I explain why this matters to who I am becoming? Have I shared this with someone whose response will challenge me?"
If the answers are no, the experiment was tourism. Valuable, perhaps, as reconnaissance — a scouting expedition that maps the terrain of a possible self. But not development. Not yet. Development begins when the tourist decides to stay.
The distinction is not a judgment. Tourism has genuine value. It expands the map of possible selves. It generates data about what attracts and what repels. It provides the raw material from which a person can eventually choose which identities to develop. The problem is not tourism itself but tourism mistaken for development — the belief that having visited a self is the same as having become one.
AI amplifies this confusion because it makes the visit so convincing. The output is real. The product works. The interface functions. The analysis is sound. By every external measure, the experiment succeeded. The only measure by which it may not have succeeded — the internal measure of whether the person has actually changed — is invisible to the tool, invisible to the metrics, and often invisible to the person herself. She has evidence of what she can do. She does not yet have evidence of who she is.
Ibarra's framework suggests that the AI age will produce an unprecedented abundance of possible selves and an unprecedented scarcity of developed ones — unless the structures that support development are deliberately constructed. Those structures are not technological. They are human: relationships that challenge, communities that validate, narratives that connect, and the willingness, which no tool can supply, to stay in a place past the point where staying requires effort.
The tourists will be prolific. The developers will be rare. And the rarity will be determined not by talent or access but by the willingness to remain in the same identity long enough for it to become real — to push through the hangover, to return after the novelty fades, to discover whether the provisional self can sustain the weight of a life built upon it.
The Orange Pill poses a question that cuts deeper than any productivity metric or adoption curve: "Are you worth amplifying?" The question is uncomfortable because it is personal. It does not ask about the tool. It asks about the person using it. And it implies that the answer might, for some people in some conditions, be no — not because those people lack talent but because they have not yet done the work of becoming someone whose signal is worth carrying further.
Ibarra's framework translates this provocation into the language of identity research with a precision that makes the question both more answerable and more demanding. The question is not whether you possess the right skills — AI can augment any skill set. The question is whether you have developed a working identity that produces a signal worth amplifying. And a working identity, in Ibarra's framework, is not a possession. It is a practice.
The distinction between identity as possession and identity as practice is the foundation on which this final chapter rests. The possession model says: figure out who you are, acquire the appropriate skills and credentials, and then present that identity to the world. The practice model says: identity is never finished. It is an ongoing process of experimentation, integration, and revision, conducted across the entire span of a career and never arriving at a final state. The possession model produces professionals who are brittle — well-defined but unable to adapt when the definition no longer maps onto reality. The practice model produces professionals who are resilient — less precisely defined at any given moment but capable of absorbing change and incorporating it into an evolving sense of self.
In a world where the ground shifts continuously — where the capabilities that defined value last year may not define value next year, where the boundaries between domains dissolve and reform at speeds that no credential can track — the practice model is not merely preferable. It is the only model that works.
Ibarra's research provides the components of this practice, and the preceding chapters have examined each one. What remains is to assemble them into something a person can use — not a plan (the plan-then-act model has been thoroughly discredited) but a set of practices that, maintained over time, produce the kind of working identity that can hold up under the amplifier's scrutiny.
The first practice is deliberate experimentation. Not the accidental experimentation that happens when a tool makes it easy to try something new, but the intentional decision to test a possible self that challenges the current identity. The backend engineer who experiments with frontend work because Claude makes it easy is engaged in accidental experimentation — the tool created the opportunity, and she followed it. The backend engineer who deliberately seeks out a project that requires her to lead a product strategy session — a domain she has never entered and that her current identity does not include — is engaged in deliberate experimentation. The difference matters because deliberate experiments are chosen for their developmental potential, not their ease. They push the identity in a direction that the person suspects might be important but has not yet tested.
Ibarra's research on successful career changers shows that the most developmentally valuable experiments are the ones that produce the most discomfort — not crushing difficulty, but the specific, manageable discomfort of operating outside the familiar identity. The experiments that feel easy are not experiments. They are confirmations. The identity already contains them. Genuine experimentation requires the willingness to be bad at something, to produce work that does not meet the standards of the old identity, to tolerate the gap between what you know you can do and what you are trying to do.
AI complicates this practice because it reduces the discomfort of experimentation. The backend engineer building interfaces with Claude is not experiencing the full discomfort of learning frontend development — the months of struggle, the incomprehensible error messages, the slow accumulation of competence through failure. She is experiencing a mediated version of the experiment, buffered by the tool's capabilities. The question for identity development is whether the mediated experiment generates enough discomfort to be developmental, or whether the tool's buffer reduces the experiment to a tour.
The answer, Ibarra's framework suggests, depends on the person's intention. The same tool can support either tourism or development, depending on what the person is seeking from the experiment. If she is seeking a result — a working interface, a shipped product — the tool will provide it efficiently, and the experiment may not require enough of her to produce identity change. If she is seeking understanding — why the interface works, how the design decisions affect the user experience, what the architectural tradeoffs mean for future development — the tool becomes a scaffold for deeper engagement rather than a substitute for it. The intention determines whether the experiment deposits layers of identity or skims across the surface.
The second practice is reflective integration — the discipline of pausing, after experiments, to evaluate what they revealed about the evolving self. This is not the reflection-before-action that Ibarra's framework discredits. It is reflection-after-action, the processing of experiential data that only action can generate. The question is not "Who am I?" in the abstract. It is "What did I learn about myself from doing that?" — a question that can only be asked after the doing.
The practice requires structured time. Not a great deal of it — Ibarra's research suggests that even brief periods of genuine reflection, conducted regularly, produce significant developmental benefits. But the time must be protected from the pull of the next experiment, the next prompt, the next small win. The AI ecosystem, as the Berkeley researchers documented, tends to colonize every available pause with productive activity. The reflective practice must be deliberately carved out of that colonization — a protected space in which the question "What did I learn about who I am becoming?" takes precedence over the question "What should I build next?"
The third practice is network cultivation — the deliberate construction of relationships that support identity development. Ibarra's research, confirmed and extended by decades of subsequent work on social identity, shows that the people around you are not passive observers of your identity transition. They are active participants, providing the validation, challenge, and modeling that shape the emerging self. The engineer who experiments with AI in isolation may crack the fishbowl. The engineer who experiments within a community of fellow experimenters — who shares work, invites critique, observes others navigating similar transitions, and allows the community's expectations to reinforce the provisional identity — develops the crack into a permanent opening.
The community need not be large. Ibarra's case studies show that as few as three or four relationships characterized by mutual vulnerability and honest feedback can provide sufficient identity infrastructure for a transition. But the relationships must be intentional. They must involve people who are themselves in transition, or who have recently completed one, and who can therefore model what the new identity looks like in practice. The colleague who has never experimented with AI cannot provide identity support for the person who is experimenting, because she cannot model what the new self looks like from the inside.
Ibarra's 2025 Harvard Business Review article, written with Michael Jacobides, extends this argument to the organizational level. Leaders who model personal experimentation with AI — who use the tools visibly, who share their struggles and uncertainties, who demonstrate through their own behavior that not knowing is a legitimate and temporary condition — create identity workspaces for their entire organization. The leader's visible experimentation gives permission. The permission creates safety. The safety enables the experiments that produce the provisional identities from which new working selves are eventually constructed.
The fourth practice is narrative maintenance — the ongoing construction of a story that connects the experiments, the reflections, and the emerging self into a coherent account of who you are becoming. The narrative is not a retrospective summary. It is a real-time construction, updated with each experiment, revised with each reflection, refined with each conversation in which the evolving self is shared with others.
Ibarra's research on identity narratives identifies a specific quality that distinguishes the narratives of successful transitioners from those of unsuccessful ones. The successful narratives are not smooth. They contain gaps, reversals, dead ends, and uncertainties. But they contain a through-line — a thread of continuity that connects the old self to the new, that shows how the person's core concerns (the things she cares about, the problems she finds compelling, the values she cannot compromise) are expressed differently in the new identity but not abandoned. The narrative says: I am changing how I work, but I am not changing why I work. The continuity of purpose provides the stability that the change of practice threatens to dissolve.
The senior engineer in Trivandrum, if he is to navigate the transition that AI demands, must construct this narrative. The narrative would go something like this: I have always cared about systems that work reliably under pressure. I spent twenty years building that reliability with my hands. Now I build it with my judgment — evaluating AI output, designing architectures that no tool can conceive, ensuring that the systems my team deploys will hold when the load arrives. The tools have changed. The concern has not.
This narrative is not a rationalization. It is an act of integration — the deliberate connection of a new practice to an enduring identity. It preserves the continuity of self that makes the transition bearable while accommodating the change in practice that makes the transition necessary.
Ibarra's framework, assembled across these four practices — experimentation, reflection, network cultivation, and narrative maintenance — does not describe a destination. It describes a way of traveling. The working identity worthy of amplification is not the one that has arrived at a fixed, final, polished professional self. It is the one that has internalized the practice of ongoing construction — the capacity to experiment, to integrate, to connect, to narrate, not as a one-time project but as the continuous, evolving, never-quite-finished work of a professional life lived in a world that will not stop changing.
The Orange Pill argues that AI amplifies whatever signal you bring to it. Ibarra's framework specifies what that signal must contain to be worth amplifying: not a fixed identity but a living one. Not a plan but a practice. Not the confidence of someone who knows who she is but the resilience of someone who has learned to keep discovering who she is becoming — and who has built the relationships, the reflective habits, and the narrative coherence to ensure that the discovery produces growth rather than drift.
The craft of working identity has always been the real work of a career. Most professionals performed it unconsciously, through the accumulated effects of experience and relationship and the slow friction of working in a single domain for long enough that the domain shaped them in return. AI has removed the slow friction. What remains is the work itself — stripped of its camouflage, visible for the first time as the essential practice it always was.
The amplifier does not care who you were. It cares who you are — right now, in the moment you sit down to use it. The identity you bring to the conversation determines the conversation's value. And the identity is not a given. It is a craft, practiced daily, maintained through the specific disciplines of experimentation and reflection and relationship and story.
The question is not whether you are worth amplifying. The question is whether you are willing to do the work that would make you so — not once, but continuously, in the specific, uncomfortable, endlessly generative practice of becoming.
The professional identity I wore for most of my career was stitched together in a particular sequence. First, I learned Assembler — the language closest to the machine's own thinking, where every instruction maps to a physical operation on a chip. That knowledge became my foundation, and for decades I believed the foundation was what made me valuable. When I ran teams, when I evaluated architectures, when I made the call about what to build and what to kill, I told myself the authority came from having started at the bottom of the stack and worked my way up. The identity was: I am someone who understands the machine from the inside out.
Ibarra's research suggests that the identity was real but the story I told about it was incomplete. The authority did not come from the Assembler. It came from the accumulated practice of making decisions under uncertainty — from the thousands of moments where I had to choose between approaches, evaluate tradeoffs, bet on one direction over another with imperfect information. The Assembler was the context in which those decisions were first made. It was not the decisions themselves. And the decisions — the judgment, the taste, the willingness to be wrong and learn from it — those transfer. They transfer to new tools, new domains, new identities. They are the deep structure that Ibarra's framework identifies as the transferable core of expertise.
I did not understand this distinction until I was living inside the transition this book describes. The week in Trivandrum, the thirty days before CES, the months of building Napster Station with Claude at my side — these were not just productivity stories. They were identity experiments. Each one tested a possible self I had not previously inhabited: not the builder who codes, but the builder who directs intelligence. Not the person who understands the machine from the inside out, but the person who understands what the machine should be pointed at.
The shift sounds subtle. It was not. It required me to surrender the part of my professional identity that felt most fundamental — the hands-on, in-the-code, I-know-how-this-works-because-I-built-it identity that had defined me since my teens. And the surrender was not a single moment of clarity. It was the messy, recursive, emotionally complicated process that Ibarra's research describes with such uncomfortable precision. Some days I felt like the integrator, the creative director, the person whose judgment was more valuable now that AI handled the implementation. Other days I felt like a fraud — someone who had been promoted past his competence by a tool that flattered his ideas into existence without requiring him to earn them.
Both feelings were data. That is the thing Ibarra's framework taught me that I could not have learned from any other thinker in this series. The feelings are not distractions from the transition. They are the transition. The discomfort is not a sign that something is going wrong. It is a sign that something is changing, and the something that is changing is you.
I think about the parents at my dinner table, the ones who ask what to tell their children. Ibarra's framework gives me something more honest to say than what I used to offer. I used to say: teach them to ask good questions. I still believe that. But now I would add: teach them that who they are is not a fixed thing to be discovered but an evolving thing to be crafted. Teach them that the discomfort of not knowing who they are becoming is not weakness but the specific sensation of growth. Teach them that the career they will have cannot be planned, only experimented into existence — one provisional identity at a time, tested against reality, revised, tested again.
The amplifier does not care about your resume. It does not care about your credentials or your years of experience or the identity you constructed in a world that no longer exists. It cares about what you bring to the conversation right now — the judgment, the curiosity, the willingness to experiment, the capacity to integrate what you learn into something coherent enough to build on.
The identity worthy of amplification is not the one that arrived. It is the one that is still arriving — still experimenting, still reflecting, still under construction. The craft is never finished. That used to frighten me. Now it feels like the most honest description of a life worth living.
— Edo Segal
The AI revolution is being treated as a retraining problem. Learn the new tools, update the workflow, adapt the methods. Herminia Ibarra's three decades of research on career transitions reveal a deeper crisis hiding beneath the skills gap: an identity crisis. When the expertise you spent years building can be approximated by a machine in minutes, the disruption is not to your competence — it is to your sense of self.
This book applies Ibarra's framework to the most consequential professional transition of our time. Through the lens of possible selves, provisional identities, and the competency trap, it maps the territory between who you were before AI and who you might become — territory that cannot be crossed by planning, only by experiment.
The organizations retraining their people are solving the wrong problem. The real problem is that millions of professionals are standing between two identities, and no tool can make the crossing for them. This book is for anyone navigating that threshold.

A reading-companion catalog of the 30 Orange Pill Wiki entries linked from this book — the people, ideas, works, and events that Herminia Ibarra — On AI uses as stepping stones for thinking through the AI revolution.