
How to lead through AI anxiety without losing your people

Before we talk about your team’s relationship with AI, there’s something worth acknowledging about yours.

Most leaders are managing their own AI anxiety the only way that feels safe: by projecting certainty about AI’s promise that they don’t genuinely feel, performing confidence in its benefits while privately wondering what it all means for them. For their expertise. For their identity as a leader.

Research by Mercer in 2025 found something almost no one is talking about: managers and executives are more anxious about AI reshaping their roles than frontline workers are. The most anxious people in the room are the ones expected to lead everyone else through it.

That performance has a cost. McKinsey’s 2025 research found that 76% of executives believe their employees are enthusiastic about AI. Only 31% of employees agree. That 45-point gap is what performed certainty produces: a culture where everyone says the right things, and nobody says what’s real.

This article is an invitation to set that performance down.

You can’t lead others through terrain you haven’t acknowledged in yourself. And your team is navigating terrain that deserves more than confidence. It deserves honesty.

Research briefing
The AI anxiety landscape

79% of employees have significant AI-related anxiety (cross-national study, cited by HBR).
52% of workers feel worried about AI in their workplace (Pew Research, 2025).
27% of UK employees fear losing their job to AI within five years (Guardian / Randstad, 2026).
44% fear AI will make them less skilled — not just unemployed (HBR-cited study).

The single most common emotion workers feel about AI isn’t excitement. It’s worry.

 

What AI is actually taking from people

The standard conversation about AI at work focuses on job security. But that’s not what’s driving the deeper distress.

What AI threatens, for a great many people, isn’t their role. It’s who they are in it: the craft, the expertise, the years of mastery built into a professional identity. That identity feels, suddenly, like it might not be worth what it was.

Research cited by Harvard Business Review put a number on it: 44% of employees fear AI will make them “dumber”, cognitively and professionally diminished rather than merely redundant. A fear not of losing the job, but of losing what made them good at it.

Researchers at the University of Florida have named this pattern AI Replacement Dysfunction (AIRD): anxiety, insomnia, paranoia, and profound occupational identity loss triggered not by actual redundancy, but by the perception of risk. A worker whose role hasn’t changed can still carry the full psychological weight of someone who has been replaced.

That experience doesn’t wait for a redundancy notice. Workers equate the prospect of AI replacement with personal failure, a loss of self-worth that extends well beyond their employment status.

Gallup’s “Great Detachment”, the lowest point in global engagement in a decade, shares the same root. Not apathy. Unacknowledged loss.

We’re calling grief “resistance” and wondering why change stalls

Think about the behaviors you’re seeing in your organisation right now.

Knowledge hoarding: the expert who used to share their process freely, now guarded and reluctant. Performative compliance: the team member who says the right things in meetings but hasn’t changed how they actually work. Withdrawal: the previously engaged person who is present, but somewhere else. Quiet resistance to new tools, even when the business case is clear.

The standard response is to design change programmes to address it. Better communication. Clearer business cases. More training, stronger incentives. And when those don’t land, a quiet reframing of the resistance as an attitude problem.

But what if the diagnosis is wrong?

Those behaviors aren’t resistance. They’re grief. Times Higher Education mapped the Kübler-Ross five stages directly onto what leaders observe in teams navigating AI adoption. The fit is striking.

The AI Grief Curve

Primeast’s model, built on this research, maps what each stage actually looks like in a team, and what a grief-literate leader does at each one.

Primeast Framework
The AI Grief Curve
Built on Kübler-Ross research applied to AI adoption — a diagnostic tool for leaders.

Stage 1: Denial
What it looks like in your team: dismissing AI as overhyped. “It won’t affect us.”
Common leader error: ignore it — “they’ll come round”.
Grief-literate response: name it. Create space for honest conversation.

Stage 2: Anger
What it looks like in your team: knowledge hoarding. Vocal resistance. Blaming leadership.
Common leader error: clamp down, mandate, communicate harder.
Grief-literate response: sit with it. Acknowledge what’s being lost.

Stage 3: Bargaining
What it looks like in your team: selective adoption. “I’ll use it for X but not Y.”
Common leader error: accept partial adoption as success.
Grief-literate response: involve people. Co-create the adoption path.

Stage 4: Depression
What it looks like in your team: withdrawal. Presenteeism. Loss of initiative.
Common leader error: miss it entirely — it looks like disengagement.
Grief-literate response: reframe honestly. Acknowledge stakes without minimising loss.

Stage 5: Acceptance
What it looks like in your team: curiosity about possibilities. Active engagement.
Common leader error: claim the whole team is here when they’re not.
Grief-literate response: model curiosity. Celebrate progress without overstating it.

The crucial insight isn’t the framework itself. It’s what it reveals about misdiagnosis.

Kyndryl research found that nearly half of CEOs say most of their employees are resistant or hostile to AI-driven change. But the top obstacles those same CEOs reported? Lack of change management, low trust, and skills gaps. Those aren’t the obstacles of a resistant workforce. They’re the obstacles of a grieving one.

The cost of misdiagnosis is a wrong intervention at every turn: applying a motivation fix to someone in Depression; running skills training for someone in Anger; reframing adoption as opportunity for someone still in Denial. Dr Brittany Straton, Senior Lecturer in Cyberpsychology at Arden University, states it plainly: “Workers who withdraw, resist new technology, or hide knowledge are not being obstinate. These are stress responses rooted in identity protection.”

That’s not a change management problem. That’s grief.

 

The contract that was already broken

A second dimension explains why the organization’s response so often amplifies the grief rather than containing it.

Every employee carries an unwritten contract with their employer. Not the employment contract, something deeper. An implicit bargain: I bring you my loyalty, my expertise, the years I’ve invested building this craft. In return, you give me security, development, and recognition that my professional identity matters here.

Nobody writes it down. But everyone knows it exists. And everyone knows when it’s been broken.

AI adoption, handled without transparency, without consultation, without honest acknowledgement of what it means for people’s roles, breaks it. Research from Brunel University found that this creates an “alienational” psychological contract: a new category, worse than either relational or transactional, in which engagement and trust collapse because the implicit bargain feels fundamentally violated. Workers no longer feel valued as people with craft and identity. Only as productivity units that can be partially replaced.

JFF’s 2026 national survey found that 56% of workers say their employer has not consulted them on how AI is deployed in their roles. More than half the workforce excluded from a decision that will reshape their professional identity.

That’s not a technology rollout problem. That’s a betrayal problem.

The mechanism isn’t AI itself. It’s the silence around it. As we explored in our article on values alignment, and in our piece on what change fatigue is really telling you, the psychological contract fractures the moment an organization’s actions diverge from its implicit promises, long before anyone resigns. AI adoption, handled without transparency or consultation, is doing exactly that right now.

Trust architecture
How AI adoption breaks the psychological contract

The unwritten deal: the psychological contract employees have always believed in.
I give you: my loyalty, commitment, and expertise built over years.
You give me: security, development, and recognition that my professional identity matters here.

What AI rollouts do to it: 56% of workers have never been consulted on how AI is deployed in their roles (JFF, 2026). Workers no longer feel the organisation values them as people with craft and identity, only as productivity units (Brunel University study, 232 employees).

The result: trust collapse. “Shattered trust and corporate betrayal” was the leading theme in 1,454 first-person accounts (Frontiers in Psychology, 2026). The symptoms: knowledge hoarding, withdrawal and disengagement, performative compliance.

The silence gap

So how silent have leaders actually been?

Mercer’s research found that fewer than 20% of employees had ever heard from their direct manager about AI’s impact on their role. The silence runs from the top down: from HR, from the CEO, from every tier expected to provide clarity.

That silence isn’t neutral. In the absence of honest conversation, fear doesn’t stay still; it fills the vacuum with worst-case assumptions. And when senior leaders do eventually communicate about AI without involving their teams, employee concern actually rises. Top-down announcements without dialogue make the anxiety worse, not better.

Meanwhile, EY’s 2025 Work Reimagined Survey of 15,000 employees found that 88% are already using AI at work, but only 12% are receiving sufficient training to use it effectively, and that proportion is falling.

People are using tools they don’t fully understand, with no conversation about what those tools mean for their roles, from leaders who aren’t talking about it.

That’s the environment your team is navigating. And it’s the environment you have the most direct power to change.

The Grief-Literate Leader: five habits

The leaders who navigate AI anxiety best aren’t those with the most detailed AI strategy or the most confident public position. They’re the ones who have developed what we call grief literacy: the capacity to recognize what their people are genuinely experiencing, meet them where they are, and create the conditions for real adaptation, rather than performed acceptance.

Here’s what that looks like in practice.

1. Name it

The most powerful thing a leader can do in the face of collective anxiety is name it. Not manage it, resolve it, or reframe it; just acknowledge that it exists.

Making AI anxiety discussable is not the byproduct of psychological safety. It’s the precondition for it. Amy Edmondson’s research at Harvard Business School consistently finds that in uncertain times, “the act of naming the challenge puts everyone on the same page.” This doesn’t require having answers. It requires acknowledging that the question exists.

Practical language worth trying: “I don’t have all the answers about how AI will change this team’s work, and I’m not going to pretend I do.” Or: “I want to hear what feels uncertain or threatening to you. There are no wrong answers.” Or: “Some of what AI does well overlaps with things you’ve built real expertise in. That’s worth acknowledging, not glossing over.”

Naming it doesn’t invite catastrophizing. It signals that you’re willing to have the conversation and that the conversation is safe to have.

2. Sit with it

The instinct to reassure is understandable. Applied too early, it’s also counterproductive.

Before anyone can move forward, the loss needs to feel real and acknowledged. Jumping to “but here’s the opportunity” before the grief has been witnessed is the leadership equivalent of saying “at least…” to someone in genuine pain. It closes the conversation rather than opening it.

Boise State research from 2025 found that transformational leaders do successfully buffer AI anxiety by framing it as an opportunity for growth, but only when they simultaneously acknowledge what is genuinely being lost. Honest optimism, not toxic positivity.

What sitting with it looks like: holding space in a team meeting for concerns without immediately pivoting to solutions. Asking “what’s the hardest part of this for you?” and waiting for the real answer, not the managed one.

3. Involve people

Co-creation isn’t a cultural nice-to-have. It’s the most powerful tool available for repairing a breached psychological contract.

When people have genuine say in how AI enters their working lives, the dynamic shifts. They stop being recipients of a decision someone else made and become co-authors of a future they have a stake in succeeding. McKinsey’s change management data shows that companies involving employees in AI transformation, even 21-30% of the workforce, double their chances of positive outcomes. The mechanism isn’t efficiency. It’s ownership.

Three practical starting points: hold “AI listening sessions”, structured conversations whose purpose is to hear, not to transmit. Create “AI design squads” where frontline employees identify use cases before leadership decides. Ask regularly: “What parts of your role do you most want to protect from automation and why?”

When Morgan Stanley deployed its AI assistant, it achieved 98% adoption among wealth management teams by letting trust, not the technology, set the pace. McKinsey’s internal AI platform, Lilli, reached 92% adoption globally through the same principle: invitation, not instruction. Senior leaders asked “have you asked Lilli?” in team meetings. New hires were onboarded into its use from day one. The tool became part of the team’s identity, not something imposed upon it.

Primeast’s work with Kia Motors UK was built on the same logic: equipping managers to involve their teams in navigating constant sector disruption, rather than managing them through it from above. The programme has now run for six consecutive years.

4. Reframe honestly

The standard message, “AI is full of opportunity”, doesn’t land for people in grief. When the loss is still present, a message built around gains can’t reach them. Telling someone who feels professionally diminished that they should feel excited isn’t reframing. It’s dismissal.

Kahneman and Tversky’s prospect theory established that people experience losses as roughly twice as painful as equivalent gains feel rewarding. You can’t message your way past that asymmetry. But you can work with it — by reframing around what’s at stake if we don’t adapt, rather than what we stand to gain if we do.

The honest reframe sounds like this: “We need to get good at this not because it’s exciting, but because the cost of not doing so is falling behind in ways that hurt everyone here.”

That’s not fear-mongering. It’s an honest acknowledgement of stakes, combined with a credible offer of partnership.

5. Model curiosity

One of the most counterproductive things a leader can do is perform certainty about AI they don’t feel. It signals that uncertainty is unsafe to express, precisely the opposite of what’s needed.

HBR’s January 2026 analysis draws a direct parallel to pandemic leadership: the leaders who helped their people most weren’t those with the best answers. They were the ones who modelled how to navigate without them.

What modelling curiosity looks like: sharing openly when an AI tool surprised or failed you. Asking your team “what should I know about how AI is affecting your day-to-day work?” as a genuine question. Saying “I’m learning too” and meaning it.

The role isn’t AI champion or cheerleader. It’s sense-maker: helping your team build a shared understanding of what’s changing, what it means, and what honest leadership actually looks like in the middle of it. That doesn’t require being ahead of your team on AI. It requires being honest about where you actually are.

Primeast Framework
The Grief-Literate Leader
Five habits for leading through AI anxiety

1
Name it
Make AI anxiety discussable. Naming it is the precondition for psychological safety — not the byproduct.
Try: “I don’t have all the answers — and I’m not going to pretend I do.”

2
Sit with it
Before anyone can move forward, the loss needs to feel real and acknowledged. Don’t rush to opportunity.
Try: “What’s the hardest part of this for you?” — and wait for the real answer.

3
Involve people
Co-creation repairs the broken psychological contract. Employees with voice are 2x as likely to report high job satisfaction.
Try: AI listening sessions. AI design squads. Ask what people want to protect.

4
Reframe honestly
The “AI is full of opportunity” message fights the emotional physics of the room. Loss-framed messages reach anxious audiences better.
Try: “We need to get good at this — not because it’s exciting, but because the cost of not doing so hurts everyone here.”

5
Model curiosity
Performing certainty about AI you don’t feel signals that uncertainty is unsafe. The leaders who helped most in the pandemic didn’t have better answers — they modelled how to navigate without them.
Try: Share openly when AI surprised or failed you.

What grief-literate leadership actually delivers

This isn’t just the right thing to do. It’s the commercially significant thing to do.

BCG’s 2025 AI at Work survey found that the proportion of employees who feel positive about AI rises from 15% to 55% with strong leadership support, a 40-point shift driven entirely by leadership behaviour, not the technology. Meanwhile, EY research estimates organizations are missing up to 40% of available AI productivity gains because of workforce tensions that skilled leadership could address.

The gap between organizations that get this right and those that don’t isn’t their AI strategy. It’s the quality of their leadership relationships.

Grief-literate leadership doesn’t slow adoption down. It’s the only approach that makes adoption actually stick.

The bravest thing you can say

Five years ago, during the pandemic, leaders were briefly given permission to say “I don’t know what’s coming.” The scale of the uncertainty earned that permission. And the leaders who took it, who showed up honestly, stayed present, and said “I’m here even without the answers”, were exactly what their people needed.

AI is the defining professional uncertainty of this era. Its scale earns the same permission.

The leaders who navigate this moment best won’t be those with the most detailed AI roadmap. They’ll be the ones who earn enough trust to figure out the roadmap together with their people.

The bravest thing you can say to your team right now might also be the most effective: “I don’t know, but I’m not going anywhere while we figure this out together.”

To develop grief-literate leadership across your organization, speak to the Primeast team: building exactly this capacity in leaders and teams is what Primeast exists to do.

Free resource

Want to go deeper?

Download The HR Leader’s Playbook for Sustainable Leadership Development: the evidence-based blueprint for leadership change that actually sticks. Free.
