From Bad Bunny to the Boardroom: The Psychology of “I Don’t Belong Here Anymore” and What It Means for Your AI Strategy

There’s a pattern I can’t unsee:

When people lose their sense of identity — when they feel like they no longer matter, belong, or have a valued role — they don’t just get sad.

They get into mischief.

And that mischief comes in two flavours: loud and quiet.

Loud mischief makes the news (think January 6th). Quiet mischief makes cultures rot from the inside.

In the US, we’re watching identity threat spill out into politics and culture in real time.

And as AI moves from “cool tool” to “serious disruption”, I think we’re going to see a workplace version of the same emotional pattern unless leaders get ahead of it.

Different domain. Same wiring.

The Great Replacement theory in plain English

The “Great Replacement” is widely described as a far-right conspiracy theory that claims white populations are being deliberately “replaced” by non-white immigrants, often with a storyline about shadowy elites orchestrating it.

Its modern framing is commonly linked to French writer Renaud Camus and his 2011 book Le Grand Remplacement.

A key thing to say out loud (because nuance matters):

You can analyse the psychology of a belief without validating the truth of the belief.

And the truth claim here is not the point — the emotional appeal is.

As the Anti-Defamation League explains, this narrative has become a major driver in modern extremist ideology (even when people don’t know the name, they recognise the storyline).


Why it spreads: identity threat makes people predictable

When identity feels threatened, a few things happen fast:

  1. We simplify. Complex systems become “a plot”.
  2. We moralise. Opponents aren’t wrong — they’re evil.
  3. We tribalise. Safety comes from “my people”, not from truth.
  4. We seek control. When people feel powerless, conspiratorial explanations become more attractive.

There’s strong research showing that when demographic change is made salient, some people experience status threat, which can shift political attitudes.

And once politics becomes identity, we see what scholars call affective polarisation — not just disagreement, but emotional disgust and distrust across party lines.

This is the psychological conveyor belt from “I feel unsafe” to “I hate them”.


How it’s playing out in US politics and culture

The Great Replacement narrative doesn’t always show up as “Great Replacement”. In mainstream discourse it often travels under friendlier labels: “invasion”, “open borders”, “they’re changing the country”, “they’re taking our place”.

It’s the same structure:

  • There’s a “they” (elites / institutions / media)
  • There’s a “them” (outsiders)
  • And there’s an “us” being pushed aside

And culture becomes the battleground where people “prove” the story to themselves.

Photo credit: LA Times

The Super Bowl moment: Bad Bunny as a lightning rod

On Sunday (US time), Bad Bunny headlined the Super Bowl halftime show, sparking backlash that was explicitly framed as cultural grievance — including criticism from Donald Trump.

The criticism wasn’t really about choreography or set lists. It was about belonging. About who “America” is for.

That’s why the response was so politically charged — including an “All-American” counter-event hosted by Turning Point USA featuring Kid Rock.

If you want a clean example of identity threat getting turned into a culture war: that’s it.

And here’s the bridge to business:

When people feel the world isn’t “for them” anymore, they don’t simply adapt.

They act out. They harden. They look for villains. They get into mischief.


The parallel I do want to draw (and the one I don’t)

Let’s be clear: the Great Replacement theory is a racialised conspiracy narrative. AI disruption is not.

So no, these are not morally equivalent.

But they rhyme psychologically in one crucial way:

Both are experienced as replacement anxiety.

“I’m being pushed out.”

“I’m losing my place.”

“I don’t matter anymore.”

And AI is about to trigger that feeling at scale in the workplace.


AI as a status shock

AI isn’t just changing tasks. It’s challenging people’s sense of usefulness — which is basically oxygen for identity in modern life.

And worker fear is already measurable. The Pew Research Center found that 52% of US workers say they’re worried about AI’s future impact at work, and 32% think it will lead to fewer long-term job opportunities for them.

At the same time, major institutions are forecasting heavy churn. The World Economic Forum reports that employers expect significant job creation and displacement through 2030, including 92 million roles displaced, alongside net job growth overall.

And the International Labour Organization highlights the nuance leaders often skip: the “overwhelming effect” of generative AI may be augmentation more than automation, but exposure is uneven, with clerical work especially exposed and women disproportionately affected.

Translation: people aren’t crazy to be nervous. They’re reading the room correctly.


Real-world case studies: where “mischief” starts brewing

1) Duolingo and the “AI-first” signal

Duolingo’s CEO Luis von Ahn shared an “AI-first” direction that includes gradually phasing out contractors for work AI can do, and factoring AI use into hiring and performance decisions.

Even if leadership intends “focus humans on higher-value work”, many workers hear a different message:

“Prove you’re not replaceable. Or you’re next.”

That’s not inspiration. That’s a slow drip of threat.

2) Klarna: AI hype → human reality check

Klarna announced its AI assistant was handling two-thirds of its customer service chats and doing work equivalent to 700 full-time agents.

Then the market taught an old lesson: customers often still want humans. Klarna later moved to ensure customers can always reach a person, effectively pulling back from a pure-AI customer service posture.

The point isn’t “AI bad” — it’s this:

If leaders communicate AI as replacement, people respond as if they’re under attack.

And the first casualty is trust.


A workplace vignette: what mischief looks like before it becomes a crisis

Picture a mid-career ops manager — call her “Jess”.

Jess isn’t anti-tech. She uses ChatGPT. She likes efficiency. She’s proud of being the person who can untangle messy problems.

Then leadership announces an “AI uplift”.

No role redesign. No career pathing. No clear line between automation and augmentation. Just vague promises and a quiet internal note that headcount will be “reviewed once efficiencies are realised”.

What happens next isn’t dramatic.

It’s death by a thousand cuts:

  • Jess stops volunteering improvements (why automate her own job?)
  • she stops flagging risks early (why be helpful if you’re disposable?)
  • meetings get colder; collaboration gets transactional
  • high performers privately job-hunt; average performers bunker down
  • the team splits into “AI pets” and “AI sceptics”

The mischief isn’t one big rebellion.

It’s the slow withdrawal of belief.

And once belief is gone, you can’t “change-manage” your way out with a poster and a town hall.

(Also: good luck with your culture survey. It will lie to you. People will tick “neutral” and then resign.)


What could happen if leaders mishandle the AI transition

If you trigger identity threat at scale, you don’t just get productivity issues.

You get internal culture wars.

Here’s what tends to follow:

  1. The story becomes personal
  2. Tribalism becomes a coping mechanism
  3. Scapegoats appear
  4. Quiet sabotage grows

There’s a deep body of research showing that change perceived as threatening fuels resistance, disengagement, and other negative outcomes.


The playbooks: how leaders prevent “replacement anxiety” at work

Playbook 1: Executives — Don’t create fear you can’t contain

Your job: turn uncertainty into legibility and agency.

  • Name the purpose without euphemisms.
  • Say what you will not do.
  • Separate automation from augmentation publicly.
  • Share the gains.
  • Make fairness visible.

Exec line to steal:

“We’re not asking you to compete with AI. We’re redesigning work so you can do the parts that require judgement, context, and trust — and we’ll show you exactly how.”


Playbook 2: HR / People & Culture — Build pathways, not pep talks

Your job: keep identity intact during transition.

  • Create “role-to-task” redesign workshops.
  • Publish transition pathways.
  • Protect dignity.
  • Measure the real risk: perceived replaceability.
  • Stop “AI theatre.”

HR line to steal:

“Your role will change. That doesn’t mean you’re disposable. Here are the options — and here’s how we’ll support each one.”


Playbook 3: Team Leaders — Translate strategy into safety

Your job: reduce fear locally before it turns into cynicism.

  • Run weekly “what’s changing / what’s not” check-ins.
  • Co-design how AI fits into workflows.
  • Make skill-building social.
  • Name the feelings without getting weird.
  • Protect the humans-only moments.

Leader line to steal:

“I’m not measuring you against the tool. I’m measuring how you use the tool to create value — and how you show up for the parts that only humans can do.”


The bottom line

The US didn’t become divided because people suddenly got stupid.

It became divided (in part) because huge groups of people felt unseen, unsafe, and replaceable — and then found stories that made that pain feel organised.

AI is about to push that same emotional button inside organisations.

The companies that thrive won’t just have the best models.

They’ll have leaders who can honestly say:

“We are changing — and you still have a valued role to play in what comes next.”

And if leaders don’t do that?

Well… people will still find a way to feel powerful again.

That’s where the mischief starts.


I published a more cerebral version of this article on Substack. Read that here.
