The Stone Age Brain Meets AI

Why Your Evolutionary Psychology Is a Bigger Barrier to Adoption Than Any Technology Challenge

I sat across from Pär Edin in my podcast studio, initially a bit intimidated by his CV: 17 years at McKinsey (10 as partner), a stint as VP at Cisco, and now running KPMG's global AI initiative. When someone with that trajectory tells you something about organizational change, you should probably listen.

What he shared made me pause.

"People intellectually understand AI can make them at least 30% more productive," Pär said. "But emotionally, their Stone Age brain tells them to wait until someone else has done it first."

Wait. What?

This disconnect isn't just interesting—it's the hidden barrier most transformation efforts crash against. The same human brains that built these incredible AI systems are hardwired to resist using them effectively. The irony is delicious but expensive.

Turns out, AI adoption isn't primarily a technology problem. It's a psychology problem.

And I haven't been able to stop thinking about it since.

The Stone Age Brain at the Office

If you've watched Anders Hansen on TV talking about our Stone Age brains, you've seen how our ancient survival mechanisms shape modern behavior. What's fascinating about my conversation with Pär is how clearly these evolutionary patterns appear in corporate settings.

Here's what happens: Your brain perceives new technologies through the same risk-assessment lens that once evaluated predator threats. When AI enters your workplace, it triggers these defensive circuits. You're not consciously thinking, "This might eliminate my job"—your Stone Age brain is just doing what kept your ancestors alive: treating unfamiliar situations as threats until proven otherwise.

The problem? This instinct is completely misaligned with how innovation actually works in 2025.

"The person who gets to the ball first has an advantage in the market," Pär explained. "If everyone could do things 30% more productively, you only need a year or two before there's enormous competitive pressure."

This creates a brutal paradox: The exact psychological mechanism that once kept us safe now puts us at existential risk. Your brain's defensive posture against AI adoption might be the very thing that makes you obsolete.

The AI Productivity Time Machine

When I asked Pär about AI's impact on the consulting industry, his eyes lit up with that mix of excitement and concern that's becoming the emotional signature of our era.

"I think of AI as a time machine," he said. "Not forwards or backwards—it simply gives you more time."

This framing stopped me in my tracks. We typically think of productivity tools as allowing us to do more in the same amount of time. But what if we flipped that? What if AI's real promise is giving us back time—our most precious, non-renewable resource?

This reframing matters because it changes the calculus of what success looks like. It's not just about doing more—it's about reclaiming hours of your life.

But here's where things get messy. What happens with that reclaimed time?

"Many companies and organizations just want to free up time so their staff can feel better, have time to do what they need to do at work and at home," Pär noted.

That's great for the first wave of implementation. But then comes the real challenge: "What do I do with that 30% of time? The simplest thing is to do more of what I did before... but you'll win if you find ways to invest 10% of your time in something that's 10 times more productive for the company."
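The arithmetic behind that claim is worth making explicit. A minimal sketch, with assumed round numbers (a 30% time saving, output normalized to 1.0 per week), comparing "do more of the same" against reinvesting a tenth of the week in 10x-leverage work:

```python
# Back-of-the-envelope comparison of two uses for AI-freed time.
# Assumptions (not from the source): baseline output = 1.0 per week,
# AI frees 30% of the week.

baseline = 1.0   # weekly output before AI
freed = 0.30     # share of the week AI frees up

# Option A: spend all freed time doing more of the same work.
more_of_same = baseline + freed * 1.0

# Option B: reinvest a third of the freed time (10% of the week)
# in work that is 10x more productive; keep the rest as slack.
high_leverage = baseline + 0.10 * 10.0

print(more_of_same)    # 1.3
print(high_leverage)   # 2.0
```

Under these assumptions, the high-leverage reinvestment roughly doubles output while still returning most of the freed time, which is why Pär frames it as the winning move rather than simply working more.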

This is exactly what happened to me. AI automation freed up 1.5-2 days of my week—time I reinvested in this podcast. The time machine didn't just make me more productive; it created space for something entirely new.

From Individual Productivity to Organizational Design

The conversation with Pär took an unexpected turn when we started discussing organizational design. I realized something profound: AI isn't just changing how individuals work—it's forcing us to rethink organizational structures from first principles.

"Organizational design is becoming strategic again in a way it hasn't been in a long time," I observed.

Pär agreed: "It's fascinating. In the future, all companies will have human employees, completely digital employees, and middle managers and executives across both."

This isn't incremental change—it's a fundamental rewiring of how organizations function. The future workplace won't just have humans using AI tools; it will have AI systems with varying degrees of autonomy working alongside humans in a complex ecosystem.

Consider this mind-bending possibility: "An agent can be manager over hundreds of agents that are war-gaming different scenarios," Pär suggested. "You could have thousands or hundreds of thousands of agents doing different things in an ecosystem."

This isn't science fiction—it's a plausible near-future state that requires entirely new management models. Are we prepared for this? Almost certainly not.

The leaders who come out ahead will be those who see organizational design as a creative act rather than an administrative function. As Pär put it: "It's interesting—everyone can have the same ingredients, but this soup tastes best. Why? That's where creativity comes in."

The Reluctant Adopter's Dilemma

McKinsey's report "AI at the workplace" contained a fascinating data point: The less experience people had with AI tools, the more likely they were to have a negative "doomer" attitude about them.

This creates a nasty catch-22. Those who need the productivity benefits most are often the least likely to embrace the tools that would provide them.

I've seen this dynamic firsthand. Pär's description was spot-on: "The same person who happily uses ChatGPT at home will find excuses not to use similar tools at work."

Why? Because at work, the stakes feel higher. Your brain invents rational-sounding objections to mask its evolutionary fear response:

"It was unclear how to use it." "I need to deliver my monthly report now." "We've never done it with these tools before."

These aren't conscious lies—they're your Stone Age brain's defensive mechanisms in action. It's trying to protect you from an imagined threat by generating plausible excuses.

Fascinatingly, this resistance doesn't clearly break down along age lines. Pär noted: "We haven't seen it in the numbers. Generally, the young and more tech-savvy adopt it faster, but in many surveys, it depends much more on attitude than age."

The critical factor seems to be whether you can wrestle down your own defensive "Stone Age brain." Some young people don't bother because "not everyone is doing it yet," while some older professionals dive in because they see immense opportunities.

Virality: The Stone Age Solution to Stone Age Problems

Here's where things get interesting. If evolutionary psychology is creating the resistance, perhaps evolutionary psychology can also provide the solution.

Pär described an approach I found brilliant in its simplicity: "AI angels."

"We recommended in the beginning to have some of these AI angels who come by once a week," he explained. "Not for training, but to sit beside you and say, 'Tell me something that was really unnecessarily time-consuming this week.'"

The angel then shows how AI could help, the employee tries it, and the following week they check in again: "How did it go with that? Got another one? By the way, the person over there had a problem with this, would you like that solution too?"

This approach leverages how humans naturally learned on the savanna: by observing others performing behaviors that led to success, then copying them. It's building adoption through social proof rather than top-down mandates.

"That viral change behavior is very well adapted to the Stone Age brain," Pär noted, "but isn't usually used in most change programs."

We did something similar at our company—a 30-day upskilling course where everyone submitted their daily AI exercises in a Slack channel. This created a space for peer feedback and learning. People could see each other's prompts and results, spot small improvements, and incorporate them into their own work.

This approach bypasses the brain's threat-detection system by framing AI adoption as a tribal activity rather than an individual risk. You're not going first—you're joining the tribe.

The Awkward Existential Meeting

AI doesn't just challenge individual psychology—it creates awkward dynamics in leadership teams too.

Pär described working with an HR outsourcing company that served tech clients. When they analyzed how AI might affect their business, they discovered 40% of their revenue could disappear as clients implemented AI tools. Simultaneously, staff with AI competency commanded 23% higher fees.

This creates what Pär calls "leadership-critical" situations that most teams aren't equipped to discuss productively.

"It's a typical thing in a management team that becomes almost so uncomfortable that no one dares to talk about it," I observed.

Pär agreed: "You need people who see new opportunities in an old mature industry where there haven't been many variants before. And then you need to dare to ask these difficult questions that can be perceived as somewhat existential."

This is precisely where leadership teams often fail. The Stone Age brain doesn't just affect individual adoption—it shapes group dynamics, making certain topics emotionally unsafe to discuss even when they're strategically critical.

The best leaders create psychological safety around existential threats, allowing the team to confront uncomfortable realities directly rather than avoiding them until it's too late.

The Lingering Question: Where's the Human Story?

As our conversation drew to a close, I found myself wrestling with a question that seems absent from most AI discussions: Where's the larger human story in all this?

Right now, we're caught in a productivity race. Most discourse centers on how to implement AI faster to gain competitive advantage. But there's surprisingly little discussion about what this technology should ultimately do for humanity.

"I miss the long five, ten, twenty-year vision of what this society really looks like where we've gained the positive effects without being eaten by it somehow," I told Pär. "It can't just be a productivity race."

This tension isn't new. Pär noted historical parallels: "When industrialization came with weaving machines and everything, there was also a counter-reaction. The word 'sabotage' comes from the French word for wooden shoes. Workers went and kicked apart the weaving machines that took their jobs."

The difference today is that AI isn't just a specialization tool like assembly lines were—it's a general-purpose technology that amplifies human cognitive capability across domains.

What happens when we redirect that amplified cognition toward human flourishing rather than just economic output? Could the 30% productivity gain translate to a four-day workweek without economic loss? Should it?

As Pär calculated, "In theory, one could free up one day a week for the entire Swedish labor market. Then one can choose: do you want to reinvest that time and get 20% more productivity in the Swedish economy, which wouldn't be so bad, or do you want to get the same productivity with a four-day week?"
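Pär's back-of-the-envelope math can be sketched directly. Assuming (as he does implicitly) a 30% productivity gain on a 40-hour, 5-day week, the freed time works out to roughly one day:

```python
# Sketch of Pär's "one day a week" calculation.
# Assumption: a 30% productivity gain means the old workload
# now takes 1/1.3 of the original time.

WEEK_HOURS = 40.0   # standard 5-day week
GAIN = 0.30         # assumed productivity gain

hours_needed = WEEK_HOURS / (1 + GAIN)   # time for the old workload
hours_freed = WEEK_HOURS - hours_needed
days_freed = hours_freed / 8             # 8-hour days

print(f"Hours freed per week: {hours_freed:.1f}")   # ~9.2
print(f"Days freed per week:  {days_freed:.2f}")    # ~1.15
```

Roughly a day a week, economy-wide, which is exactly the fork Pär describes: reinvest it for about 20% more output, or keep output flat on a four-day week.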

These are profound questions that transcend organizational strategy. They touch on what kind of society we want to build and what role technology should play in human life.

I remain cautiously optimistic, but the most important work lies ahead: ensuring this extraordinary technology serves human flourishing rather than narrowly defined economic metrics.

The Stone Age brain got us here—now we need to make sure it doesn't stop us from building something truly worthy of our future.
