Why AI might force you to become more human

BEYOND PROCESS

I recently sat down with Ashkan Fardost, a technology philosopher who grew up with modems before most people knew what the internet was. Our conversation began with information technology's historical impact on human society but quickly evolved into something far more provocative: a challenge to how we think about work itself in the era of AI.

What if our relentless drive to break everything into processes – the very foundation of modern professional life – is about to hit its limit? And what if the path forward isn't more process, but a return to something much older?

The information revolution we're not talking about

"We are informational beings," Ashkan explained early in our conversation. "We are in symbiosis with a technology that exists outside our bodies – language. We're dependent on it because we can't survive with only the information that's in our DNA."

This perspective shifts how we should think about artificial intelligence. Rather than seeing AI as something alien or unprecedented, it becomes just the latest in a long line of information technologies that have fundamentally reshaped human society.

Ashkan traces this lineage back to our first information technology: spoken language itself. Before writing, before the printing press, before computers, it was language that first allowed humans to store essential knowledge outside our genes. This created a distinct layer of information – culture – that made human civilization possible.

Each subsequent information revolution – writing, printing, telegraphy, computing, the internet – has restructured how our brains delegate information processing, changing not just what we know, but how we think.

This isn't mere theorizing. Ashkan points to the profound differences between Western and Eastern thought traditions as evidence of how information technologies shape cognition. The phonetic alphabet that emerged around 700 BCE in the West – breaking language into meaningless sound components that could be recombined in infinite ways – laid the foundation for Western analytical thinking.

"Think about all the ideas we consider fundamental to Western society," he noted. "Democritus's theory of atoms – breaking matter into components that can be recombined in sequence. Aristotle's logic – breaking truth into components. Euclidean geometry – breaking form into components. The printing press, factories, industry – they all follow the same pattern."

The Western mind became exceptional at breaking reality into components, freezing moments in time, analyzing patterns, and building models. It gave us modern science, but it also suppressed other ways of thinking.

My first instinct was to push back – but the more I sat with the idea, the more plausible it became. Much of what we consider "rational" thinking may be just one particular style of cognition, tied to one particular way of processing information. And AI may be about to push us toward something different.

The mediocrity problem

When our conversation turned to AI's immediate impact on jobs, Ashkan's perspective wasn't what I expected.

"At best, these systems will be mediocre," he said.

By "mediocre," he didn't mean technically unsophisticated. Rather, he was pointing to a fundamental limitation: AI systems like ChatGPT are vast statistical models built on existing human-generated text. They're "pathetic prisoners of the past," unable to create anything genuinely new.

Take Hemingway's revolutionary writing style. Before him, no one wrote novels using the definite article in his distinctive way. "If I went back in time with ChatGPT and asked it to write a revolutionary book in a new style that would make me a respected author, it could never produce anything like Hemingway's work," Ashkan explained. "There would be no data on that style until Hemingway himself published it."

This is where Ashkan's perspective gets particularly interesting for business leaders. The tasks most vulnerable to AI automation aren't those requiring the highest education or intelligence – they're those that follow established patterns with abundant previous examples.

Consider legal work. For standard cases with clear precedent, AI systems can already do remarkably good preparation work. The mediocre lawyer who primarily handles routine cases should be worried. But the exceptional lawyer who takes on unprecedented, complex cases requiring novel thinking? They're likely to thrive in an AI-augmented world.

This pattern holds across fields. Customer support? Easily automated for standard issues. Medical diagnosis? Increasingly automatable for common conditions with clear symptoms. Content creation? AI excels at producing passable but generic material.

The core insight: if your work is process-based and can be learned from existing examples, AI will increasingly be able to do it cheaper and faster. The question is – what does that mean for us?

From jobs to roles

"I think we belong to a parenthesis in history where humans have jobs instead of roles," Ashkan said toward the end of our conversation.

This struck me as profoundly insightful. Before industrialization, humans didn't have narrowly defined jobs with precise descriptions and schedules. They had roles within their communities. If you were muscular and agile, you might be part of the hunting party – but you didn't hunt 8-5, Monday through Friday. When not hunting, you contributed in other ways.

"In the first organizational unit – the tribe – everyone had a role," Ashkan explained. "Everyone had a foot in almost every aspect of the organization. From childbearing, where you'd help and provide support if you were available, to gathering, to everything else."

This meant that no one was easily replaceable. You couldn't post a job listing with specific requirements and expect to find someone who could step in. A role wasn't something you carried in your backpack – it was something you grew into over time, through deeply contextual learning and relationship building.

The industrial revolution changed this. As we broke down production into standardized, repeatable processes, humans became interchangeable components in the machine. This was necessary in a world where you needed a specific person sitting at a specific station, performing a specific task for 8 hours a day.

But what happens when we can outsource most process-based work to machines?

"Digital technology in general, and AI in particular, forces us back into a world where role-bearers become much more valuable," Ashkan suggested. "In a digital information environment, information flows so quickly that everything happens everywhere, all at once. You can't freeze reality, break it down into a model, and fit it into some Excel-like process."

This environment demands something different from us. While certain aspects of business still benefit from systematic approaches (Ashkan noted that he "doesn't want to see much creativity" in quarterly financial reports), much of our work now requires immediate, intuitive decision-making that can't be reduced to a formal model.

The intuition renaissance

Throughout our conversation, Ashkan returned to the conflict between rational and intuitive thinking. In the Western tradition, we've elevated rationality while dismissing intuition as "fluff." Yet research using brain imaging shows that many capabilities we consider hallmarks of intelligence – programming, pattern recognition, creativity – rely heavily on intuitive circuits.

This explains a paradox many of us have observed: despite an explosion of management books over the past 50 years, the share of truly exceptional leaders doesn't seem to have increased. The same applies to creativity and teaching. We can't seem to formalize these capabilities into reproducible processes, no matter how hard we try.

"If you ask skilled leaders how they solved a particular situation or why they made a certain decision that went against all rationality, they often say 'I don't know, it just felt right – gut feeling, I just knew,'" Ashkan observed. "If you really press them, they might eventually try to explain it rationally, but if someone else tries to copy that explanation, it doesn't work."

What Ashkan is describing is tacit knowledge – the kind that can't be fully articulated or transmitted through formal instruction. It's developed through practice, immersion, and feedback from reality. This is why the world's best leaders, creatives, and teachers can rarely explain precisely how they do what they do.

AI might be pushing us to rediscover the value of this knowledge. As machines take over more process-based work, the most valuable human contributions will increasingly come from our intuitive capabilities – the very ones that can't be reduced to algorithms.

The leadership implications

For leaders, this shift suggests several critical priorities:

First, recognize that your greatest value likely comes not from following processes but from those moments when you operate on intuition – when you sense the right approach without being able to fully articulate why. Rather than downplaying these moments as unscientific, cultivate them.

Second, help your people grow into roles rather than simply performing jobs. This means creating opportunities for them to develop contextual understanding across different aspects of the organization, build relationships, and receive direct feedback from reality.

Third, be skeptical of attempts to reduce everything to formal processes. While processes have their place, they're most effective for stable, predictable environments – which are becoming increasingly rare in our digital world. The most valuable human work happens at the edge of what can be proceduralized.

Finally, reconsider how you measure value. "When you're sitting and approving time reports," Ashkan noted, "everyone ultimately knows you can't measure what people produce by the number of hours they put in for many types of work." As AI makes process work less valuable, we need metrics that capture the contextual, intuitive contributions that machines can't replicate.

The lingering question

As our conversation concluded, I found myself reflecting on a question that feels increasingly urgent: What if the path to becoming AI-proof isn't becoming more machine-like, but more deeply human?

For decades, we've pushed ourselves to be more systematic, more analytical, more process-driven – essentially, more like computers. We've structured our organizations around the belief that anything worth doing can be broken down, measured, and optimized.

But as machines become better at this kind of thinking, perhaps our advantage lies in the directions we've neglected: intuition, contextual understanding, relationship-building, creativity that transcends existing patterns.

This doesn't mean abandoning rationality or process entirely. Rather, it suggests a rebalancing – recognizing that human cognition is wider and deeper than what we've prioritized in our industrial and early digital economies.

The most successful leaders in the AI age may be those who can integrate analytical and intuitive thinking – using machines to handle process work while cultivating the uniquely human capabilities that allow them to navigate complexity, build trust, and create genuine novelty.

In essence, as our machines become more powerful, our most valuable work might involve becoming more fully what we already are: humans with roles, not just jobs.
