The most important AI decision is what to protect rather than automate

Innovation
Liam Hoffman

Without intentional design, as AI improves in its outputs, the default human behaviour with AI will be to over-delegate. And that's the pattern that produces the worst outcomes while building the least capability.

That's the consistent finding across the cognitive science and the usage data from the AI labs themselves. People are starting to naturally gravitate toward handing things over entirely rather than thinking alongside AI. It's the path of least resistance, and most organisations aren't designing against it.

Over the past year, I've worked with executive teams across industries to design AI strategies, build governance frameworks, run pilots and build capability. And the pattern I keep encountering isn't resistance to AI. For the most part, it's the opposite. People are embracing it so readily that many organisations are sleepwalking into a capability problem. Tools are being adopted. Efficiency gains are visible at the individual level. But something more consequential is also happening. The cognitive capabilities that underpin judgment and critical thinking are beginning to atrophy.

This article is about that risk, and about how leaders can address it, especially for younger generations entering the workforce. Not by pulling back from AI, but by being more intentional about where and how it's used.

Mollick’s Equation, and an Opportunity to Extend It

In January 2026, Ethan Mollick published a piece called Management as AI Superpower. His core argument is that as AI becomes more capable and more agentic, the skills that matter most shift away from technical proficiency and toward management, or knowing how to delegate, how to give clear instructions, and how to evaluate output. It’s a compelling argument, and I think he’s largely right.

Mollick introduces what he calls the Equation of Agentic Work. It frames the decision to delegate to AI around three variables: how long the task would take you to do yourself, how likely the AI is to produce an output that meets your standard, and how long it takes to prompt, wait, and evaluate the AI’s output.

This is a useful efficiency framework, and Mollick is right to build it around speed and output quality. His point is that management skills are becoming the key differentiator in an AI-enabled world. I agree. The study he references showed that GPT-5.2 tied or beat human experts 72% of the time on complex professional tasks, making the economic case for delegation stronger than ever.

But I think there’s an opportunity to extend the framework further. Mollick’s equation optimises for productivity, and for that purpose it’s excellent. But there’s a fourth variable that matters enormously for organisations thinking about long-term capability: the cognitive development value of the work itself. Some tasks are worth doing yourself because doing them is how your people develop judgment and expertise.
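To make the extension concrete, here is a minimal sketch of a delegation decision that weighs Mollick's three variables against a fourth. The function, variable names, and threshold rule are all my own illustration, not taken from Mollick's piece; the scores a real team would assign are necessarily judgment calls.

```python
# Hypothetical sketch: Mollick's three delegation variables plus a fourth
# for cognitive development value. All names and the threshold rule are
# illustrative assumptions, not from the original equation.

def should_delegate(human_time_min, ai_success_prob, ai_overhead_min,
                    development_value):
    """Return True if delegating the task to AI looks worthwhile.

    human_time_min:    time to do the task yourself (minutes)
    ai_success_prob:   chance the AI output meets your standard (0-1)
    ai_overhead_min:   time to prompt, wait, and evaluate (minutes)
    development_value: 0-1 score for how much doing the task yourself
                       builds judgment and expertise (the proposed extension)
    """
    # Low success probability inflates expected cost via retries.
    expected_ai_cost = ai_overhead_min / ai_success_prob
    efficiency_gain = human_time_min - expected_ai_cost
    # High-development tasks raise the bar: protect them unless the
    # efficiency gain is overwhelming.
    threshold = development_value * human_time_min
    return efficiency_gain > threshold

# Routine formatting: big time saving, little development value -> delegate.
print(should_delegate(60, 0.9, 10, 0.1))   # True
# Complex judgment work a junior should wrestle with -> keep it human.
print(should_delegate(60, 0.9, 10, 0.9))   # False
```

The design choice worth noting is that the fourth variable does not veto delegation outright; it scales the efficiency bar, so pure busywork still clears it easily while capability-building work rarely does.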

The Cognitive Science of What We’re Losing

To understand why this matters, it helps to look at what’s happening inside people’s heads when they use AI, and what happens when they stop.

Cognitive Load Theory, first developed by John Sweller in the late 1980s, distinguishes between three types of mental effort. Extraneous load is friction and busywork, the cognitive equivalent of admin that drains capacity without building capability. Intrinsic load is the genuine difficulty of the task itself. And germane load is the effort that builds understanding, where schema formation happens and expertise is developed.

AI is exceptional at reducing extraneous load. It can take tedious, friction-heavy tasks and handle them in minutes. But the research is now showing that AI can also strip out germane load, the productive struggle that develops expertise and independent thinking over time. A 2025 opinion paper published in Frontiers in Psychology put it directly: reducing extraneous load is beneficial, but technology that reduces germane load has a negative impact on deeper engagement.

This is a critical distinction for leaders and managers. When AI removes the busywork from your team’s day, that’s a win. When AI removes the thinking that builds their capability, that’s a risk.

MIT’s Media Lab “cognitive debt” study has become one of the most widely cited pieces of AI research in the last year, and I’ll freely admit I leaned on it in my previous posts alongside what feels like every other LinkedIn commentator in this space. But there’s a reason it resonates. The finding that AI users showed reduced neural engagement and couldn’t recall key points from their own work minutes after completing it speaks to something leaders instinctively feel but struggle to articulate. And the less-cited follow-up finding is arguably more important: when those same participants were asked to work without AI, they showed signs of cognitive under-engagement. The tool hadn’t just made the task easier; it had made the person less capable of doing it independently. More research is needed to fully understand the long-term implications, and I hope we see longitudinal studies that track this over years rather than months. But the early signal is clear enough to take seriously.

The broader research reinforces this. A Swiss Business School study of 666 participants found a strong negative correlation between AI usage frequency and critical thinking scores, with the effect most pronounced in younger workers. The recent follow-up to the landmark Harvard/BCG consulting study identified three collaboration patterns: Cyborgs (who integrated AI throughout their workflow), Centaurs (who strategically divided tasks between themselves and AI), and Self-Automators (who handed tasks wholesale to AI and stepped back). Self-Automators, who made up 27% of participants, produced the least accurate and least persuasive work. Centaurs achieved the highest accuracy. The researchers were blunt: when employees default to Self-Automator behaviour, organisations may be inadvertently hollowing out the expertise that creates competitive advantage.

And this drift toward over-delegation is also showing up in the research the AI labs themselves are documenting. Anthropic’s Economic Index, which analyses millions of conversations on their platform, has tracked a consistent shift toward automation over collaboration. Directive task delegation, where users hand complete tasks to AI with minimal back-and-forth, rose from 27% to 39% in just eight months. Their most recent report found that while augmentation has ticked back up slightly, the longer-term trend still favours automation. OpenAI’s consumer usage research, based on roughly 1.5 million conversations, found that 40% of all ChatGPT usage is now task-oriented: users prompting AI to write, code, plan, or complete practical work. The enterprise data is even more stark, with organisations steadily moving from asking AI for outputs to delegating entire workflows.

The convergence across lab research, field experiments, and real-world platform data tells a consistent story. Without intentional design, the path of least resistance with AI is to hand things over entirely. People naturally drift toward the collaboration pattern that demands the least from them, and it’s also the one that builds the least capability.

The Hidden Costs Are Attention, Autonomy, and Motivation
There are two other dimensions to this that often get overlooked, and both have implications for how leaders think about AI integration.

The first is attention. Cal Newport’s work on deep focus has shown that the highest-quality knowledge work happens in sustained, uninterrupted blocks. Research consistently shows it takes an average of 23 minutes to fully regain focus after a task switch, and cognitive psychologist Sophie Leroy’s work on “attention residue” shows that even a brief switch leaves traces in working memory that degrade performance on the next task. The typical AI workflow is inherently a task-switching workflow: draft a prompt, wait, evaluate, revise, compare. Each transition is a micro-interruption. For work where deep, sustained thinking is required, this constant back-and-forth may actually undermine quality, even as it speeds up individual steps. That doesn’t mean AI should be avoided for complex work, but I think it does mean leaders need to think about whether the way their teams interact with AI is supporting or fragmenting the deep focus that their most valuable work requires.

The second is motivation. Self-Determination Theory, one of the most established frameworks in motivation psychology, tells us people need autonomy, competence, and relatedness to be intrinsically motivated. AI can enhance the feeling of competence in the short term, and in many cases people genuinely are more capable with AI in their hands. But there’s also an important distinction between perceived competence and actual competence. If people are producing higher-quality outputs but developing less independent skill in the process, their sense of competence becomes fragile. It’s contingent on the tool being available rather than being built into the person. AI can also erode autonomy if poorly implemented. When the default workflow is to accept AI suggestions without critical evaluation, people lose their sense of agency over the work.

For leaders, the takeaway from both of these dimensions is the same. AI integration is more than just a productivity question. It’s also an engagement and development question. Get it wrong, and you don’t just get worse outputs. You get a less motivated, less capable team.

A Framework for Leaders: Automate, Collaborate, Protect
So what should leaders actually do with all of this? The answer isn’t to slow down AI adoption. It’s to be more intentional about it. I believe the organisations that will get the most out of AI, and the most out of their people, will be the ones that categorise work through a different lens.

Automate: Strip Out the Friction

Tasks that are primarily extraneous cognitive load. Admin, data formatting, routine reporting, scheduling, document cleanup, status updates. These tasks consume energy without building capability. They’re where AI delivers the clearest ROI, and where delegation to AI should be encouraged and systematised. This is where Mollick’s equation works perfectly. The Human Baseline Time is high, the AI probability of success is high, and the cognitive development value of the task is low. Automate it.

Collaborate: Use AI as a Thinking Partner

Tasks where AI accelerates the work but human judgment remains essential. Research and development, complex analysis, communications, market research, scenario modelling. The human stays in the analytical driver’s seat, using AI to extend capability rather than replace it. The key principle here, one I come back to often, is that AI should be in the loop with us, not the other way around. Use AI for initial research and synthesis. Use it to stress-test assumptions. Use it to generate options. But keep the judgment, the interpretation, and the final decision with the person.

Protect: Preserve the Thinking That Builds Capability

This is the category most organisations are missing, and I believe it’s the most important one for long-term organisational health.

Some tasks are valuable not despite being difficult, but because they’re difficult. Problem-solving under uncertainty, where the answer isn’t clear. Creative ideation from first principles, before AI narrows the frame. Developing junior team members’ analytical judgment through hands-on work. Making sense of ambiguous qualitative data that requires contextual understanding.

In Cognitive Load Theory terms, these are tasks with high germane load. They’re the tasks that build the expertise, judgment, and independent thinking that organisations ultimately depend on. Letting AI handle them entirely risks a gradual erosion of the human capabilities that make an organisation’s work genuinely valuable.

This is especially true for developing talent. If junior professionals are shielded from the struggle of working through complex problems independently, they may never build the schema and judgment that more experienced colleagues developed the hard way. Leaders need to consciously design learning opportunities that preserve productive struggle, even in an AI-enabled environment.
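The three categories above can be sketched as a simple triage rule over the two kinds of cognitive load the article distinguishes. The function, the two scores, and the thresholds are all illustrative assumptions of mine; a real audit would assign these scores through discussion, not formula.

```python
# Illustrative triage of tasks into the article's three categories, based on
# two hypothetical 0-1 scores a team might assign during an AI audit:
# extraneous load (friction/busywork) and germane load (capability-building
# struggle). Thresholds are placeholders, not research-derived values.

def categorise(extraneous_load, germane_load):
    """Map a task to 'protect', 'automate', or 'collaborate'."""
    if germane_load >= 0.7:
        return "protect"       # the struggle itself builds expertise
    if extraneous_load >= 0.7 and germane_load < 0.3:
        return "automate"      # pure friction, no development value
    return "collaborate"       # AI assists, the human keeps the judgment

print(categorise(0.9, 0.1))  # routine reporting -> automate
print(categorise(0.5, 0.5))  # complex analysis with AI support -> collaborate
print(categorise(0.2, 0.9))  # first-principles problem solving -> protect
```

Checking germane load first reflects the article's ordering of priorities: a task that builds capability is protected even when it also carries heavy friction.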

Getting the Direction Right
I want to be clear that this is not an anti-AI argument. I work with organisations on AI adoption every day, and I’m genuinely optimistic about what it makes possible.

But the research is now telling us something we can’t afford to ignore. How you use AI matters as much as whether you use it. I believe the organisations that will lead with AI in the coming years won’t be the ones that automate the most. They’ll be the ones that are intentional about where they automate, where they collaborate, and where they protect the cognitive work that builds long-term capability.

The question isn’t whether your organisation should use AI. It should. The question is whether you’re using it in a way that makes your people stronger, or in a way that quietly makes them weaker.

That’s the thinking question. And it’s one every leader needs to answer deliberately.

Our values

Ambition
Creativity
Empathy
Authenticity