
AI Literacy is the Real Barrier to Adoption

There’s a lot of excitement about AI.
You can see it online, on virtually every other post. Every executive team I speak to is talking about it. It’s on meeting agendas, it’s being explored in strategy sessions, and it’s dominating the headlines both in print and online.
But here’s the uncomfortable truth: most organisations aren’t actually doing anything meaningful with it.
They’re not experimenting. They’re not deploying tools in any real way. And they’re not ready to.
Why? I don’t think it’s a tech problem. It’s a literacy problem.
And that’s what’s holding back adoption.
To be clear, I don’t think executives need to become prompt engineers or machine learning experts. That’s well beyond their role.
But I do think they need to understand what AI is and isn’t capable of, where it adds value, what the risks are, and how it might change their business.
Right now, that knowledge is missing. And it shows up when businesses make hasty decisions, declaring themselves “AI-first” without proper research.
Let’s look at this study from Orgvue as an example. Their research shows that in 2024, 39% of business leaders made employees redundant as a result of deploying AI. Of those, 55% admit they made wrong decisions about those redundancies.
They go on to comment that “businesses are learning the hard way that replacing people with AI without fully understanding the impact on their workforce can go badly wrong… Some leaders are waking up to the fact that partnership between people and machines requires an intentional upskilling program if they're to see the productivity gains that AI promises.”
This back-and-forth does little to build workers’ trust in AI. This is particularly true in Australia, where only 36% of people are willing to trust AI and 78% are concerned about negative outcomes from its use.
In heavily regulated sectors like government, this lack of AI understanding almost always defaults to one thing: risk avoidance. Leaders are afraid of making the wrong move, so instead of moving forward incrementally, they freeze.
In our work with government agencies, we see this play out often. Leaders are curious about AI, and the appetite is growing, but without foundational literacy, the response is usually cautious to the point of inaction.
This fear-based approach creates two major issues. First, over time it breeds a culture of stagnation: people wait for perfect clarity or perfect conditions, like a policy or a regulation, before taking action. But that clarity rarely arrives. The technology evolves too fast, and the benchmarks keep shifting.
Second, while organisations wait, the capability gap widens.
At The Strategy Group, we help governments and other complex organisations move past this paralysis. That starts with building AI literacy at every level, so that risk can be understood and weighed against opportunity. When people feel confident, they’re far more willing to experiment. And when experimentation is encouraged, meaningful progress becomes possible.