

On June 12, I moderated a discussion at the University of Wollongong in Dubai, opening with a deceptively simple question: What does it mean to be “AI literate” in 2025?
But before we even get there, we should ask: What do we mean by artificial intelligence (AI)? The term collapses a wide range of systems—from predictive models to generative agents—into a single catch-all, masking how differently each functions and governs its users.
As technology journalist Karen Hao observes, the word “AI” often obscures more than it reveals. She likens it to “transportation,” a word that sounds singular but refers to an entire ecosystem of modes, roads and rules. Likewise, AI isn’t a single technology but a sociotechnical system that carries information, mediates decisions and reshapes how we learn, work and relate to the world.
Which raises the deeper question: Where is that system taking us?
That same question is at the heart of current debates around talent development: How do we define and teach AI literacy in ways that prepare people not just to use these systems but to think critically about them—and alongside them?
With educators, industry leaders, strategic foresight specialists and students in the room, one thing became clear: We are moving so fast that we confuse usage with understanding. And we may be doing a disservice to the very people we’re trying to empower.
Today’s AI tools can pass the Turing Test, generate websites and even outperform junior candidates in mock job interviews. In one Reddit experiment, users unknowingly engaged with AI chatbots for days. Meanwhile, a recent study shows employers are rapidly adopting AI across industries—even as workers’ trust in AI tools is declining. The UAE has responded boldly, making AI education mandatory in schools. But as one commenter put it, AI might help junior employees look good to their bosses, but it doesn’t teach them how their business works.
The risk isn’t just overreliance—it’s erosion.
Too often, AI is positioned as a shortcut, but learning requires friction and thinking takes time. In the rush to prepare students for a tech-driven job market, we risk stripping away the very skills that make them resilient, critical and creative. Are we teaching the next generation to think—or just to prompt? You have to understand the fundamentals before you can play with them—whether in jazz, architecture or writing. The same goes for AI—without strong cognitive foundations, it becomes a crutch, not a catalyst.
That’s why AI literacy alone isn’t enough. We need to design for what some call cognitive complementarity—educational and workplace experiences where AI enhances human insight rather than replacing it. This isn’t just philosophical; it’s strategic.
A recent Apple study, The Illusion of Thinking, challenges the hype by moving past familiar benchmarks: The researchers tested top AI models on unfamiliar, puzzle-based tasks designed to probe genuine reasoning ability. The outcome? As complexity increased, accuracy dropped dramatically. Even when the models were given step-by-step algorithms, they struggled to apply them.
The takeaway: Today’s most powerful AI systems are exceptional at pattern matching, not true reasoning. One panelist put it succinctly: What remains innately human is the ability to learn, unlearn and relearn. If anything, the study underscores that we still have a long road to true artificial general intelligence. And in the meantime, the human mind remains our most adaptive asset.
This insight becomes even more urgent when considered against the economic context.
Globally, youth unemployment hovers around 13%. Entry-level jobs are often the first to be automated—and for many young people, that means being automated out of the workforce before they’ve even had the chance to develop real-world experience. If we don’t equip young people with the human capabilities machines can’t replicate—curiosity, judgment, adaptability—we risk widening the skills gap and deepening frustration on both sides of the hiring equation.
So, what can we do? First, we need to redefine what “future-ready” education looks like. It must go beyond digital familiarity to include systems thinking, ethical reasoning, and collaborative problem-solving. It also means cultivating soft skills—communication, emotional intelligence and the ability to work across cultures and disciplines.
Second, we need to teach students not just how to use tools but how to understand them. That includes building a mindset of lifelong learning and adaptability, especially as technologies change. Ethical responsibility must also become a core part of curricula: Not just what AI can do but what it should do.
Third, industry and academia must co-create experiences that expose students to real-world ambiguity, not just template outputs. When educators and employers work closely together, students are better prepared to navigate uncertainty and unpredictability.
Finally, we must resist the temptation to treat AI as an answer when it should be a provocation. AI will keep evolving; so must our thinking.
True resilience lies not in mastering tools but in deepening our relationship with meaning. If we get this right, we won’t just prepare students for the future; we’ll help them shape it.
The task ahead isn’t just about curriculum reform or better workplace tools. It’s about building ecosystems where human insight and machine capability evolve in tandem. That’s the future of talent development, and it’s one we must design deliberately before it designs us.