Let me be honest. I have been living in the 'AI bubble' for a while now. You know that place where generative AI (GenAI) is the conversation of the day. Where we experiment daily with ChatGPT, Gemini, Claude, and other tools that can increase team productivity and support decision-making. In my bubble, I see the potential: it can radically improve work processes, reduce operational costs, and accelerate strategic insights.
The AI world is moving at breakneck speed. ChatGPT achieved in just two months what the internet took seven years to accomplish: 100 million users. In my bubble, it seems logical that every management team should have this as a top priority on the strategic agenda, doesn't it?
To my amazement, this view from inside the bubble is completely at odds with the reality inside many companies. When I talk to entrepreneurs outside my bubble, I hear: "AI? Yes, we have a working group for that..." or "We're waiting to see if it proves itself in our sector." It feels like I'm talking about a revolutionary production line that could determine competitive position, while other management teams are still debating whether to replace the fax machine.
This underestimation is not new. We saw exactly the same pattern with the rise of the internet, the personal computer and the mobile phone. In 1995, a senior manager at a technology company dismissed the internet as "a fad that will pass". Many executives in the early 1980s saw the PC merely as a "fun gadget for a few departments". And smartphones? They were initially dismissed as "BlackBerries for consumers".
In all cases, this underestimation caused established companies to lose their market position to newcomers who did recognise the transformative power of these technologies. History is repeating itself with AI, but the difference is that the adoption curve is much steeper and the consequences of being late may be even more profound.
And that, dear reader, is the core of the "Generative AI Paradox": the strategic potential is unprecedented, yet it is widely underestimated, or not seen at all, and implemented only to a limited extent. This gap between what is possible and what is actually happening in organisations is what we really need to talk about.
My bubble: Why I am so surprised
In my bubble we see AI as a "strategic game-changer" — a technology that can not only reduce operational costs, but also accelerate decision-making and drive innovation.
Although I can cite all sorts of impressive figures about ROI and achieved cost savings, recent research makes it clear that AI implementation remains superficial at most companies: isolated pilot projects rather than the organisation-wide, transformative integration that management should have in mind.
In my bubble, I am constantly occupied with the exponential growth of AI-driven market share. We see how forward-thinking companies are increasing their competitiveness with deep-research tools, video creation via Sora, and entirely new business models. Entrepreneurs outside the bubble struggle with this. Their strategic planning is accustomed to linear growth: step-by-step development. But AI transformation follows an exponential curve: slow at first, then suddenly at breakneck speed. This "exponential growth bias" causes many entrepreneurs and board members to systematically underestimate how quickly AI adoption can disrupt the market.
This partly explains my surprise: what feels like an urgent strategic priority to me is seen by other entrepreneurs as a 'nice-to-have' for the medium term.
The reality outside the bubble: Why entrepreneurs struggle with AI implementation
It is not one thing, but a mix of factors causing this gap between strategic potential and actual implementation:
The execution gap
Most entrepreneurs have heard of GenAI, but strategic depth is often lacking. Management boards simply do not know how to translate AI technology into concrete business cases. AI literacy at C-level and in SMEs (small and medium-sized enterprises) is desperately needed, just as financial or market knowledge is.
What shocks me: entrepreneurs and C-suite executives often have no idea how much their middle management is already experimenting with AI. They drastically underestimate the usage and thereby hold back investments in scaling up. Innovation coming from below is stalled by a lack of strategic sponsorship from above.
There is also a generational dynamic in management. Digitally native leaders adopt AI more quickly as a strategic weapon, while experienced executives are sometimes more sceptical or underestimate the impact on existing business models. They have often not concretely seen how AI can address their specific strategic challenges.
Lack of strategic vision
Many entrepreneurs lack a clear AI strategy linked to concrete business goals and KPIs. Without a clear transformation plan, AI often gets stuck in small, isolated pilot projects. Management teams do not know well why and where to deploy AI for maximum value creation.
They wait and see what competitors do, which leads to "strategic conservatism". Companies that do lead have a clear vision linked to business value. The rest struggle with these 'foundational issues'.
Fear and resistance
This is perhaps the most human factor. Employees are afraid that AI will take their jobs. Nearly a third fear that AI will curtail or replace their role. Some even admit to sabotaging their company's AI strategy — out of fear or because the tools do not work well.
The well-known motto "AI will not replace you, but someone using AI might" is motivating for some, but by no means reassuring for others.
The speed of change also causes stress. Nearly two-thirds of employees feel stressed because their role is evolving so rapidly. This leads to "AI anxiety" and risks to mental wellbeing.
For mid-career professionals, there is the additional challenge of unlearning old routines. The psychological barrier to starting over is high.
Cognitive biases
In addition to the "exponential growth bias", the "status quo bias" also plays a role: the tendency to keep doing things the way they have always been done. AI is complex, the benefits are sometimes only visible later, and there is not always an acute need to change.
Unrealistic expectations contribute to this: the idea that AI solves all problems directly and autonomously leads to disappointment when that does not happen.
Distrust also plays a role. Employees are concerned about inaccuracy and security risks. Societally, there is little trust in technology companies and governments to regulate AI properly.
The stakes are high: What does this gap mean?
The labour market is being disrupted
AI will fundamentally change the way we work over the next 5 to 10 years. Routine tasks will disappear first: data entry, administrative work, customer service.
Roles are shifting and transforming profoundly. Marketers use AI for analysis, doctors for diagnostics, project managers for planning. It is estimated that 70% of current jobs will have materially changed by 2030.
Employees lose tasks to AI, but that can free up time for more interesting work. At the same time, new roles are emerging: AI specialists, data experts, cybersecurity professionals, but also AI supervisors and data ethicists.
The impact on income inequality is uncertain. Some studies suggest that lower-skilled workers can actually achieve greater productivity gains from AI, while others say that the highly skilled will be able to extend their lead further.
Those who do not take action now risk falling behind while competitors advance.
Education must keep pace
Our traditional education system, focused on rote learning, is no longer sufficient. Knowledge is available everywhere with AI.
The focus must shift to effectively collaborating with AI systems. Students need to learn how to use AI tools and critically evaluate the output.
Education must focus on what AI cannot (yet) do well: critical thinking, creativity, complex problem-solving, ethics, and social skills.
Lifelong learning is becoming essential. The 'half-life' of technical skills is getting shorter. The ability to learn new things quickly is becoming more important than ever.
Crucial challenge: access to these new forms of learning must not lead to new inequality. People with fewer digital skills or resources risk falling behind.
Society is changing too
There is a real risk that AI will widen the gap between "AI haves" and "have-nots". Wealthier countries, sectors and individuals benefit more quickly. A "grey digital divide" threatens to emerge if older professionals are given fewer opportunities.
AI anxiety, stress from rapid changes and the pressure to continuously keep learning can lead to burnout. Chronic uncertainty can lead to anxiety disorders. We must be careful that AI does not lead to 'hyperproductivism'.
If AI takes over the 'easy' work, the more complex or human-centred work may remain. This can make work more interesting, but also more emotionally demanding. It can also undermine professional identity if someone no longer feels 'needed'.
Although AI can alleviate tasks, a large proportion of employees report that AI has actually increased their workload.
Towards a better future: How do we bridge the gap?
The situation may sound bleak, but there is hope. It is possible to move towards a "best-case scenario", in which we harness the enormous potential of AI without deepening inequality. This requires action from everyone:
For you
- Embrace a growth mindset and lifelong learning. Believe that you can learn new things, regardless of your age. Be proactive.
- Develop a balanced skill set: technical AI skills and human skills such as critical thinking, creativity and empathy.
- Integrate AI into your current role. Actively experiment with tools — this reduces anxiety and gives you an advantage.
- Be open to change and be willing to let go of old routines.
- Seek mentors and peers. Learn from younger colleagues how to work with tools, and share your experience with them (reverse mentoring).
For businesses
- Develop a clear AI vision and strategy linked to business goals. AI must be integral, not a standalone IT project.
- Invest heavily in training and reskilling. Reserve substantial budget for this.
- AI is intended as a complement, not a replacement. Emphasise that you are using AI to strengthen employees.
- Create new job descriptions in which collaboration with AI is explicitly included.
- Establish ethical AI frameworks and involve employees in implementation.
- Reinvest freed-up productivity partly for the benefit of employees.
For educational institutions and government
- Renew curricula. Integrate AI into all subjects and focus on AI literacy.
- Teach students higher-order thinking and collaboration with AI.
- Build an infrastructure for lifelong learning.
- Address inequality through taxes on AI profits or support for affected regions.
- Facilitate societal discussion about AI and work.
Conclusion: Bridge the gap, build the future
Looking back from my AI bubble, I now realise that my surprise at the lack of AI adoption stems from a lack of understanding of the complexity and fears that exist outside the bubble. It is not just ignorance, but also lack of direction, deep fears and our natural tendency to underestimate large, rapid changes.
The "Generative AI Paradox" — the gap between enormous potential and widespread underestimation — is real, but bridgeable. Bridging this gap is not only crucial for economic growth, but also for social stability and individual wellbeing.
If we invest in people now, anticipate change, and safeguard equality, we can steer the AI transformation of the coming years towards a future that is not only more productive, but also more humane. A future in which AI and humans work as partners: AI as assistant, the human as creative and empathetic director.
Let us stop waiting. Let us step out of our comfort zones (or bubbles), inform ourselves, experiment and start the conversation. Only together can we ensure that the promise of AI is fulfilled — not at the expense of, but for the benefit of us all.
