A friend of mine recently told me he no longer works with ChatGPT. Not on principle, and not because he had found something better, but because his IT department had blocked access. A few days later I asked how he was getting along without AI. He started laughing. "What do you mean without? Everyone now works with Claude. They forgot to block that one." Since then, his entire team has been working with a chatbot his organisation doesn't even know is in use.
This is not an isolated case. This is how most organisations actually deal with AI right now. Whatever IT locks down, someone finds a way around. And as long as the boardroom believes that blocking is a synonym for being in control, the problem keeps growing under the radar.
TL;DR: Shadow AI, the AI use that happens beyond the view of IT and the board, is now growing faster than most organisations can keep up with. Research shows that nearly half of employees use AI tools without permission, and that even top management lets it slide. Blocking no longer works, because new tools appear every week and existing software is quietly being given AI features. The only way out is not a ban, but an active policy: get AI literacy in order, provide good paid tools, and keep training your people, because the field is moving fast.
What is shadow AI exactly?
Shadow AI is the same story as shadow IT, but several notches more intense. Employees use AI tools, browser extensions or AI features inside existing software without IT having any visibility, let alone control. The difference from the old shadow IT is that AI doesn't just read documents or write emails. AI processes, remembers, and increasingly takes action. And it does so on data you would rather not have leaked.
The numbers are sobering. Zscaler reported in mid-April that its customers collectively use more than 3,400 different AI applications at work. A four-fold increase in twelve months. Data flowing into those applications exceeded 18,000 terabytes in 2025. Research by BlackFog among 2,000 employees of companies with more than 500 staff shows that 86% use AI weekly at work and that 49% do so without their employer's permission. Most striking: 69% of senior executives and C-suite members in that survey are fine with it.
That last figure cuts to the core. Shadow AI is not a problem of stubborn employees. It is a problem of leadership that wants quick answers from AI itself and has no time to follow procedures.
Why blocking shadow AI doesn't work
I get the reflex. A tool causes risk, so you block it. That has worked for years across all kinds of technology. But AI is not like other software. AI enters your organisation through three channels at once.
The first route is public AI. ChatGPT, Claude, Gemini and dozens of others run straight in the browser. You can block them via your network, but then someone opens their phone, switches the laptop to 5G, or forwards a document to their personal email to push it through a chatbot at home. My friend's story shows how fragile that control is. ChatGPT blocked, Claude overlooked, end of story.
The second route is built-in AI features inside the software you already have. Microsoft, Google, Salesforce and almost every other SaaS vendor roll out new AI features every week inside tools that are formally sanctioned. JumpCloud research estimates that in 2026 about 70% of all workplace AI interactions take place via AI embedded in already-approved software. That is hardly distinguishable from regular use anymore.
The third route is agents and browser extensions. Someone installs a handy extension that summarises their email. That extension gets OAuth access to the entire mailbox. Someone else connects an AI assistant to their calendar, or to a shared team drive. The extension or agent then inherits the same rights as the employee, which is usually far more than anyone would knowingly authorise. Push Security recently published a detailed analysis of a Vercel breach in which exactly this kind of shadow integration was the entry door.
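To make the scope problem concrete, here is a minimal sketch, assuming Google's OAuth 2.0 flow and the google-auth-oauthlib Python library, of the consent step such an extension walks a user through. The client_secret.json file is a hypothetical credentials file, but the scopes are Google's real Gmail scopes.

```python
# Minimal sketch of an extension's OAuth consent step (illustrative,
# not any specific vendor's code). Requires: pip install google-auth-oauthlib
from google_auth_oauthlib.flow import InstalledAppFlow

# The scope many "summarise my email" tools request: full mailbox access,
# including reading, sending and deleting every message.
BROAD_SCOPES = ["https://mail.google.com/"]

# The least-privilege alternative a careful vendor could request instead.
NARROW_SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]

flow = InstalledAppFlow.from_client_secrets_file(
    "client_secret.json",   # hypothetical app credentials
    scopes=BROAD_SCOPES,    # one click on "Allow" grants all of this
)
credentials = flow.run_local_server(port=0)

# From here on, the token behaves exactly like the employee: the same
# mailbox rights, with no further involvement from IT.
print("Granted scopes:", credentials.scopes)
```

The uncomfortable part is that nothing in this flow is malicious. The employee clicked "Allow" on a legitimate consent screen; IT simply never saw it happen.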
Add up these three routes and it becomes clear that blocking as a strategy has gone bankrupt. You are mopping the floor while the tap is running, and the tap is everywhere.
The real risk is no longer in a prompt
Many shadow AI discussions still revolve around what employees type into a chat window. Company data in a prompt, customer data in an AI tool, an internal presentation that accidentally ends up in an external model. That is a real risk, but by now it is the smallest part of the problem.
In recent years, the AI world has made a major leap towards agents. Tools that don't wait for a question, but take action on their own. Anthropic released Claude Opus 4.7 in mid-April, a model that according to the vendor can handle difficult coding tasks autonomously with minimal supervision. In the same period it launched Claude Managed Agents in public beta, a platform to run Claude as an autonomous agent inside your own workflows. And Claude Cowork became generally available for macOS and Windows, with a feature that lets Claude literally take over the mouse and keyboard on your computer to perform tasks while you are offline.
These are developments from the past four weeks. Four! What is "a chatbot my IT department doesn't know about" today can be an autonomous agent working independently with your CRM, your mail and your customer database within a month. The accountability question shifts completely. A bad prompt is an annoying leak. An agent acting on faulty assumptions can trigger a chain of decisions nobody consciously asked for: a refund approved, a customer email sent, a record modified. All without a human in the loop.
That is precisely the kind of issue the boardroom blind spot exposed earlier. Boardrooms tend to treat this as an operational detail. In reality it is a first-order governance question.
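What the missing human in the loop could look like in practice: a minimal sketch, entirely my own construction rather than any vendor's feature, of an approval gate that parks irreversible agent actions until someone signs off. The action names mirror the examples above and are hypothetical.

```python
# Minimal sketch of a human-in-the-loop gate for agent actions (hypothetical;
# real agent platforms expose this differently, if at all).
IRREVERSIBLE_ACTIONS = {"approve_refund", "send_customer_email", "modify_record"}

def execute_action(action: str, payload: dict, approved_by: str | None = None) -> str:
    """Run an agent-proposed action, requiring sign-off if it is irreversible."""
    if action in IRREVERSIBLE_ACTIONS and approved_by is None:
        # Park the action instead of running it; a human decides later.
        return f"QUEUED for review: {action} {payload}"
    return f"EXECUTED: {action} {payload} (approved by {approved_by or 'n/a'})"

# An agent acting on a faulty assumption hits the gate instead of the customer:
print(execute_action("approve_refund", {"order": "12345", "amount_eur": 250}))
print(execute_action("approve_refund", {"order": "12345", "amount_eur": 250},
                     approved_by="finance@company.example"))
```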
Three steps against shadow AI that actually work
Good news: the solution doesn't require a multi-million investment or a new department. It does require a shift in thinking. If you can't keep your people away from AI, you have to make sure they use it well. Three steps help.
1. Take AI literacy seriously, because it is a legal obligation
Since 2 February 2025, Article 4 of the European AI Act has required every organisation that uses or provides AI to ensure "a sufficient level of AI literacy" among its staff. This duty applies to every form of AI use, including the employee using ChatGPT to write an email. From August 2026, national supervisors will start enforcement. Fines run up to 7.5 million euros or 1.5% of global annual revenue, whichever is higher. What stands out: in almost every conversation I have with executives, this is still a blind spot. Everyone knows the AI Act in broad strokes, almost nobody knows the literacy obligation has been in force for over a year.
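To put that "whichever is higher" clause in perspective, a back-of-the-envelope sketch using the figures quoted above. The revenue number is invented; the useful observation is that the percentage overtakes the fixed amount once global revenue passes 500 million euros.

```python
# Back-of-the-envelope for the fine ceiling quoted above: the higher of
# EUR 7.5 million or 1.5% of global annual revenue.
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    # The 1.5% term dominates once revenue exceeds 7.5M / 0.015 = EUR 500M.
    return max(7_500_000.0, 0.015 * global_annual_revenue_eur)

print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 2bn revenue -> EUR 30,000,000
```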
2. Give your people good paid tools
People look for AI tools because they need them for their work. If you don't provide them, they go searching themselves and end up on free versions you have no control over. The free version of an AI tool often uses your input for further model training. The business version usually does not. By making a small set of approved tools available, with proper enterprise data and logging settings, you solve three problems at once. You meet your duty of care, you keep the productivity gains, and you gain visibility into who uses what. It costs money, but far less than a data breach or an AI Act fine.
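That visibility into who uses what can start as something no fancier than a maintained register. A minimal sketch, with invented field names and an invented example entry, of what such a register could record per approved tool:

```python
# Minimal sketch of an approved-AI-tools register (field names and the example
# entry are invented; adapt to your own governance process).
APPROVED_AI_TOOLS = [
    {
        "tool": "ExampleChat",          # hypothetical vendor
        "plan": "Enterprise",           # paid tier, not the free version
        "trains_on_our_data": False,    # contractual opt-out confirmed
        "audit_logging": True,          # usage is visible to IT
        "owner": "CISO office",         # who answers questions about it
        "reviewed": "2026-04-01",       # last check of the vendor's terms
    },
]

def is_approved(tool_name: str) -> bool:
    """True if the tool is registered with training opt-out and logging in place."""
    return any(
        entry["tool"].lower() == tool_name.lower()
        and not entry["trains_on_our_data"]
        and entry["audit_logging"]
        for entry in APPROVED_AI_TOOLS
    )

print(is_approved("ExampleChat"))     # True
print(is_approved("RandomFreeTool"))  # False: not in the register
```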
3. Keep training, because it changes weekly
This is perhaps the hardest point. AI literacy is not a one-off course you complete and tick off. The pace of change means a training course from six months ago is already outdated. Just look at what happened in the last four weeks within one AI product line. Anthropic released a new flagship model with better vision and autonomous reasoning. It launched Claude Design, a design tool that builds presentations and prototypes inside a chat window. The Excel and PowerPoint integrations got full context exchange between sessions. Memory from earlier chats became available to all users, even on the free tier. And an agent can now perform tasks on your computer from your phone, while you are not even at that screen. This is one vendor, in one month. Anyone who trained their staff a year ago and thinks the job is done is now hopelessly behind.
From blocking to guiding
The deeper issue is this: organisations are used to managing risk by blocking things. IT, security and compliance functions are built for that. But AI doesn't behave like a traditional risk. It is not a virus or phishing email you can stop at the gate. It is a productivity accelerator that employees actively want to use, and that gains broader capabilities every month.
That demands a different posture. Not "ban it until proven otherwise", but "guide it so the risks remain manageable". This is not a soft choice; it is the only workable one. Executives who see it differently will lose to their own organisation. Not because their people defy them, but because those people were never offered a practical alternative.
I started this piece with an anecdote about a friend working with Claude because ChatGPT was blocked. The funny part is that his IT department probably thinks they have ticked the AI discussion off their list. In reality they have only relocated it. The real conversation about what AI means for the organisation, who is responsible for what, and how you keep control over a technology that mutates every month, has yet to begin.
That conversation does not start in IT. It starts in the boardroom.
Sources
- Zscaler on 3,400 AI apps and 18,000 TB of data flows, via UC Today (April 2026): uctoday.com
- BlackFog research on shadow AI among 2,000 employees, via CIO.com: cio.com
- JumpCloud on embedded AI in SaaS applications: jumpcloud.com
- Push Security on the Vercel breach and OAuth sprawl: pushsecurity.com
- Anthropic on Claude Opus 4.7: anthropic.com
- Anthropic release notes (Cowork, Design, Excel/PowerPoint, Managed Agents updates): support.claude.com
- EU AI Act, Article 4 (official text): artificialintelligenceact.eu
- Compliquest on AI literacy, enforcement and fines under Article 4: compliquest.com
- Berenschot on the AI Act and AI literacy in the Netherlands: berenschot.nl
- Guldemond Advocaten on the Dutch enforcement deadline of August 2026: guldemondadvocaten.nl
