AI & Governance · May 4, 2026 · 8 min read

The EU AI Act delay didn't happen. And that's not even your biggest problem.

What the failed trilogue of 28 April really means for executives, insurance intermediaries and entrepreneurs.

Image: Cédric Puisney – Own work, CC BY-SA 3.0, via Wikimedia Commons

On 28 April, the postponement package for the AI Act — the Digital Omnibus — came up for negotiation for the third time. For twelve hours, the European Commission, Parliament and Council sat around the table with one question: does the AI Act shift by sixteen months? It came to nothing. No agreement. The next round is scheduled around 13 May. And until something is agreed, the original deadline of 2 August 2026 remains hard.

But this is not the real problem.

The real problem is that the majority of executives, insurance intermediaries and entrepreneurs don't even know this law exists. Let alone that they heard a delay proposal was on the table — one that collapsed last week.

This is not a detail. This is exactly where the difference lies between organisations that will be the preferred partner and organisations that will be scrambling in panic.


TL;DR The AI Act trilogue of 28 April failed over a difficult chapter on conformity assessments for AI in medical devices and machinery. Until an agreement is reached, 2 August 2026 remains the deadline. But the biggest risk is not in Brussels. It is in awareness. Most executives don't even know the law exists. Those who take stock now, set up governance and meet AI literacy requirements aren't building a checklist — they're building capability. And that difference will show up in which organisations get chosen.


What was on the table on 28 April

The AI Act was adopted in 2024. The law works in phases. High-risk AI systems must comply with stricter requirements from 2 August 2026. Risk management. Data quality. Transparency. Human oversight. Cybersecurity. For the insurance sector, this touches everything related to risk assessment, premium calculation, claims and fraud detection. That is not one peripheral system. That is the core.

The proposal in the Digital Omnibus was clear. High-risk obligations for standalone AI systems shift to 2 December 2027. For AI embedded in other regulated products, to 2 August 2028. That sounded like breathing room.

The sticking point on 28 April was Annex I. That is the chapter governing how AI in already-regulated products must be assessed. Think medical devices, industrial machinery and in-vitro diagnostics. The question: how does the AI Act relate to existing safety laws in those sectors? The parties couldn't agree. And without a solution there, no delay. You can read more at thenextweb.com.

The Cypriot presidency of the EU Council wants the file wrapped up by 30 June. If that fails, it rolls over to the next presidency. And then you're in July 2026 — one month before the original deadline. One month.

The awareness gap is the real risk

A law can be as strict as it wants. If your target audience doesn't know the law exists, the law misses reality entirely. And in this case, it does.

Ask the average executive what a conformity assessment is. Silence. Ask whether they know they need CE marking for their fraud model. Silence. Ask whether they know Article 4 on AI literacy, which has been in force for more than a year and whose enforcement is about to start. Silence.

This is not a reproach. This is a systemic failure. The law was adopted in 2024. The Commission is now tinkering with a delay that dominates the agenda in specialist circles. And at the same time, the audience this is all about knows nothing.

What does that mean for you? If you are reading this, you are no longer in the group that doesn't know the law. You are in the minority that does.

Why waiting is the most expensive choice

The problem is not the deadline. The problem is what must happen before the deadline. And most people don't see that.

Conformity assessment. Three words that in practice quickly mean a year's work, depending on how many high-risk systems you touch. It is a process to prove in advance that your AI complies with the law. CE marking is the stamp you receive once that proof is in place. Two concepts that sound bureaucratic but in execution amount to a renovation.

Here is my point. Even if the delay goes through in May, the time pressure remains significant. If you want to comply by December 2027, you need to complete implementation by January 2027. That means starting the assessment in March 2026. That means right now, in May 2026, your governance, documentation and AI inventory need to be in order. If the delay does not come through, you have three months left. Not three years.

Waiting for Brussels is not a strategy. It is betting on something you have no influence over. And it is wrong twice over. First because the delay is not certain. Second because the work is too large to complete in the remaining time.

What conformity assessment and CE marking actually are

Here is the explanation missing from most articles.

A conformity assessment is not a retrospective audit. It is a process in which you prove in advance that an AI system meets the requirements of the AI Act. Before it reaches the market or is put into use. For insurers and intermediaries, this means: for every AI system that influences creditworthiness, premiums, claims or access to insurance, you must be able to produce a file. How it was trained. On what data. With what test results. What risks you found. How humans oversee it.

CE marking is the visible proof that you have gone through this process. Just like on a children's toy or a medical device. For AI it is new. For regulators it will soon be the only way to see whether your system is ready to be used in assessing people.

A conformity assessment consists of four blocks:

  • Risk management. A continuous, documented process to identify, assess and reduce risks. Not once at the start. Ongoing, throughout the entire lifecycle of the model.
  • Data quality and bias testing. Evidence that your training data is representative, that you have tested for bias, and that errors in your data don't feed through into decisions that affect people. In insurance, that is not an abstract idea. That is the difference between a fair premium and a rejected application based on a postcode.
  • Transparency and logging. Log files of what the system does, decides and recommends. Documentation that can be explained to a regulator who is not a specialist.
  • Human oversight. A member of staff who understands the button. And who knows when not to press it.

Each block requires its own documentation, its own owner, its own audit trail. This is not a box-ticking exercise. This is a renovation.
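To make the data quality block concrete: here is a minimal sketch of what a documented bias test could look like. It assumes a simple demographic-parity check on model decisions grouped by postcode region — the group labels, sample data and any threshold you attach to the gap are illustrative, not something the AI Act prescribes.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Approval rate per group and the largest gap between groups.

    decisions: list of (group, approved) tuples, e.g. the postcode
    region of an applicant and whether the model approved them.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Illustrative outcomes from a premium or claims model.
sample = [("1011", True), ("1011", True), ("1011", False),
          ("9741", True), ("9741", False), ("9741", False)]

rates, gap = demographic_parity_gap(sample)
# A gap above an internally agreed threshold should trigger review,
# and the result belongs in the assessment file, not in a drawer.
```

The point is not this particular metric. The point is that the test runs on a schedule, produces a number, and that number is logged with a date and an owner.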

And don't forget Article 4. The duty of AI literacy has been in force since 2 February 2025. Enforcement starts in August 2026, according to the Dutch Data Protection Authority. That is not a future problem. That is an existing breach if you haven't done anything about it yet.

How to do it right: three tracks simultaneously

Don't wait for the May outcome. The original deadline remains your anchor. Three tracks simultaneously.

1. Inventory

Which AI systems are running? Which suppliers use AI in the chain? What is high-risk and what is not? A serious inventory takes a few months. And the picture always turns out to be bigger than anticipated. Supplier AI in particular hides well. In CRM, in telephony, in claims software, in premium calculators, in fraud detection. Things you were already using and didn't realise contained a model.

Many suppliers in this chain have no idea what a conformity assessment is. Not bad faith. Not incompetence. They have simply never been confronted with the question. Those conversations take months, not weeks. Start them now.

2. Governance

Who is responsible for which system? How are models approved before going live? Who sees incidents and who escalates? Don't reinvent this. Attach it to the risk and compliance structure you already have. That is where it needs to land anyway.

Per system: one owner for risk, one for data, one for human oversight. No committees. No working groups. Named owners.

3. Literacy

AI literacy is not an optional course and not a future concern. The legal duty under Article 4 has been in force for more than a year. The difference between organisations that invest seriously in this and organisations that tick an e-learning box will show up from August onwards in incidents and in audits.

And literacy is different from proficiency. Teaching people to work with ChatGPT is fine. Teaching people to spot when AI is wrong, to assess when an outcome doesn't add up, to know when to intervene — that is what the law wants. That takes more than a morning's training.

The difference between a checklist and capability

What you see in organisations that are moving now: they are not building a checklist, they are building capability. The difference is substantial.

A checklist is ticking boxes and hoping you never have to show it. Capability is being able to respond faster when regulators ask questions. Capability is having your own view on which suppliers make the cut. Capability is being agile when the legislation tightens further in 2027. Because it will.

And the commercial effect is even greater. An insurance intermediary that can demonstrate its AI foundation is solid will become the preferred partner of insurers who are themselves under pressure. An executive who can explain how their organisation stands will escape the panic that will sweep through the sector. Because that panic is coming. Just like with GDPR in 2018.

According to IAPP, the original deadline currently remains legally in force. Translate that: a regulator calling on 3 August 2026 will apply the law as it stands. Whether your lawyer wins that argument before a judge is a gamble. I wouldn't count on it.

Three things I wouldn't spare anyone

Documentation is not a side issue. It is the work. The vast majority of a conformity assessment is producing evidence. No evidence, no marking. And reconstructing evidence after the fact is more expensive than recording it upfront. Much more expensive.

Suppliers are your blind spot. A lot of AI in the chain sits in third-party tools. Customer service bots, claims software, fraud models, premium calculators. If they cannot demonstrate compliance, neither can you. Start those conversations now. Not once the Omnibus finally goes through.

Awareness precedes compliance. As long as the board and management team don't know what the law requires, every implementation stays patchy. The AI literacy duty under Article 4 does not start at the bottom of the organisation. It starts at the top. I wrote about the boardroom dimension of this earlier in The Boardroom Blind Spot.

How to use the coming weeks

What would I do if I were starting tomorrow?

  • Week one. Map all AI applications. Your own systems and those of your suppliers. Make a list, categorise by risk, mark which ones are high-risk under the AI Act definitions.
  • Month one. Appoint owners, by name. No working group, no committee.
  • Quarter one. Run a first trial assessment on your most important system. Not to pass. To discover what is missing. The gaps you find tell you where you actually stand.
  • After that. Set up your evidence process. Logging, model documentation, test results, bias reports, incident registration. Make sure it is generated automatically, not reconstructed manually afterwards.
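The inventory and the named-owner rule above can be sketched as a simple register. This is a hypothetical structure, not a format the Act prescribes — the field names, risk classes and example systems are mine — but it forces the right questions per system: who owns it, what risk class, where does the evidence live.

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    name: str
    supplier: str           # "internal" or a third-party vendor
    purpose: str
    risk_class: str         # "high" / "limited" / "minimal", per your own mapping
    risk_owner: str         # a named person, not a committee
    data_owner: str
    oversight_owner: str
    evidence: list = field(default_factory=list)  # links to logs, test reports

    def is_high_risk(self):
        return self.risk_class == "high"

# Two illustrative entries: one vendor model, one harmless internal tool.
register = [
    AISystem("fraud-score", "VendorX", "claims fraud detection",
             "high", "J. Janssen", "M. de Vries", "A. Bakker"),
    AISystem("mail-router", "internal", "routes incoming mail",
             "minimal", "J. Janssen", "M. de Vries", "A. Bakker"),
]

# The trial assessment starts with exactly this list.
high_risk = [s.name for s in register if s.is_high_risk()]
```

A spreadsheet does the same job. What matters is that every high-risk entry has three named owners and a growing evidence list, generated as the system runs.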

Whether the Omnibus finally goes through in May or not, it makes no difference to your planning. Waiting for Brussels is not a strategy. It is betting on something you have no influence over. And the organisations that don't gamble will build a lead you can no longer close.

The delay didn't go through. But the real advantage does not start at the deadline. It starts with knowing the law exists at all. That is you, if you have read this far. Most others haven't.
