For Luxembourg law firms, in-house legal teams, and the wider professional-services ecosystem that depends on them, the question of whether the EU AI Act would be quietly re-timetabled has now been answered. On 28 April 2026 the second political trilogue on the European Commission's Digital Omnibus on AI — which would have pushed high-risk-system compliance from August 2026 to December 2027 — ended without agreement. Unless a deal lands before 2 August 2026, the original AI Act timetable, including its high-risk obligations, applies as written.
That changes the planning posture of every cabinet that has piloted AI tools over the last 18 months — and every CSSF-supervised firm whose AI use enters Annex III categories. The discount window on enforcement risk is, for now, closed.
What 2 August 2026 actually triggers
From that date, providers of high-risk AI systems placed on the EU market must comply with the full chapter of obligations: a quality management system, technical documentation, conformity assessment, CE marking, EU database registration, and post-market monitoring. Deployers carry their own duties, chief among them, for legal practice, meaningful human oversight. The financial-sector application of high-risk obligations kicks in on the same date.
For lawyers, the most relevant Annex III category is "AI systems intended to be used by a judicial authority or on its behalf to assist a judicial authority in researching and interpreting facts and the law". That captures certain document-analytics and judgment-prediction tools when used in judicial-support roles — including external-counsel deployments commissioned by courts or government clients. It does not, by itself, sweep in standard contract-review, eDiscovery, or back-office automation tools.
The harder cases are at the perimeter. AI systems used in access to essential services, creditworthiness assessment, employment decisions, or law-enforcement support all sit inside Annex III. Lawyers advising banks, insurers, employers, and government clients will be sending and reviewing AI-Act conformity packages alongside the contractual documentation that traditionally accompanies vendor selection.
How Luxembourg has wired the regulator side
The Grand Duchy's draft implementing law, Bill 8476, introduced into the Chamber of Deputies in December 2024, designates a multi-headed competent-authority architecture rather than a single AI super-regulator (the AI Act is a regulation, so the bill implements rather than transposes it):
- CSSF for AI systems connected to financial services.
- Commissariat aux Assurances (CAA) for insurance-related AI systems.
- Autorité luxembourgeoise indépendante de l'audiovisuel (ALIA) for transparency-related and audiovisual aspects.
- Commission nationale pour la protection des données (CNPD) for the data-protection interface, including biometric and emotion-recognition systems.
The architecture mirrors the country's sectoral supervisory map. For multi-jurisdictional firms, this means AI conformity files travel through familiar regulatory channels rather than a new clearing house — but it also means a single high-risk system may face overlapping inquiries.
The penalty grid
The AI Act's penalty regime sits at three tiers, calibrated to severity, with each ceiling set at the higher of a fixed amount and a share of worldwide annual turnover:
- Up to €35 million or 7% of worldwide annual turnover for breaches of the prohibited-practices catalogue (already in force since 2 February 2025).
- Up to €15 million or 3% of turnover for non-compliance with high-risk-system obligations and other operative provisions.
- Up to €7.5 million or 1% of turnover for the supply of incorrect or misleading information to authorities.
For SMEs and start-ups, the Act caps each fine at the lower of the fixed amount and the turnover percentage, and Luxembourg's implementing law is expected to apply that relief.
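The tiered ceilings above reduce to a one-line rule: for most operators the applicable maximum is the higher of the fixed amount and the turnover percentage, while for SMEs and start-ups it is the lower of the two. A minimal Python sketch of that arithmetic (function name and turnover figures are illustrative; check the final Luxembourg text before relying on the SME carve-out):

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float,
             turnover_eur: float, sme: bool = False) -> float:
    """Illustrative ceiling for one AI Act fine tier.

    Most operators: the HIGHER of the fixed amount and the turnover share.
    SMEs/start-ups: the LOWER of the two (subject to the final national text).
    """
    pct_amount = turnover_pct * turnover_eur
    return min(fixed_cap_eur, pct_amount) if sme else max(fixed_cap_eur, pct_amount)


# Tier 1 (prohibited practices): €35m or 7% of worldwide turnover.
# For a €2bn-turnover group, 7% (€140m) exceeds the fixed cap.
large_group = max_fine(35_000_000, 0.07, 2_000_000_000)
# The same turnover treated as an SME would be capped at the €35m fixed amount.
sme_capped = max_fine(35_000_000, 0.07, 2_000_000_000, sme=True)
```

The same function covers the €15m/3% and €7.5m/1% tiers by swapping the first two arguments.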
The cabinet checklist between now and August
Conversations with general counsel and managing partners across the Place suggest five concrete workstreams:
- AI inventory and Annex III mapping. Every system used internally and every system the firm advises clients to deploy needs to be classified — prohibited, high-risk, limited-risk, or minimal — and the rationale documented.
- Provider-vs-deployer analysis. Most cabinets are deployers. But firms developing in-house chatbots or workflow automations may cross into provider territory, with a heavier compliance burden.
- Human-oversight design. For high-risk uses, oversight is no longer best practice — it is a legal requirement. The control points need to be specified, trained for, and recorded.
- Conformity packages for client-deployed systems. Where a firm has bundled AI into a client-facing service offering, the conformity assessment, technical documentation, and registration steps fall on the firm.
- Vendor contracts and indemnities. Standard-form order terms no longer cut it. Cabinets are renegotiating the allocation of conformity, monitoring, and incident-response obligations with their AI suppliers.
What the failed trilogue did not change
One useful point of clarity from the 28 April outcome: even if the Digital Omnibus is eventually adopted, the prohibited-practices regime remains in force from February 2025, the general-purpose AI obligations are live from August 2025, and the fines schedule is unchanged. Firms that read the Omnibus debate as a green light to slow down have, in retrospect, been working from an over-optimistic premise.
The realistic planning assumption for the rest of the second quarter and the summer is that the August 2026 calendar holds. Anything else is upside.

