EU AI Act High-Risk Systems: The 84-Day Luxembourg Compliance Roadmap to August 2026
We are now exactly 12 weeks from 2 August 2026. That is the date the bulk of the EU AI Act applies, including the obligations on Annex III high-risk systems, and the date the Commission's enforcement powers over general-purpose AI models kick in. Luxembourg's national supervisory landscape (the CNPD for data-protection-adjacent matters, plus sector regulators including the CSSF and ITM, with overall AI Act coordination handled by the Ministry of the Economy in cooperation with ILNAS) has already signalled that it expects deployers and providers of high-risk systems to be in a documented state of readiness by then.
If you're using AI in any capacity that touches the Annex III high-risk categories — recruitment, credit scoring, insurance pricing, biometric identification, critical infrastructure, education access, essential public services, or migration / asylum / border management — this post is the 84-day plan we walk Luxembourg clients through right now. It is not the legal text. It is the sequence in which work has to happen if you want to be ready by 2 August without burning Q3.
First: confirm whether your system actually is high-risk
Most of the urgent questions land on systems whose owners either don't realise they are high-risk, or assume they are when they're not.
Almost certainly high-risk under Annex III if your system:
- Filters, scores, ranks, or recommends candidates in recruitment
- Makes or materially supports decisions on creditworthiness or credit scoring of natural persons
- Sets risk-based pricing for life or health insurance
- Identifies, categorises, or verifies natural persons via biometrics
- Determines access to or assigns persons to educational institutions or programmes
- Is used by public authorities to evaluate eligibility for essential public services or benefits
- Manages or operates critical digital infrastructure, road traffic, or supply of utilities
Almost certainly not high-risk:
- Marketing copy generation
- Internal RAG over your own documents
- Code completion, summarisation, translation
- Customer-service chatbots that route to humans on any consequential decision
- Document processing where a human signs off the outcome
Genuine grey zones we see weekly in Luxembourg:
- HR systems that "just suggest" candidates (probably high-risk if they materially shape hiring outcomes — the "purely accessory" carve-out is narrow)
- Insurance underwriting copilots that offer pricing recommendations to a human underwriter (high-risk if the human is rubber-stamping; not high-risk if the human genuinely reviews)
- AI-assisted credit decisioning at fintechs (almost always high-risk regardless of how it's framed)
- Educational assessment AI used by Luxembourg lycées and Université du Luxembourg admissions teams (high-risk under Annex III paragraph 3)
If you're in a grey zone, resolve it on paper now. Have a lawyer or compliance officer write a one-page determination memo, dated, with reasoning. The worst outcome on 2 August is having to defend "we didn't know" to ITM, the CNPD, or the CSSF. The second-worst is having ten different people in the company with ten different mental models of whether a system is in scope.
The seven-pillar high-risk readiness file
For every system you've classified as high-risk, you need a file. Auditors and regulators will ask for it. Every Luxembourg client who has gone through a CNPD or CSSF thematic review in the last 18 months has been asked for some version of these seven pieces:
- Risk management documentation. Iterative, system-lifetime risk assessment. Not the GDPR DPIA; this is wider.
- Data and data-governance documentation. What trained the model, what data feeds it in production, where it was sourced, what bias testing was done, how data quality is maintained.
- Technical documentation. What the system is, how it was built, what it can and can't do. Minimum content list is in Annex IV of the Act.
- Logging. Automatic event logging across the system lifecycle, retained for an appropriate period.
- Transparency and instructions for use. What deployers and end-users need to know to use it safely.
- Human oversight. How a human can interpret the output, override it, and intervene. Tied directly to the supervision pattern from our pilot-to-production playbook.
- Accuracy, robustness, and cybersecurity. Documented performance benchmarks and resilience to adversarial input, manipulation, or drift.
That's the file. The 84-day plan below is how you produce it without disrupting Q3 operations.
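As a rough illustration, the file can be tracked per system as a simple checklist. The pillar names follow the list above; the data structure itself is our convention for sprint tracking, not anything prescribed by the Act:

```python
# Illustrative sketch: track seven-pillar readiness per high-risk system.
# Pillar names follow the list above; the structure is purely our convention.
from dataclasses import dataclass, field

PILLARS = [
    "risk_management",
    "data_governance",
    "technical_documentation",
    "logging",
    "transparency",
    "human_oversight",
    "accuracy_robustness_security",
]

@dataclass
class ReadinessFile:
    system_name: str
    completed: dict = field(default_factory=lambda: {p: False for p in PILLARS})

    def complete(self, pillar: str) -> None:
        if pillar not in self.completed:
            raise ValueError(f"Unknown pillar: {pillar}")
        self.completed[pillar] = True

    def missing(self) -> list:
        return [p for p in self.completed if not self.completed[p]]

    def is_ready(self) -> bool:
        return not self.missing()

rf = ReadinessFile("ats-candidate-ranking")   # hypothetical system name
rf.complete("risk_management")
print(rf.is_ready())   # False until all seven pillars are done
```

The point of the structure is the `missing()` view: at any week of the sprint, each system should be able to answer "which pillars are still open?" in one call, which is exactly the question an internal reviewer (or regulator) will ask.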
The 84-day plan, week by week
This is what we walk clients through. The first 12 days are the most important — they catch the scope mistakes that cost months.
Weeks 1–2 (days 1–14): Inventory and classification
- List every AI system in production and every AI feature in tools you've procured (Microsoft 365 Copilot, Salesforce Einstein, your ATS's "AI matching", your finance platform's "AI insights")
- For each: deployer? provider? both?
- For each: high-risk under Annex III? grey zone? clearly out of scope?
- Output: a one-page register, signed by the executive responsible (CTO / COO / Head of Risk depending on org)
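The register output above can be sketched as a small data structure. This is an illustrative shape only — the field names, example systems, and memo file name are ours, not mandated by the Act:

```python
# Illustrative sketch of the one-page AI system register from weeks 1-2.
# Field names and example entries are hypothetical, not prescribed by the Act.
register = [
    {
        "system": "ATS candidate matching",
        "role": "deployer",             # deployer / provider / both
        "classification": "high-risk",  # high-risk / grey-zone / out-of-scope
        "annex_iii_point": 4,           # employment (Annex III, point 4)
        "owner": "Head of Risk",
        "determination_memo": "memo-2026-05-ats.pdf",  # hypothetical file
    },
    {
        "system": "Marketing copy generator",
        "role": "deployer",
        "classification": "out-of-scope",
        "annex_iii_point": None,
        "owner": "CTO",
        "determination_memo": None,
    },
]

# Systems that still need a seven-pillar file:
high_risk = [r["system"] for r in register if r["classification"] == "high-risk"]
print(high_risk)  # ['ATS candidate matching']
```

Whatever format you use, every entry needs the same four answers: what the system is, your role, the classification with its Annex III point, and who signed the determination.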
Weeks 3–4 (days 15–28): Pillar 1 + 2 — risk management and data governance
- Risk register per high-risk system
- Data lineage diagram per high-risk system
- Bias / fairness baseline established
- Output: pillars 1 and 2 of the file complete
Weeks 5–6 (days 29–42): Pillar 3 — technical documentation
- Annex IV-aligned technical doc per high-risk system
- If you procured the system, this is mostly the vendor's documentation plus your deployment-specific addendum
- Output: pillar 3 complete
Weeks 7–8 (days 43–56): Pillar 4 + 5 — logging and transparency
- Logging architecture audited; retention policy set
- User-facing transparency notices reviewed (FR / DE / EN; LU where appropriate — see multilingual workflows)
- Output: pillars 4 and 5 complete
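The logging pillar can be sketched as one structured JSON record per event. A minimal illustration only — the field names and the 180-day figure are assumptions; your documented retention policy and the system's Annex IV documentation govern what must actually be captured:

```python
# Illustrative sketch: automatic event logging for a high-risk system,
# emitting one JSON record per decision or intervention.
# Field names and the retention figure are assumptions, not requirements.
import json
from datetime import datetime, timezone

RETENTION_DAYS = 180  # set per your documented retention policy

def log_event(system_id: str, event: str, detail: dict) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event": event,            # e.g. "inference", "override", "error"
        "detail": detail,
        "retention_days": RETENTION_DAYS,
    }
    # In production this line would go to an append-only store;
    # here we simply return the serialised record.
    return json.dumps(record, sort_keys=True)

entry = log_event("credit-scoring-v2", "override",
                  {"user": "underwriter-17", "reason": "manual review"})
print(json.loads(entry)["event"])  # override
```

The design choice worth copying is that retention metadata travels with the record, so the retention policy you set in weeks 7–8 is enforceable on the data itself rather than living only in a policy document.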
Weeks 9–10 (days 57–70): Pillar 6 — human oversight
- Documented oversight role, training, and intervention path per system
- Tied to your Article 4 literacy programme
- Tabletop exercise: walk through an "override" scenario end-to-end
- Output: pillar 6 complete
Weeks 11–12 (days 71–84): Pillar 7 + final review
- Accuracy / robustness / cybersecurity testing documented
- Independent internal review of the file (separate from the team that built it)
- Sign-off by the executive responsible
- Output: full file ready for regulator request
The order matters. If you do it backwards (start with technical docs, end with classification), you produce three files for systems that turn out not to be in scope and miss two systems that are.
The CSSF / DORA wrinkle
If you're CSSF-supervised, the high-risk readiness work overlaps materially with DORA + EU AI Act compliance and Circular 22/806 outsourcing rules. There is real consolidation possible — the same supplier-due-diligence file, exit strategy, and incident-response playbook can serve all three regimes if you architect it that way. We've seen organisations duplicate this work 2–3 times because different functions own different regimes; the savings from consolidating are typically 30–40% of the total compliance hours.
What if you're a deployer, not a provider?
Most Luxembourg companies are deployers (you bought the system) rather than providers (you built it). Your obligations are lighter but real:
- Use the system in accordance with the provider's instructions
- Ensure human oversight is exercised by competent, trained natural persons
- Monitor operation and report serious incidents to the provider and to the supervisory authority
- Inform workers and worker representatives where the system is used in employment contexts
- Maintain logs automatically generated by the system, to the extent they're under your control (Article 26 sets a minimum six-month retention)
The literacy obligation applies to deployers fully — see Article 4 / AI literacy. Deployer-side readiness work is roughly 40% of the provider-side burden but it's not zero, and the most common Luxembourg failure mode is assuming "we just bought it" means "we have nothing to do".
What good looks like on 2 August 2026
- A one-page system register, signed and dated, with classification per system
- A complete seven-pillar file for each high-risk system
- A literacy-training register covering everyone who interacts with these systems
- A named human-oversight officer per system with a one-page role description
- Tabletop exercise records demonstrating that an override has been walked through
- Vendor agreements updated where the AI Act has shifted obligations to the provider
- An incident-response playbook integrated with whatever you already run for GDPR and DORA
If you have all of that, you are not just compliant — you are auditable, which is the actual standard regulators care about.
Where 20 More fits in
We run an 84-day high-risk readiness sprint in Luxembourg. Inventory and classification on day 1, file complete on day 84. We work alongside your DPO, internal counsel, or external compliance support — we don't replace them, we sequence the operational work.
If you have one or more systems in or near Annex III and you're not yet in a documented state of readiness, book a high-risk readiness sprint. The 12-week window matters; starting in July means starting late.
Related reading:
- The EU AI Act August 2026 deadline: Luxembourg compliance
- EU AI Act Article 4 — AI literacy for Luxembourg companies
- DORA + EU AI Act: Luxembourg financial compliance in 2026
- GDPR-compliant AI for Luxembourg SMEs
- AI Pilot to Production: Luxembourg Scale-Up Playbook
- Multilingual AI workflows for Luxembourg businesses
- NIS2 + AI: Luxembourg cybersecurity compliance in 2026
- AI Knowledge Hub — 20 More Resources
