AI TRAINING
EU AI Act Compliance for Organisations
Leave with a clear action plan to meet EU AI Act obligations before enforcement deadlines hit.
What it covers
This training gives legal, compliance, and product teams a structured walkthrough of the EU AI Act: risk classification tiers, prohibited AI practices, obligations for high-risk systems, and the special rules applying to general-purpose AI and foundation models. Participants work through real case studies to classify their own systems and draft a compliance roadmap. The format combines expert-led instruction with facilitated group exercises, so teams leave with artefacts they can use immediately — not just slides.
What you'll be able to do
- Classify any AI system your organisation uses or develops into the correct EU AI Act risk tier with documented justification
- Identify which specific obligations apply to your organisation as a provider, deployer, or distributor under the Act
- Map your current AI portfolio against the prohibited practices list and flag systems requiring immediate action
- Produce a prioritised compliance gap analysis and draft a remediation roadmap aligned to enforcement deadlines
- Explain GPAI/foundation-model obligations and assess exposure when using third-party model APIs in your products
Topics covered
- EU AI Act structure, timeline, and entry-into-force milestones
- Four-tier risk classification: unacceptable, high, limited, and minimal risk
- Prohibited AI practices and enforcement from February 2025
- High-risk system obligations: conformity assessments, technical documentation, human oversight
- General-purpose AI (GPAI) and foundation model rules under Chapter V of the Act
- Roles and obligations: provider, deployer, importer, distributor
- Fines and enforcement mechanisms (up to €35M or 7% of global annual turnover, whichever is higher)
- Practical system classification exercise and compliance gap analysis
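The classification exercise above follows a decision-tree logic that can be sketched in code. The snippet below is a deliberately simplified illustration, not legal advice: the use-case labels and category sets are hypothetical, and a real assessment must check the Act's Article 5 prohibitions and Annex III high-risk use cases in full.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative category sets only; the real lists live in Article 5
# (prohibitions) and Annex III (high-risk use cases) of the Act.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_USES = {"hr_screening", "credit_scoring", "remote_biometric_id"}
TRANSPARENCY_USES = {"chatbot", "deepfake_generation"}

def classify(use_case: str) -> RiskTier:
    """Toy decision tree mirroring the four-tier structure (not legal advice)."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE   # prohibited outright
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH           # conformity assessment etc. apply
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED        # transparency obligations apply
    return RiskTier.MINIMAL            # no specific obligations
```

In the workshop the same walk-through happens on paper with the decision-tree handout; the point of the code framing is that every system gets an explicit, documented tier rather than a default assumption.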
Delivery
Typically delivered as a full-day in-person or live-virtual workshop (6–7 hours) or split across two half-day sessions for larger organisations. A two-day variant (12–14 hours) adds deep-dives into technical documentation and conformity assessment procedures. Materials include an AI system inventory template, a risk-classification decision tree, and a compliance checklist aligned to the official EU AI Act annexes. Hands-on exercises constitute approximately 40% of contact time. Remote delivery uses Miro or equivalent for collaborative classification exercises.
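The inventory template mentioned above records, per system, the fields a gap analysis needs. A minimal sketch of one such record, with hypothetical field names chosen for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row of a hypothetical AI system inventory (field names illustrative)."""
    name: str
    use_case: str
    role: str                  # "provider", "deployer", "importer", or "distributor"
    risk_tier: str             # outcome of the classification exercise
    uses_third_party_api: bool # deployer obligations still apply if True
    gaps: list[str] = field(default_factory=list)

# Example entry produced during a classification exercise
inventory = [
    AISystemRecord(
        name="CV screening tool",
        use_case="hr_screening",
        role="deployer",
        risk_tier="high",
        uses_third_party_api=True,
        gaps=["human oversight procedure", "usage logging"],
    ),
]
```

Keeping role and risk tier as explicit fields per system is what makes the later gap analysis mechanical: obligations follow directly from that pair.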
What makes it work
- Involve product, legal, and data engineering in the same room during classification exercises — cross-functional alignment is the single biggest predictor of a workable roadmap
- Anchor compliance work to existing GDPR and ISO 27001 processes rather than building a parallel framework from scratch
- Assign a named AI Act owner or working group within 30 days of training to maintain momentum
- Re-run the classification exercise annually or whenever a significant new AI system is deployed or updated
Common mistakes
- Assuming GDPR compliance automatically covers EU AI Act obligations — the two frameworks have significant gaps and different documentation requirements
- Classifying all internal AI tools as minimal-risk without a formal assessment, missing high-risk use cases in HR or credit scoring
- Waiting for national supervisory authorities to be appointed before starting compliance work, losing 12–18 months of preparation time
- Overlooking deployer obligations when using third-party AI APIs, assuming the provider bears all responsibility
When NOT to take this
This workshop is not the right fit for a team that has already completed a formal EU AI Act gap analysis and is now in implementation mode — they need hands-on technical documentation support or legal advisory retainers, not an awareness-level classification workshop.
This training is part of a Data & AI catalog built for leaders serious about execution. Take the free diagnostic to see which trainings your team needs.