AI TRAINING

Bias and Fairness in ML Systems

Give your teams the tools to detect, measure, and correct bias in your machine learning systems before deployment.

Format: programme
Duration: 16–24h
Level: practitioner
Group size: 6–20
Price per participant: €2K–€4K
Group price: €12K–€28K
Audience: ML engineers, data scientists, product managers, and legal/compliance professionals involved in AI system design or oversight
Prerequisites: working knowledge of supervised ML concepts and Python; familiarity with model evaluation metrics (precision, recall, AUC)

What it covers

This practitioner-level programme covers the mathematical definitions of fairness, practical techniques for detecting bias against protected attributes, and mitigation strategies at every stage of the ML pipeline. Participants work on real case studies, including credit scoring, hiring algorithms, and content recommendation, applying tools such as Fairlearn, IBM AI Fairness 360, and SHAP to audit models. The course also covers trade-off analysis between fairness metrics and predictive performance, documentation practices, and the regulatory obligations arising from the EU AI Act. It is designed for cross-functional teams that include engineers, product managers, and legal or compliance stakeholders.
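
To give a flavour of the lab work, here is a minimal sketch of a Fairlearn bias audit. The dataset, model, and the protected attribute "sex" are synthetic placeholders, not the course's actual lab materials:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

# Synthetic stand-in for a real scoring dataset with a protected attribute.
rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({
    "income": rng.normal(50, 15, n),
    "sex": rng.integers(0, 2, n),  # protected attribute (placeholder)
})
y = (X["income"] + 5 * X["sex"] + rng.normal(0, 10, n) > 55).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
y_pred = clf.predict(X)

# Per-group metrics: one row per value of the sensitive feature.
audit = MetricFrame(
    metrics={"accuracy": accuracy_score, "recall": recall_score},
    y_true=y,
    y_pred=y_pred,
    sensitive_features=X["sex"],
)
print(audit.by_group)      # metric values broken down by group
print(audit.difference())  # largest between-group gap for each metric

# Scalar disparity measure: 0.0 means equal selection rates across groups.
print(demographic_parity_difference(y, y_pred, sensitive_features=X["sex"]))
```

In a real audit the model would be evaluated on held-out data; the same frame is reused here only to keep the sketch short.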

By the end, you will be able to

  • Select and justify an appropriate fairness metric given a specific business context and protected attribute
  • Run a bias audit on a trained classifier using Fairlearn or IBM AIF360 and interpret the results
  • Apply at least two mitigation techniques (e.g., reweighing, adversarial debiasing, threshold adjustment) and quantify their accuracy trade-offs (a threshold-adjustment sketch follows this list)
  • Produce a model card and fairness audit report that meets EU AI Act documentation requirements for high-risk systems
  • Design a cross-functional fairness review process including legal, product, and engineering sign-off criteria
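
As a concrete example of the threshold-adjustment outcome above, here is a sketch using Fairlearn's ThresholdOptimizer. It continues from the audit sketch in the previous section (same clf, X, y, y_pred placeholders) and is illustrative only:

```python
from fairlearn.metrics import equalized_odds_difference
from fairlearn.postprocessing import ThresholdOptimizer
from sklearn.metrics import accuracy_score

# Post-processing mitigation: choose group-specific decision thresholds
# that enforce (approximate) equalized odds without retraining the model.
mitigator = ThresholdOptimizer(
    estimator=clf,                 # the already-trained classifier
    constraints="equalized_odds",  # fairness criterion to enforce
    prefit=True,                   # do not refit the base model
)
mitigator.fit(X, y, sensitive_features=X["sex"])
y_mitigated = mitigator.predict(X, sensitive_features=X["sex"])

# Quantify the trade-off: the fairness gap should shrink,
# usually at some cost in accuracy.
for label, preds in [("baseline", y_pred), ("mitigated", y_mitigated)]:
    gap = equalized_odds_difference(y, preds, sensitive_features=X["sex"])
    print(f"{label}: accuracy={accuracy_score(y, preds):.3f}, "
          f"equalized-odds gap={gap:.3f}")
```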

Topics covered

  • Definitions of algorithmic fairness: demographic parity, equalized odds, individual fairness (the first two are written out formally after this list)
  • Sources of bias: data collection, labelling, proxy variables, feedback loops
  • Bias measurement using Fairlearn, IBM AI Fairness 360, and SHAP
  • Pre-processing, in-processing, and post-processing mitigation techniques
  • Accuracy-fairness trade-off analysis and decision frameworks
  • Regulatory context: EU AI Act high-risk AI systems, anti-discrimination law
  • Model cards, fairness datasheets, and audit documentation
  • Organizational governance: fairness review boards and red-team audits
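
For reference, the two group-fairness criteria named in the first bullet have standard textbook formulations. For a binary classifier with prediction Ŷ, true label Y, and protected attribute A:

```latex
% Demographic parity: equal selection rates across groups.
P(\hat{Y} = 1 \mid A = a) = P(\hat{Y} = 1 \mid A = b)
\quad \text{for all groups } a, b

% Equalized odds: equal true- and false-positive rates across groups.
P(\hat{Y} = 1 \mid A = a, Y = y) = P(\hat{Y} = 1 \mid A = b, Y = y)
\quad \text{for } y \in \{0, 1\}
```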

Delivery format

Delivered as a blended programme over 3–4 days: approximately 40% instructor-led concept sessions (in-person or live virtual) and 60% hands-on Python lab work. Participants receive a pre-configured Jupyter environment with real anonymised datasets. A capstone audit exercise on a provided model is included on the final day. Remote delivery is fully supported via Zoom/Teams breakout rooms; in-person is recommended for cross-functional cohorts to encourage role-based dialogue. All materials, slide decks, and lab notebooks are provided and retained by participants.

What makes it work

  • Include legal, product, and engineering stakeholders in the same cohort to build shared vocabulary and accountability
  • Anchor labs to the organisation's own use cases or models, even in anonymised form, to maximise relevance
  • Establish a post-training fairness review process with clear ownership before the programme ends
  • Revisit bias audits at each major model retrain cycle, not only at initial deployment (a minimal retrain-gate sketch follows this list)
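
One lightweight way to operationalise the last two points is a fairness gate in the retraining pipeline. This is a hypothetical sketch, not a prescribed implementation; the 0.05 tolerance and the function name fairness_gate are assumptions a review board would set for itself:

```python
from fairlearn.metrics import demographic_parity_difference

# Policy threshold agreed by the fairness review process (assumption).
MAX_DP_GAP = 0.05

def fairness_gate(y_true, y_pred, sensitive_features) -> None:
    """Fail the retrain pipeline if the fairness gap regresses."""
    gap = demographic_parity_difference(
        y_true, y_pred, sensitive_features=sensitive_features)
    if gap > MAX_DP_GAP:
        raise RuntimeError(
            f"Fairness gate failed: demographic parity gap {gap:.3f} "
            f"exceeds threshold {MAX_DP_GAP}")

# Run after each retrain, before deployment, e.g.:
# fairness_gate(y_test, model.predict(X_test), X_test["sex"])
```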

Common mistakes

  • Treating fairness as a single metric: teams pick one definition (e.g., demographic parity) without understanding its implications for other groups or business objectives
  • Addressing bias only at model training time, ignoring upstream data collection and downstream deployment feedback loops
  • Involving legal and compliance teams too late, after technical choices have already locked in fairness trade-offs
  • Documenting fairness efforts superficially (checkbox model cards) rather than as living audit artefacts tied to model versions

When NOT to take this training

If an organisation has no ML models in production and is still in early data-infrastructure build-out, this training is premature. Invest first in data quality and basic ML literacy before tackling fairness auditing at this depth.

This training is part of a Data & AI catalogue built for leaders serious about execution. Launch the free diagnostic to see which trainings are priorities for your team.