AI TRAINING
Bias & Fairness in ML Systems
Equip your teams to identify, measure, and mitigate bias in machine learning systems before deployment.
What it covers
This practitioner-level programme covers the mathematical definitions of fairness, practical techniques for detecting bias across protected attributes, and mitigation strategies at every stage of the ML pipeline. Participants work through real-world case studies—credit scoring, hiring algorithms, and content recommendation—applying tools like Fairlearn, IBM AI Fairness 360, and SHAP to audit models. The course also covers trade-off analysis between fairness metrics and predictive accuracy, documentation practices, and regulatory obligations under the EU AI Act. Delivered as a blended programme with instructor-led sessions and hands-on labs, it is designed for cross-functional teams including engineers, product managers, and legal/compliance stakeholders.
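One of the pre-processing mitigation techniques the labs work through, reweighing, fits in a few lines. Below is a minimal pure-Python sketch of the classic Kamiran–Calders scheme; the function name and toy data are illustrative, not course material:

```python
from collections import Counter

def reweigh(groups, labels):
    """Kamiran-Calders reweighing: weight each (group, label) cell by
    n_group * n_label / (n * n_group_label), so that group membership and
    label become statistically independent under the weighted data."""
    n = len(labels)
    n_g = Counter(groups)                   # examples per group
    n_y = Counter(labels)                   # examples per label
    n_gy = Counter(zip(groups, labels))     # examples per (group, label) cell
    return [n_g[g] * n_y[y] / (n * n_gy[(g, y)])
            for g, y in zip(groups, labels)]

# Toy data: group "a" is over-represented among positive labels.
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
labels = [1, 1, 1, 0, 1, 0, 0, 0]
weights = reweigh(groups, labels)
# Under-represented cells like ("a", 0) and ("b", 1) get weight 2.0;
# over-represented cells like ("a", 1) and ("b", 0) get weight 2/3.
```

The weights are then passed as `sample_weight` to any standard training routine; in the labs this idea is applied through library implementations rather than by hand.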
What you'll be able to do
- Select and justify an appropriate fairness metric given a specific business context and protected attribute
- Run a bias audit on a trained classifier using Fairlearn or IBM AIF360 and interpret the results
- Apply at least two mitigation techniques (e.g., reweighing, adversarial debiasing, threshold adjustment) and quantify their accuracy trade-offs
- Produce a model card and fairness audit report that meet EU AI Act documentation requirements for high-risk systems
- Design a cross-functional fairness review process including legal, product, and engineering sign-off criteria
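The metric-selection and audit outcomes above ultimately come down to simple group arithmetic. As a minimal pure-Python sketch (no Fairlearn dependency; the decisions and groups are invented), demographic parity difference is just the gap between the highest and lowest per-group selection rate:

```python
def selection_rates(preds, groups):
    """Fraction of positive predictions within each group."""
    rates = {}
    for g in set(groups):
        picked = [p for p, grp in zip(preds, groups) if grp == g]
        rates[g] = sum(picked) / len(picked)
    return rates

def demographic_parity_difference(preds, groups):
    """Largest selection-rate gap between any two groups (0 = parity)."""
    rates = selection_rates(preds, groups)
    return max(rates.values()) - min(rates.values())

preds  = [1, 1, 0, 0, 1, 0]            # model decisions (1 = approve)
groups = ["a", "a", "a", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)   # 2/3 - 1/3 = 1/3
```

Fairlearn and AIF360 compute the same quantity (plus confidence intervals and many other metrics); the point of the sketch is that the number being audited is not mysterious.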
Topics covered
- Definitions of algorithmic fairness: demographic parity, equalized odds, individual fairness
- Sources of bias: data collection, labelling, proxy variables, feedback loops
- Bias measurement using Fairlearn, IBM AI Fairness 360, and SHAP
- Pre-processing, in-processing, and post-processing mitigation techniques
- Accuracy-fairness trade-off analysis and decision frameworks
- Regulatory context: EU AI Act high-risk AI systems, anti-discrimination law
- Model cards, fairness datasheets, and audit documentation
- Organizational governance: fairness review boards and red-team audits
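Of the mitigation families listed above, post-processing via group-specific thresholds is the lightest-touch: leave the trained model alone and move each group's decision cutoff until selection rates match. A minimal sketch with invented scores (not a Fairlearn implementation):

```python
def threshold_for_rate(scores, target_rate):
    """Lowest score cutoff that selects roughly target_rate of this group."""
    k = round(len(scores) * target_rate)
    if k == 0:
        return float("inf")               # select nobody in this group
    return sorted(scores, reverse=True)[k - 1]

def fair_decisions(scores, groups, target_rate):
    """Apply per-group thresholds so every group has the same selection rate."""
    cutoffs = {g: threshold_for_rate(
                   [s for s, grp in zip(scores, groups) if grp == g],
                   target_rate)
               for g in set(groups)}
    return [1 if s >= cutoffs[g] else 0 for s, g in zip(scores, groups)]

scores = [0.9, 0.8, 0.4, 0.3, 0.7, 0.6, 0.2, 0.1]  # group b scores lower overall
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
decisions = fair_decisions(scores, groups, 0.5)    # two approvals per group
```

The accuracy cost of moving a group's threshold away from the model's optimum is exactly the trade-off the decision-framework sessions quantify.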
Delivery
Delivered as a blended programme over 3–4 days: approximately 40% instructor-led concept sessions (in-person or live virtual) and 60% hands-on Python lab work. Participants receive a pre-configured Jupyter environment with real anonymised datasets. A capstone audit exercise on a provided model is included on the final day. Remote delivery is fully supported via Zoom/Teams breakout rooms; in-person is recommended for cross-functional cohorts to encourage role-based dialogue. All materials, slide decks, and lab notebooks are provided and retained by participants.
What makes it work
- Include legal, product, and engineering stakeholders in the same cohort to build shared vocabulary and accountability
- Anchor labs to the organisation's own use cases or models, even in anonymised form, to maximise relevance
- Establish a post-training fairness review process with clear ownership before the programme ends
- Revisit bias audits at each major model retrain cycle, not only at initial deployment
Common mistakes
- Treating fairness as a single metric—teams pick one definition (e.g., demographic parity) without understanding its implications for other groups or business objectives
- Addressing bias only at model training time, ignoring upstream data collection and downstream deployment feedback loops
- Involving legal and compliance teams too late, after technical choices have already locked in fairness trade-offs
- Documenting fairness efforts superficially (checkbox model cards) rather than as living audit artefacts tied to model versions
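The first mistake, picking one metric without checking the others, is easy to demonstrate: the same predictions can satisfy demographic parity while badly violating equalized odds. A self-contained toy example (invented labels and predictions):

```python
def rate(values):
    return sum(values) / len(values)

def per_group(seq, groups, g):
    return [v for v, grp in zip(seq, groups) if grp == g]

def true_positive_rate(y_true, y_pred):
    """Fraction of actual positives the model approves."""
    hits = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(hits) / len(hits)

y_true = [1, 1, 0, 0,  1, 1, 0, 0]
y_pred = [1, 1, 0, 0,  1, 0, 1, 0]   # same selection rate, different errors
groups = ["a"] * 4 + ["b"] * 4

sel_a = rate(per_group(y_pred, groups, "a"))   # 0.5
sel_b = rate(per_group(y_pred, groups, "b"))   # 0.5 -> demographic parity holds

tpr_a = true_positive_rate(per_group(y_true, groups, "a"),
                           per_group(y_pred, groups, "a"))   # 1.0
tpr_b = true_positive_rate(per_group(y_true, groups, "b"),
                           per_group(y_pred, groups, "b"))   # 0.5 -> equalized
                                                             #   odds violated
```

Group b is approved at the same rate as group a, yet half of its qualified members are rejected, which is precisely why the course insists on examining multiple metrics before signing off.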
When NOT to take this
If an organisation has no ML models in production and is still in early data-infrastructure build-out, this training is premature—invest first in data quality and basic ML literacy before tackling fairness auditing at this depth.
This training is part of a Data & AI catalog built for leaders serious about execution. Take the free diagnostic to see which trainings your team needs.