Unexpected bias in machine learning models reduces accuracy, produces negative real-world consequences, and in the worst cases, entrenches existing inequalities for decades. Audits that can detect and mitigate this unwanted bias are an essential part of building and maintaining reliable machine learning systems.
In this liveProject, you’ll take on the role of a data scientist running a bias audit for the World Health Organization’s healthcare models. You’ll get to grips with state-of-the-art tools and best practices for bias detection and mitigation, including interpretability methods, the AIF360 package, and options for imputing membership of a protected class. These tools and practices are placed in the context of a broad universe of bias, where geopolitical awareness and a deep understanding of the history and use of your particular data are key.
This project is designed for learning purposes and is not a complete, production-ready application or solution.
prerequisites
This liveProject is for intermediate Python programmers with experience in data science. To begin, you’ll need to be familiar with:
TOOLS
- Basics of pandas
- Basics of scikit-learn
- Basics of Jupyter Notebook
TECHNIQUES
- Classification and regression using random forests and gradient boosting machines
you will learn
In this liveProject, you’ll learn the best practices for detecting and mitigating bias in your machine learning models. This essential skill is easily transferable to any machine learning project dealing with human data.
- Assess who will lose out from unmitigated bias and predict its real-world impacts
- Quantify the tradeoffs between model performance and unwanted bias
- Master SHAP for global and local model interpretability, Shapley values, and feature importance plots (see the first sketch after this list)
- Utilize AIF360 for fairness metrics and bias-mitigation methods such as reweighing (see the second sketch after this list)
- Detect bias when a protected class is unobserved using data combination
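
To give a feel for the SHAP workflow mentioned above, here is a minimal sketch of computing Shapley values and a global feature importance plot for a tree model. The data, feature names, and model are synthetic placeholders, not the project's WHO dataset.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in data: two numeric features and a binary outcome
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "age": rng.integers(18, 90, size=500),
    "num_visits": rng.poisson(3, size=500),
})
y = (X["age"] + 5 * X["num_visits"] + rng.normal(0, 10, size=500) > 80).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global interpretability: which features drive predictions overall
shap.summary_plot(shap_values, X)

# Local interpretability: why one individual prediction came out as it did
print(dict(zip(X.columns, shap_values[0])))
```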
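The reweighing step in AIF360 can be sketched along the same lines. The tiny dataframe and the protected attribute "sex" below are hypothetical placeholders used only to show the shape of the API: measure a fairness metric, apply the Reweighing pre-processing algorithm, and measure again.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical toy data with a binary protected attribute "sex"
df = pd.DataFrame({
    "sex":   [0, 0, 1, 1, 0, 1, 0, 1],
    "age":   [34, 51, 29, 62, 45, 38, 57, 41],
    "label": [1, 0, 1, 1, 0, 0, 1, 1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Fairness metric before mitigation: difference in favorable-outcome rates
metric = BinaryLabelDatasetMetric(dataset,
                                  privileged_groups=privileged,
                                  unprivileged_groups=unprivileged)
print("Mean difference before reweighing:", metric.mean_difference())

# Reweighing assigns instance weights so that, in the weighted data,
# favorable outcomes are independent of the protected attribute
rw = Reweighing(unprivileged_groups=unprivileged,
                privileged_groups=privileged)
dataset_rw = rw.fit_transform(dataset)

metric_rw = BinaryLabelDatasetMetric(dataset_rw,
                                     privileged_groups=privileged,
                                     unprivileged_groups=unprivileged)
print("Mean difference after reweighing:", metric_rw.mean_difference())
```

The resulting instance weights can then be passed to a scikit-learn estimator (for example via `sample_weight`) to train a model on the debiased data.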