In this liveProject, you’ll use IBM’s open source AI Fairness 360 toolkit and its “equalized odds post-processing” method to mitigate bias in a trained model. Equalized odds post-processing mitigates bias by modifying prediction labels after a model has been trained. After charting bias metrics on a basic classifier, you’ll tune the classification threshold to explore its impact on the biases revealed.
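To preview the threshold-tuning idea, the sketch below computes the group-wise true-positive and false-positive rates that equalized odds compares, at several classification thresholds. It is a minimal illustration with synthetic scores and group labels, not the project’s solution code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data for two groups (0 = unprivileged, 1 = privileged).
group = rng.integers(0, 2, size=1000)
y_true = rng.integers(0, 2, size=1000)
# Biased scores: the privileged group gets a small boost.
scores = np.clip(0.5 * y_true + 0.1 * group + rng.normal(0, 0.25, size=1000), 0, 1)

def group_rates(threshold, g):
    """True-positive and false-positive rates for one group at a threshold."""
    mask = group == g
    y_pred = scores[mask] >= threshold
    y = y_true[mask]
    tpr = (y_pred & (y == 1)).sum() / max((y == 1).sum(), 1)
    fpr = (y_pred & (y == 0)).sum() / max((y == 0).sum(), 1)
    return tpr, fpr

# Equalized odds asks for equal TPR and FPR across groups; sweep the
# threshold to see how the gaps between groups move.
for t in (0.3, 0.5, 0.7):
    tpr0, fpr0 = group_rates(t, 0)
    tpr1, fpr1 = group_rates(t, 1)
    print(f"t={t}: TPR gap={tpr1 - tpr0:+.2f}, FPR gap={fpr1 - fpr0:+.2f}")
```

In the project itself you’ll compute these metrics on a real dataset and chart them with seaborn rather than printing them.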
This project is designed for learning purposes and is not a complete, production-ready application or solution.
When you start your liveProject, you get full access to the following books for 90 days.
Adriano Koshiyama is a Research Fellow in Computer Science at University College London and the co-founder of Holistic AI, a start-up focused on providing advisory, auditing, and assurance of AI systems. He helps manage TheAlgo Conferences (thealgo.co) and is the principal investigator at the UCL Algorithm Standards and Technology Lab. He has published more than 30 papers at international conferences and in academic journals.
Catherine Inness is a senior manager in Accenture’s Data Science practice in the UK. She has more than ten years’ experience in technology, focused on the public sector. She holds an MSc in data science and machine learning from University College London, where her research focused on algorithm fairness.
Umar Mohammed is the lead developer at Holistic AI. He has over ten years of experience writing software and is currently focused on creating debiasing tools for AI systems. He holds an MSc in vision imaging and virtual environments from University College London, where his research focused on face recognition. He has published papers on computer vision at international conferences.
Emre Kazim is a research fellow in the computer science department of University College London, working in the field of AI ethics. His current focus is on governance, policy and auditing of AI systems, including algorithm interpretability and certification. Emre has a PhD in philosophy.
This liveProject is for beginner data scientists and software engineers looking to learn the basic principles of measuring and mitigating ML bias. To begin this liveProject, you will need to be familiar with:
Basic Jupyter Notebook usage
Basic machine learning concepts
What you will learn
In this liveProject, you’ll learn to assess your machine learning model for bias and identify any patterns that may be unfairly prejudiced against protected characteristics.
Setting up a Google Colab environment to run Python code in a Jupyter notebook.
Loading a pickled dataset into Google Colab.
Using scikit-learn to train a logistic regression classifier and compute performance metrics.
Using the seaborn library to produce charts visualizing the metrics.
Gaining familiarity with the open-source AIF360 library from IBM.
Mitigating bias by using AIF360 to post-process the model with the “equalized odds” method.
You choose the schedule and decide how much time to invest as you build your project.
Each project is divided into several achievable steps.
Within the liveProject platform, you can get help from other participants and our expert mentors.
Compare with others
For each step, compare your deliverable to the solutions by the author and other participants.
Get full access to select books for 90 days. Permanent access to excerpts from Manning products is also included, along with references to other resources.
I definitely learned some useful tools, such as AIF360. I will definitely be using it in the near future.