Emre Kazim

Emre Kazim is a research fellow in the Department of Computer Science at University College London, working in the field of AI ethics. His current focus is on the governance, policy, and auditing of AI systems, including algorithm interpretability and certification. Emre holds a PhD in philosophy.

projects by Emre Kazim

Mitigate Machine Learning Bias in Mortgage Lending Data

4 weeks · 5-9 hours per week average · INTERMEDIATE

In this series of liveProjects, you’ll apply techniques for measuring and mitigating bias in a machine learning algorithm. You’ll step into the role of a data scientist for a bank and investigate the potential biases that arise when automated decision-making is applied to your company’s mortgage offers, in particular whether your algorithm is biased by gender. Each project in this series covers a different aspect of fairness measurement and intervention, including exploring a dataset with a focus on fairness and mitigating bias in a logistic regression model.
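Throughout the series you’ll work with IBM’s AI Fairness 360 toolkit. As a rough sketch of the shared setup, assuming a simplified, hypothetical mortgage table (the column names here are illustrative stand-ins, not the real HMDA schema), the data can be wrapped so that gender is the protected attribute:

    # Hypothetical, simplified mortgage data: illustrative columns only.
    import pandas as pd
    from aif360.datasets import BinaryLabelDataset

    df = pd.DataFrame({
        "loan_approved": [1, 0, 1, 1, 0, 1],  # 1 = approved (favorable outcome)
        "sex":           [1, 1, 0, 0, 1, 0],  # 1 = male, 0 = female
        "income":        [75, 40, 82, 55, 38, 61],
    })

    dataset = BinaryLabelDataset(
        df=df,
        label_names=["loan_approved"],
        protected_attribute_names=["sex"],
        favorable_label=1,
        unfavorable_label=0,
    )

    # Group definitions reused by the metrics and mitigation steps below.
    privileged_groups = [{"sex": 1}]
    unprivileged_groups = [{"sex": 0}]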

Mitigating Bias with Postprocessing

1 week · 5-9 hours per week · INTERMEDIATE

In this liveProject, you’ll use the open-source AI Fairness 360 toolkit from IBM and its “equalized odds post-processing” method to post-process your model’s predictions for bias mitigation. Equalized odds post-processing seeks to mitigate bias by modifying prediction labels after a model has been trained, so that true positive and false positive rates are equalized across groups. After charting bias metrics on a basic classifier, you’ll tune the classification threshold to explore its impact on the revealed biases.
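A minimal sketch of this post-processing step with AI Fairness 360, assuming dataset_true (ground-truth labels) and dataset_pred (the classifier’s predicted labels) are BinaryLabelDataset objects like the one built above:

    from aif360.algorithms.postprocessing import EqOddsPostprocessing
    from aif360.metrics import ClassificationMetric

    eop = EqOddsPostprocessing(
        unprivileged_groups=[{"sex": 0}],
        privileged_groups=[{"sex": 1}],
        seed=42,
    )

    # Learn how to adjust labels on a validation split, then fix up predictions.
    eop = eop.fit(dataset_true, dataset_pred)
    dataset_pred_fair = eop.predict(dataset_pred)

    # Compare a bias metric before and after post-processing.
    for name, preds in [("before", dataset_pred), ("after", dataset_pred_fair)]:
        metric = ClassificationMetric(
            dataset_true, preds,
            unprivileged_groups=[{"sex": 0}],
            privileged_groups=[{"sex": 1}],
        )
        print(name, "average odds difference:", metric.average_odds_difference())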

Mitigating Bias with Preprocessing

1 week · 5-9 hours per week · INTERMEDIATE

In this liveProject, you’ll use the open-source AI Fairness 360 toolkit from IBM to measure and mitigate bias with data preprocessing. You’ll chart bias metrics on a basic classifier before preprocessing your training data with the “reweighing” method. Reweighing seeks to mitigate bias by computing a set of weights and applying them to the training data; the weights are calculated such that the weighted training data is free of discrimination with respect to a protected attribute. Once your training data is preprocessed, you’ll construct a classifier that makes use of the weighted data.
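A minimal sketch of the reweighing step with AI Fairness 360, assuming dataset_train is a BinaryLabelDataset like the one built above:

    from sklearn.linear_model import LogisticRegression
    from aif360.algorithms.preprocessing import Reweighing

    rw = Reweighing(
        unprivileged_groups=[{"sex": 0}],
        privileged_groups=[{"sex": 1}],
    )

    # Compute per-instance weights so that, under the weights, the label is
    # independent of the protected attribute in the training data.
    dataset_transf = rw.fit_transform(dataset_train)

    # Construct a classifier that makes use of the computed weights.
    clf = LogisticRegression(solver="liblinear")
    clf.fit(
        dataset_transf.features,
        dataset_transf.labels.ravel(),
        sample_weight=dataset_transf.instance_weights,
    )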

Measuring Bias in a Model

1 week · 5-9 hours per week · INTERMEDIATE

In this liveProject, you’ll investigate and report on whether your company’s mortgage application classifier makes fair decisions between male and female applicants. You’ll train a logistic regression classifier on the HMDA dataset, compute its performance metrics, and then chart “equality of opportunity” bias metrics.
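Equality of opportunity compares true positive rates across groups: qualified applicants should be approved at the same rate regardless of gender. A minimal sketch of computing it with AI Fairness 360, assuming dataset_test is a BinaryLabelDataset and clf is a trained logistic regression model as above:

    from aif360.metrics import ClassificationMetric

    # Copy the test set and overwrite its labels with the model's predictions.
    dataset_pred = dataset_test.copy(deepcopy=True)
    dataset_pred.labels = clf.predict(dataset_test.features).reshape(-1, 1)

    metric = ClassificationMetric(
        dataset_test, dataset_pred,
        unprivileged_groups=[{"sex": 0}],
        privileged_groups=[{"sex": 1}],
    )

    # TPR(unprivileged) - TPR(privileged); 0 means equal opportunity.
    print("equal opportunity difference:", metric.equal_opportunity_difference())
    print("TPR (female):", metric.true_positive_rate(privileged=False))
    print("TPR (male):", metric.true_positive_rate(privileged=True))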

Measuring Bias in a Dataset

1 week · 5-9 hours per week · INTERMEDIATE

In this liveProject, you’ll compute and chart metrics that show the key characteristics of the Home Mortgage Disclosure Act (HMDA) dataset, and investigate relationships between key demographics and other features that may lead to a biased machine learning model. You’ll compute and chart “equality of outcome” bias metrics and produce a report on your insights.
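Equality-of-outcome metrics are computed on the labeled data itself, before any model is trained. A minimal sketch with AI Fairness 360, using the dataset and group definitions from the setup sketch above:

    from aif360.metrics import BinaryLabelDatasetMetric

    metric = BinaryLabelDatasetMetric(
        dataset,
        unprivileged_groups=[{"sex": 0}],
        privileged_groups=[{"sex": 1}],
    )

    # Difference in favorable-outcome rates between groups; 0 means parity.
    print("statistical parity difference:", metric.statistical_parity_difference())
    # Ratio of favorable-outcome rates; 1 means parity.
    print("disparate impact:", metric.disparate_impact())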