Data · Advanced · 3 to 5 hours

Detect and Mitigate Model Bias

Audit a hiring algorithm for demographic bias and propose mitigation strategies.

The Scenario

A recruitment platform uses a machine learning model to score CVs and rank candidates. An internal audit reveals that the model consistently scores male candidates 12% higher than female candidates with identical qualifications. The board wants a full audit and remediation plan.

The Brief

Conduct a bias audit of the hypothetical model. Identify how bias could have entered the system, propose detection methods, and design mitigation strategies that do not simply remove protected attributes.

Deliverables

  • A taxonomy of how bias enters ML systems (training data, feature selection, label bias, feedback loops)
  • Detection methods: statistical tests and metrics you would use to measure demographic parity and equal opportunity
  • Mitigation strategies: at least three approaches spanning pre-processing, in-processing, and post-processing, with the trade-offs of each
  • An ethical framework for deciding how much accuracy you are willing to sacrifice for fairness
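For the detection deliverable, the two gap metrics named above can be computed directly from a model's binary shortlisting decisions. The sketch below is a minimal illustration, not a prescribed implementation: the data, group labels, and function names are all hypothetical.

```python
# Hypothetical audit sketch: two common fairness gap metrics computed from
# binary shortlisting decisions. All data and names are illustrative only.

def demographic_parity_gap(preds, groups):
    """Difference in positive-prediction (shortlisting) rates across groups."""
    rates = {}
    for g in set(groups):
        group_preds = [p for p, grp in zip(preds, groups) if grp == g]
        rates[g] = sum(group_preds) / len(group_preds)
    first, second = sorted(rates)  # deterministic ordering of group keys
    return rates[first] - rates[second]

def equal_opportunity_gap(preds, labels, groups):
    """Difference in true-positive rates across groups, computed only over
    candidates who were actually qualified (label == 1)."""
    tprs = {}
    for g in set(groups):
        hits = [p for p, y, grp in zip(preds, labels, groups)
                if grp == g and y == 1]
        tprs[g] = sum(hits) / len(hits)
    first, second = sorted(tprs)
    return tprs[first] - tprs[second]

# Toy data: 1 = shortlisted / qualified; groups are "F" and "M".
preds  = [1, 0, 1, 0, 1, 1, 1, 1]
labels = [1, 1, 1, 1, 1, 1, 0, 1]
groups = ["F", "F", "F", "F", "M", "M", "M", "M"]

print(demographic_parity_gap(preds, groups))         # -0.5: F rate 0.5 vs M rate 1.0
print(equal_opportunity_gap(preds, labels, groups))  # -0.5 among qualified candidates
```

A gap of zero on either metric means parity on that definition; note that the two metrics can disagree, which is exactly the kind of tension your audit should surface.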
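For the mitigation deliverable, one concrete pre-processing option is reweighing in the style of Kamiran and Calders: assign each training example a weight so that group membership and outcome label look statistically independent. This is a sketch under the assumption of discrete groups and binary labels; the data and names are hypothetical.

```python
# Pre-processing mitigation sketch (reweighing): weight each (group, label)
# combination by expected / observed frequency so the reweighted training
# data shows no association between group and label. Data is illustrative.
from collections import Counter

def reweigh(groups, labels):
    n = len(labels)
    g_count = Counter(groups)            # marginal counts per group
    y_count = Counter(labels)            # marginal counts per label
    gy_count = Counter(zip(groups, labels))  # joint counts
    return [
        (g_count[g] / n) * (y_count[y] / n) / (gy_count[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

groups = ["F", "F", "F", "M", "M", "M"]
labels = [1, 0, 0, 1, 1, 0]              # positive labels skew toward "M"
weights = reweigh(groups, labels)
# Under-represented combinations, e.g. ("F", 1), get weights above 1.0;
# over-represented ones get weights below 1.0.
print(weights)
```

Most training libraries accept per-example weights (commonly via a `sample_weight`-style argument), so this composes with an otherwise unchanged pipeline; its trade-off is that it corrects the label distribution but not biased feature measurements.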

Submission Guidance

Simply removing the gender column does not fix bias: proxy features such as name, school, and hobbies still leak it. Show that you understand this.
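One way to demonstrate that understanding is to measure how well a remaining feature predicts the dropped protected attribute. The sketch below uses a majority-vote rule over a single hypothetical proxy feature; an accuracy well above the base rate means the model can effectively reconstruct gender even without the column.

```python
# Illustrative proxy-leakage check: how accurately does one remaining
# feature predict the (removed) protected attribute? Data is hypothetical.
from collections import Counter, defaultdict

def proxy_leakage(proxy_values, protected):
    """Accuracy of predicting the protected attribute from a single proxy
    feature via majority vote within each proxy value."""
    by_value = defaultdict(list)
    for v, g in zip(proxy_values, protected):
        by_value[v].append(g)
    # For each proxy value, count how many records the majority class gets right.
    correct = sum(Counter(gs).most_common(1)[0][1] for gs in by_value.values())
    return correct / len(protected)

hobbies = ["netball", "rugby", "netball", "rugby", "netball", "rugby"]
gender  = ["F", "M", "F", "M", "F", "M"]
print(proxy_leakage(hobbies, gender))  # 1.0: this proxy fully reveals gender
```

In a real audit you would run this check (or a proper classifier) over every retained feature and combinations of them, since several weak proxies can jointly leak what no single one does.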

Submit Your Work

Your submission is graded against the rubric on the right. If you pass, you receive a public Badge URL you can share on LinkedIn. Drafts are not saved, so write your response offline first and paste the finished version here.


By submitting, you agree your submission text, name, and evaluation will appear on a public Badge URL.