Bias Mitigation in Human-in-the-Loop Decision Systems

About the Project

When humans label data to train AI models, their own biases and stereotypes may be reflected in the data and, consequently, in the resulting trained models, leading to unfair, biased, and non-transparent decisions. In collaboration with the Queensland Police Service (QPS), this project focuses on integrating fairness into the learning algorithms used in policing services and tasks, and aims to assess whether this leads to improved outcomes and experiences. The approach will include the development of human-in-the-loop AI, where humans help to increase the transparency of automated decision-making processes, e.g., by generating natural language explanations of why a specific amount of police resources is required in a certain suburb.

About the Team

This project is due to commence in October 2022 with the appointment of a PhD researcher, who will collaborate closely with leading experts in the Queensland Police Service (QPS) to generate more transparent, fair, and trustworthy decision-support systems driven by data and controlled by humans. They will develop novel bias tracking, management, and reduction methods across the entire Artificial Intelligence pipeline: from data collection and curation to model training and deployment with end users. The project also seeks to develop a strong and capable future leader who can undertake data analysis in a data-sparse environment, with the proposed models and research tasks able to be adapted and applied to other human-in-the-loop tasks. It is one of three projects with the Queensland Police Service related to the responsible use of sensitive data assets; the remaining projects, Data as a Service Architecture and Community Attitude to Law Enforcement Data, are due to commence in 2023.

Project Researchers
A/Prof Gianluca Demartini (Principal Advisor)
Prof Shazia Sadiq
Mr Nick Moss (Queensland Police Service)
Partner Investigator
Queensland Police Service