Interpretable AI: Theory and Practice

About the Project

A major bottleneck for enterprises adopting AI is the difficulty of selecting and interpreting the right method for a given problem. This project will survey available interpretable AI methods and communicate best practices in both lay and comprehensive terms. It will also explore new theoretical landscapes to extend and innovate interpretable AI methods, focusing on uncertainty (both aleatoric and epistemic) and causality. Emphasis will be on probabilistic inference, in particular using graphical models, encompassing both neural networks and more general structures such as neural wirings and directed acyclic graphs. In partnership with Max Kelsen P/L, the team will evaluate the proposed methodologies on real datasets from different healthcare organisations.

This project is one of two CIRES projects with Max Kelsen addressing organisational and transformational aspects of data, algorithms, and AI. The first project, “Advancing Deep Neural Network Reliability During Dataset Shift”, commenced in October 2021.


About the Team

This project commenced in April 2023 with the recruitment of PhD researcher Eslam Zaher, who is based at The University of Queensland. Eslam is supervised by Chief Investigator Dr Fred Roosta-Khorasani, Dr Quan Nguyen, and Dr Maciej Trzaskowski.

Project Researchers
Dr Fred Roosta-Khorasani (Principal Advisor)
Dr Maciej Trzaskowski
Dr Quan Nguyen
Mr Eslam Zaher (PhD researcher)
Partner Investigator
Max Kelsen