
Interpretable AI: Theory and Practice


A major bottleneck for enterprises adopting AI is the difficulty of selecting the right method for a given problem and interpreting its results.

This project will survey available interpretable AI methods and communicate best practices in both lay and comprehensive terms. It will also explore new theoretical ground to extend and innovate interpretable methods in AI, focusing on uncertainty (aleatoric and epistemic) and causality. Emphasis will be on probabilistic inference using graphical models, including both neural networks and more general structures such as neural wirings and directed acyclic graphs.
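To make the two uncertainty types concrete, the following is a minimal illustrative sketch, not the project's actual methodology: it assumes a hypothetical ensemble of regression models, where, by the law of total variance, total predictive variance splits into an aleatoric part (the average of the per-member noise variances) and an epistemic part (the variance of the per-member means).

import numpy as np

# Minimal sketch: decompose predictive uncertainty for a hypothetical
# deep ensemble of M regressors, each returning a predictive mean and
# variance for the same input. The numbers below are simulated stand-ins.
rng = np.random.default_rng(0)
M = 5
member_means = rng.normal(loc=1.0, scale=0.3, size=M)  # per-member predictive means
member_vars = rng.uniform(0.1, 0.2, size=M)            # per-member noise variances

aleatoric = member_vars.mean()   # irreducible data noise (average member variance)
epistemic = member_means.var()   # model disagreement (variance of member means)
total = aleatoric + epistemic    # law of total variance

print(f"aleatoric={aleatoric:.3f}  epistemic={epistemic:.3f}  total={total:.3f}")

In such a setup, the epistemic term shrinks as the models see more training data, while the aleatoric term reflects noise that no amount of data removes.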

The project will evaluate the proposed methodologies on real-world datasets from different healthcare organisations.

This project commenced in April 2023 with the recruitment of PhD researcher Eslam Zaher, who is based at The University of Queensland. Eslam is supervised by Chief Investigator Prof. Fred Roosta-Khorasani and Affiliate Investigators Dr Quan Nguyen and Dr Maciej Trzaskowski.

The team collaborated with Max Kelsen, a Brisbane-based artificial intelligence and software engineering agency, during early 2023. Max Kelsen was acquired by Bain & Company in 2023.


Project Team

Eslam Zaher

PhD Researcher

Prof. Fred Roosta-Khorasani

Chief Investigator

Dr Quan Nguyen

Affiliate Investigator

Dr Maciej Trzaskowski

Affiliate Investigator



