About the Project
Deep neural networks (DNNs) are limited in their capacity to safely assist scientific discovery and decision-making until a key pitfall is addressed: while DNNs excel at exploiting non-linear patterns in very large, high-dimensional datasets, they can fail catastrophically and without warning under dataset shift, i.e., changes in the data distribution.
This project, in collaboration with industry partner Max Kelsen, will study ways to resolve this pitfall by characterising, detecting, and generalising under dataset shift. The research will theoretically unify a sparse and inconsistent literature, and empirically validate that theory in genomics applications, with the results informing ways to maximise the reliability of learning systems under dataset shift. It focuses on developing techniques and methodologies grounded in unsolved challenges in computational biology and multi-modal healthcare data. Max Kelsen has active research, development, and consulting activities in AI and cancer genomics, and has prioritised AI safety as a key ingredient of any new product prior to deployment.
About the Team
The project commenced in October 2021 with the recruitment of the Centre’s first PhD Researcher, Sam MacDonald, who is based at The University of Queensland. Sam is investigating the proposed methodologies on real datasets from different healthcare organisations, and is supervised by Chief Investigator Dr Fred Roosta-Khorasani from the School of Mathematics and Physics (UQ), Dr Quan Nguyen from the Institute for Molecular Bioscience (IMB) at UQ, and experts from the Max Kelsen team.
This project is one of two CIRES projects with Max Kelsen related to organisational and transformational aspects of data, algorithms, and AI. The second project, Interpretable AI – Theory and Practice, will commence in late April 2023.
Project Researchers
Dr Quan Nguyen
Sam MacDonald (PhD Researcher)

Partner Investigator
