About the Project When humans label data to train AI models, their own biases and stereotypes may be reflected in the data and consequently absorbed by the resulting trained models, leading to unfair, biased, and non-transparent decisions. In collaboration with the...
In collaboration with Queensland Health, this project aims to provide a platform-independent decision support framework that uses an interpretable machine learning approach to make effective risk predictions for paediatric patients at risk of...
The traditional hospital-focused model of care neglects monitoring and treating diseases at home. A number of intelligent monitoring systems exist for predicting clinical abnormalities in patients confined to hospital beds, but few attempts...
Deep neural networks (DNNs) are limited in their capacity to safely assist scientific discovery and decision making until a particular pitfall is addressed. While DNNs succeed at exploiting non-linear patterns in very large and high-dimensional...
A major bottleneck for enterprises adopting AI is the difficulty of selecting, applying, and interpreting the correct method for a given problem. This project will survey available interpretable methods in AI and communicate best practices in both lay and...