Bias Mitigation in Human-in-the-Loop Decision Systems

About the Project When humans label data to train AI models, their own biases and stereotypes may be reflected in the data and consequently appear in the resulting trained models, leading to unfair, biased, and non-transparent decisions. In collaboration with the...

Advancing Deep Neural Network Reliability During Dataset Shift

About the Project Deep neural networks (DNNs) are limited in their capacity to safely assist scientific discovery and decision-making until a particular pitfall is addressed. While DNNs succeed in exploiting non-linear patterns in very large and high-dimensional...

Interpretable AI: Theory and Practice

About the Project A major bottleneck for enterprises adopting AI is the difficulty of selecting and interpreting the correct method for a given problem. This project will survey available interpretable methods in AI and communicate best practices in both lay and...