Explainable Artificial Intelligence in education

There are emerging concerns about the Fairness, Accountability, Transparency, and Ethics (FATE) of educational interventions supported by Artificial Intelligence (AI) algorithms. One emerging method for increasing trust in AI systems is eXplainable AI (XAI), which promotes techniques that produce transparent explanations and reasons for the decisions AI systems make. This paper explores what is common and what is different between educational and broader uses of AI, why we need XAI in education, how XAI can help current and future learners and educational systems, and what the open research questions for XAI in education are.

The paper is a collaboration between CIRES Chief Investigator Dr Hassan Khosravi, Centre Director Professor Shazia Sadiq, and colleagues at the University of Technology Sydney, Monash University, The University of British Columbia, and The University of Sydney. It presents a framework, referred to as XAI-ED, that considers six key aspects of explainability for studying, designing, and developing educational AI tools: the stakeholders, the benefits, approaches for presenting explanations, widely used classes of AI models, human-centred designs of AI interfaces, and potential pitfalls of providing explanations within education.

Access the full paper here.
