Explainable Artificial Intelligence in education

Khosravi, H., Shum, S. B., Chen, G., Conati, C., Gasevic, D., Kay, J., Knight, S., Martinez-Maldonado, R., Sadiq, S., & Tsai, Y.-S. (2022). Explainable Artificial Intelligence in education. Computers and Education: Artificial Intelligence. https://doi.org/10.1016/j.caeai.2022.100074

Abstract: There are emerging concerns about the Fairness, Accountability, Transparency, and Ethics (FATE) of educational interventions supported by the use of Artificial Intelligence (AI) algorithms. One of the emerging methods for increasing trust in AI systems is eXplainable AI (XAI), which promotes methods that produce transparent explanations and reasons for the decisions that AI systems make. Considering the existing literature on XAI, this paper argues that XAI in education shares commonalities with the broader use of AI but also has distinctive needs. Accordingly, we first present a framework, referred to as XAI-ED, that considers six key aspects in relation to explainability for studying, designing and developing educational AI tools. These key aspects focus on the stakeholders, benefits, approaches for presenting explanations, widely used classes of AI models, human-centred design of AI interfaces and potential pitfalls of providing explanations within education. We then present four comprehensive case studies that illustrate the application of XAI-ED in four different educational AI tools. The paper concludes by discussing opportunities, challenges and future research needs for the effective incorporation of XAI in education.


Information Resilience: the nexus of responsible and agile approaches to information use

Sadiq, S., Aryani, A., Demartini, G., Hua, W., Indulska, M., Burton-Jones, A., Khosravi, H., Benavides-Prado, D., Sellis, T., Someh, I., Vaithianathan, R., Wang, S., & Zhou, X. (2022). Information Resilience: the nexus of responsible and agile approaches to information use. The VLDB Journal. https://doi.org/10.1007/s00778-021-00720-2

Abstract: The appetite for effective use of information assets has been steadily rising in both public and private sector organisations. However, whether the information is used for social good or commercial gain, there is a growing recognition of the complex socio-technical challenges associated with balancing the diverse demands of regulatory compliance and data privacy, social expectations and ethical use, business process agility and value creation, and scarcity of data science talent. In this vision paper, we present a series of case studies that highlight these interconnected challenges, across a range of application areas. We use the insights from the case studies to introduce Information Resilience, as a scaffold within which the competing requirements of responsible and agile approaches to information use can be positioned. The aim of this paper is to develop and present a manifesto for Information Resilience that can serve as a reference for future research and development in relevant areas of responsible data management.


Improving Social Alignment During Digital Transformation

Burton-Jones, A., Gilchrist, A., Green, P., & Draheim, M. (2020). Improving Social Alignment During Digital Transformation. Communications of the ACM. https://doi.org/10.1145/3410429

Abstract: Exploring what leaders can do to improve and sustain social alignment over time.