Research Insight: AI Explainability & Transparency

Publication and research insights from the CIRES team at Swinburne University of Technology. Great work by PhD researcher Lufan Zhang and Chief Investigator Assoc. Prof Paul Scifleet. Thank you to our industry partner Astral for supporting this work. 

AI Explainability & Transparency in Enterprise Information Management (EIM)

Happy to share our paper “Charting the Transformation of Enterprise Information Management: AI Explainability and Transparency in EIM Practice” presented at the 16th International Conference on Knowledge Management and Information Systems.

Together with Paul Scifleet, we examine how AI (including Gen-AI) is reshaping the way enterprises manage critical information—and how to make these transformations more explainable and trustworthy for EIM practitioners.

Special thanks to the ARC Training Centre for Information Resilience (CIRES), Swinburne University of Technology, and our industry partner Astral for supporting this work.

Read the full paper here: https://shorturl.at/xHSoO

Our Approach:
We conducted an environmental scan of 20 leading EIM vendor platforms, analysing their publicly available content to explore how AI is being integrated into their offerings. We examined AI’s role across five areas:
▶️ AI Development
▶️ AI Techniques
▶️ AI-integrated EIM Capabilities
▶️ AI Applications
▶️ AI Impacts on EIM Practice
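To make the scan concrete, the coding scheme above can be sketched as a simple tally: each platform is coded for whether its public content meaningfully discloses each of the five areas, then disclosures are counted per area. This is a hypothetical illustration only; the platform names and codings below are invented and do not come from the paper.

```python
# Illustrative sketch of the environmental-scan tally.
# All platform names and codings are fictional examples.

AREAS = [
    "AI Development",
    "AI Techniques",
    "AI-integrated EIM Capabilities",
    "AI Applications",
    "AI Impacts on EIM Practice",
]

# True = the area is meaningfully disclosed in the platform's public content.
codings = {
    "PlatformA": {area: True for area in AREAS},
    "PlatformB": {**{area: False for area in AREAS}, "AI Applications": True},
    "PlatformC": {**{area: True for area in AREAS},
                  "AI Impacts on EIM Practice": False},
}

def disclosure_counts(codings, areas):
    """Count how many platforms disclose each area."""
    return {area: sum(1 for platform in codings.values() if platform[area])
            for area in areas}

counts = disclosure_counts(codings, AREAS)
```

With a full set of 20 coded platforms, the same tally yields the per-area disclosure rates reported in the findings below.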

Key Findings
AI Development:
Only 12 of the 20 platforms disclose details about how their AI is developed, leaving 40% with minimal transparency. Fewer than half provide adequate information on model training data, explainability features, or human-AI interaction.
AI Techniques:
Details on how specific AI techniques are applied across the EIM lifecycle are sparse. Most platforms use only generic terms such as “AI” and “ML”, providing no useful information for determining which AI tools suit specific IM needs.
AI Applications & AI-integrated EIM Capabilities:
Many vendors highlight lower-level AI applications but do not fully connect these to higher-level EIM capabilities.
AI Impacts:
While benefits are frequently promoted, risks, limitations, and proven real-world outcomes are rarely addressed—raising concerns for transparency and trust.

Overall, these findings suggest a critical need for contextualised AI transparency in the EIM landscape. This includes both ✅ information transparency (disclosing information relevant to EIM practitioners) and ✅ transparency-in-use (intuitive user interfaces and a human in the loop). Together, these interdependent concepts enable better explainability, supporting understanding, trust, and adoption of AI applications by EIM practitioners.

Future Directions
⭐ Engage with IM practitioners to validate and refine the requirements for AI transparency within the EIM context.
⭐ Develop Explainable AI approaches that support human agency in real-world EIM practices.

How can we make AI more explainable to EIM practitioners to empower their daily IM practices?
