A huge congratulations to our CIRES PhD Researcher, Daisy Xu, who is the winner of the 2025 DeSanctis Award, presented by the Communication, Digital Technology, and Organization (CTO) division of the Academy of Management. The award recognizes outstanding scholarship in communication and digital technology, specifically a solo-authored conference paper based on a recent dissertation.
Month: July 2025
Kingston AI Group plans continuing advocacy for Australia
On 24 July, CIRES Centre Director Professor Shazia Sadiq and the UQ Centre for Enterprise AI hosted the Kingston AI Group meeting at The University of Queensland’s St Lucia campus.
The meeting allowed members to come together and strategise their advocacy for Australian AI sovereign capability, discuss emerging AI fields, and identify the clearest, most effective ways to drive the group’s engagement and advocacy with the federal government, including the Prime Minister.
Mike Bareja, Director of Digital Technologies, AI, Cyber and Future Industries at the Business Council of Australia (BCA) was in attendance, leading an informative discussion on BCA’s take on the role of AI in Australia’s corporate landscape. He also provided an overview of “Accelerating Australia’s AI Agenda,” a BCA report released in June that has been enthusiastically endorsed by the Kingston AI Group.
Much of the meeting was spent discussing AI’s impact on the economy as well as ways to increase investment in AI R&D in Australia. Members also discussed the importance of protecting Australian culture and values in AI, and increasing awareness of Australia’s burgeoning AI industry while raising the skills and capabilities of those working within it.
To read more, please visit ➡️ https://lnkd.in/ggpAkM-Z
Photo caption: Participants and visitors at the 24 July Kingston AI Group meeting in Brisbane (L-R): Dr Nisha Schwarz, Dr Rocky Chen, Prof Shazia Sadiq, Dr Joel Mackenzie, Dr Kathy Nicholson, Prof Shane Culpepper, Dr Sue Keay, Dr Paul Dalby, Prof Michael Milford, Prof Simon Lucey, BCA’s Mike Bareja, Prof Anton van den Hengel, Prof Benjamin Rubinstein, Prof Stephen Gould, Prof Ian Reid, Prof Ajmal Mian, and Prof Marta Indulska.
Not pictured but in attendance: Prof Joanna Batstone, Prof Dana Kulic, and Prof Toby Walsh FAA FTSE FRSN.
CIRES Hosts Panel on Cross-disciplinary research
Cross-disciplinary research fosters innovation by integrating diverse perspectives, leading to more holistic and impactful solutions. Complex global problems rarely fit neatly within disciplinary boundaries, and collaboration across fields is essential to address challenges that no single discipline can solve alone. As they say “Teamwork Makes The Dream Work”!
On 15 July, CIRES hosted a Q&A discussion at The University of Queensland as part of our Lunch & Learn series with guest speakers Professor Xue Li, Dr Aneesha Bakharia, and Dr Avijit Sengupta. These UQ experts span Computer Science, AI, and Business Information Systems, and key application areas of Education, Health, and Technology. They shared insights into:
- what makes cross-disciplinary research successful,
- how to build and sustain collaborative teams,
- how students can benefit from this approach in both academic and industry pathways.
They also discussed how different aspects of cross-disciplinary research, including collaboration, ethics, and decision-making, could be transformed by the proliferation of generative AI.
“It was inspiring to hear from researchers across disciplines sharing not only their successes but also the real challenges of collaboration. Cross-disciplinary research pushes us to rethink assumptions and explore unexpected connections.” – Dr Zixin Wang, CIRES Postdoctoral Research Fellow.
“I learned that successful cross-disciplinary research depends on firstly understanding the specific and most important problems and needs of other domains, to effectively apply one’s expertise. For junior researchers, this involves strategically focusing on a publishable core contribution, ensuring clear communication, and prioritising critical aspects, like data quality, and incorporating a human in the loop, for responsible system deployment.” – Dr Javad Pool, CIRES Postdoctoral Research Fellow.
“Attending the session provided me with insights into the complexity of cross-disciplinary research and valuable lessons from experienced researchers. My main takeaways are ensuring we solve problems faced by domain experts and be ready to learn different things!” – Nova Sepadyati, UQ PhD Researcher
Huge thanks to our speakers for sharing their valuable experience with the group and to our CIRES Postdoc Team – Stanislav Pozdniakov, Javad Pool, Zixin Wang, and Xuwei (Ackesnal) Xu – for organising such a thought-provoking session!
Research Insight: Personally Identifiable Information (PII)
Congratulations to CIRES PhD Researcher Pa Pa Khin and Chief Investigator A/Prof Paul Scifleet from Swinburne University of Technology on their conference paper acceptance! Pa Pa will travel to Canada next month to present at AMCIS 2025, organised by the Association for Information Systems. Thanks to our industry partner Astral for supporting this work. Full details from Pa Pa below.
“I am happy to share that our paper, “From Chaos to Clarity: Identifying and Managing Personally Identifiable Information in Systems of Engagement”, has been accepted at the Americas Conference on Information Systems (AMCIS) 2025.
Together with A/Prof Paul Scifleet, we explore the significant challenges organisations face in identifying and managing Personally Identifiable Information (PII) within Systems of Engagement, as described in current industry discourse. Based on our findings, we develop a locus of control for sensitive and vital information management, with five key elements: (i) identification and location of information assets, (ii) traceability, (iii) protection and security, (iv) compliance and governance, and (v) use and value creation.”
Read the full paper here: https://lnkd.in/gx3RYjQe
Research Insight: Budgeted Causal Effect Estimation
Two excellent publications and research insights from the CIRES team at The University of Queensland, including PhD researcher Hechuan Wen, CIs Dr. Rocky Chen, Prof. Hongzhi Yin, Centre Director Prof. Shazia Sadiq, and colleagues. Thank you to our partners Dr. Li Kheng Chai and Health and Wellbeing Queensland for supporting this work.
Delighted to share our two accepted works in FY2024–2025: “Progressive Generalization Risk Reduction for Data-Efficient Causal Effect Estimation” and “Enhancing Treatment Effect Estimation via Active Learning: A Counterfactual Covering Perspective”, accepted at KDD’25 and ICML’25, respectively.
Together with my supervisors and collaborators Dr. Rocky Chen, Dr. Li Kheng Chai, Dr. Guanhua Ye, A/Prof. Mingming Gong, Prof. Hongzhi Yin, and Prof. Shazia Sadiq, we study the theoretical foundations of budgeted causal effect estimation and propose a simple yet effective data acquisition scheme to “valuate” unlabelled data and prioritise the budget on labelling the most informative points. Huge thanks to the ARC Training Centre for Information Resilience (CIRES), Health and Wellbeing Queensland, and The University of Queensland for supporting this work!
Read the papers: https://lnkd.in/gH4HJwg7 and https://lnkd.in/gp_7xtGj.
What we did
Through rigorous theoretical analysis, we identify optimizable quantities that serve as guidelines for “valuating” unlabelled data points and improving the efficiency of budget spending. In other words, given a vast unlabelled data pool, the labelling budget can be directed where it most improves the dataset being built for causal effect estimation.
Key Insights
► The most valuable unlabelled pair (one control unit and one treated unit) is the one with the highest estimation variance and the smallest distance between the two units.
► The overall estimation risk, which cannot be computed directly, can be tightly bounded by computable quantities, namely the factual and counterfactual covering radii, giving a theoretical grounding for unlabelled data valuation and selection.
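As a rough illustration of the acquisition idea above, one could score candidate control-treated pairs by combined estimation variance per unit of distance, then label the top-scoring pair first. All function names and the exact scoring formula here are hypothetical simplifications, not the papers’ actual formulation:

```python
import numpy as np

def valuate_pairs(X_control, X_treated, var_control, var_treated):
    """Score every (control, treated) candidate pair.

    Higher combined estimation variance and smaller distance between
    the two units yield a higher score. Inputs are illustrative
    stand-ins: covariate matrices and per-unit variance estimates.
    """
    # Pairwise Euclidean distances between control and treated units
    # via broadcasting: result has shape (n_control, n_treated).
    dists = np.linalg.norm(
        X_control[:, None, :] - X_treated[None, :, :], axis=-1)
    # Combined estimation variance for each candidate pair.
    var = var_control[:, None] + var_treated[None, :]
    # Heuristic value: variance per unit distance (epsilon for stability).
    return var / (dists + 1e-8)

def select_pair(scores):
    """Return (control_idx, treated_idx) of the highest-value pair."""
    return np.unravel_index(np.argmax(scores), scores.shape)
```

Repeatedly selecting, labelling, and removing the best pair until the budget is exhausted gives a greedy sketch of the acquisition loop; the actual papers derive the selection criterion from the covering-radius bounds rather than this heuristic ratio.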
Forward
The computational cost of running the proposed algorithm on very large unlabelled pools is considerable, so future work on improving the algorithm’s scalability is worth exploring.
What opportunities or risks do you see in building up the dataset for model training from scratch?
Research Insight: AI Explainability & Transparency
Publication and research insights from the CIRES team at Swinburne University of Technology. Great work by PhD researcher Lufan Zhang and Chief Investigator Assoc. Prof Paul Scifleet. Thank you to our industry partner Astral for supporting this work.
AI Explainability & Transparency in Enterprise Information Management (EIM)
Happy to share our paper “Charting the Transformation of Enterprise Information Management: AI Explainability and Transparency in EIM Practice” presented at the 16th International Conference on Knowledge Management and Information Systems.
Together with Paul Scifleet, we examine how AI (including Gen-AI) is reshaping the way enterprises manage critical information—and how to make these transformations more explainable and trustworthy for EIM practitioners.
Special thanks to ARC Training Centre for Information Resilience (CIRES), Swinburne University of Technology, and our industry partner Astral for supporting this work.
Read the full paper here: https://shorturl.at/xHSoO
Our Approach:
We conducted an environmental scan of 20 leading EIM vendor platforms, analysing their publicly available content to explore how AI is being integrated into EIM platforms. We examined AI’s role across five areas:
▶️ AI Development
▶️ AI Techniques
▶️ AI-integrated EIM Capabilities
▶️ AI Applications
▶️ AI Impacts on EIM Practice
Key Findings
AI Development:
Only 12 of 20 platforms disclose details about how AI is developed—leaving 40% with minimal transparency. Fewer than half provide adequate information on model training data, explainability features, or human-AI interaction.
AI Techniques:
Few platforms provide detail on how specific AI techniques are applied across the EIM lifecycle. Most rely on generic terms such as “AI” and “ML”, offering little useful information for determining which AI tools suit specific IM needs.
AI Applications & AI-integrated EIM Capabilities:
Many vendors highlight lower-level AI applications but do not fully connect these to higher-level EIM capabilities.
AI Impacts:
While benefits are frequently promoted, risks, limitations, and proven real-world outcomes are rarely addressed—raising concerns for transparency and trust.
Overall, these findings suggest a critical need for contextualised AI transparency in the EIM landscape, comprising both ✅ information transparency (disclosing information relevant to EIM practitioners) and ✅ transparency-in-use (intuitive user interfaces and a human in the loop). Together, these interdependent concepts enable better explainability, supporting understanding, trust, and adoption of AI applications by EIM practitioners.
Future Directions
⭐ Engage with IM practitioners to validate and refine the requirements for AI transparency within the EIM context.
⭐ Develop Explainable AI approaches that support human agency in real-world EIM practices.
How can we make AI more explainable to EIM practitioners to empower their daily IM practices?