CIRES @ AJCAI-25: Towards Resilient AI Systems

Great experience representing CIRES at AJCAI 2025 in Canberra!

On 2 December, CIRES hosted both a workshop and a panel discussion on resilient and responsible AI systems. A special highlight was having Dr Yanbin Liu join our panel; his insights added great depth to the discussion!

Thanks to Xuwei (Ackesnal) Xu, Haodong Hong and Zhuoxiao Chen for their excellent presentations, and to everyone who attended our workshop.

And special thanks to the organiser, CIRES Postdoctoral Researcher Dr Zixin Wang, for her dedication and excellent coordination.

UQ Guide on Responsible Use of Generative AI in Research

CIRES Centre Director, Professor Shazia Sadiq contributed to creating the UQ Guide on Responsible Use of Generative AI in Research, available to all staff at The University of Queensland. The Guide outlines risks and opportunities across the research lifecycle—idea generation, literature review, hypothesis development, coding, data analysis, manuscript preparation, and visualisation. It includes discipline-specific examples, guidance for grant applications (ARC/NHMRC), disclosure requirements, and an appendix on data handling, copyright, and privacy implications.

UQCEAI Workshop: Proof of Concept to Production for Enterprise AI

On 12 November 2025, the UQ Centre for Enterprise AI hosted the workshop “Getting from Proof of Concept to Production for Enterprise AI.” With over 50 participants joining from industry, government, and academia, the conversation focused on the systemic challenges organisations face in implementing Enterprise AI at scale, and the key measures to consider when evaluating success in AI adoption and integration. The panel discussion, moderated by Nicole Hartley, focused on unlocking AI value beyond POC and what it takes to get there. Panel members were Ryan van Leent (SAP), Nathan Bines (Qld Gov’t), Ida Asadi (UQBS) and Joel Mackenzie (EECS), with the event led by CIRES Centre Director, Prof Shazia Sadiq, and CIRES Centre Research Director, Prof Marta Indulska (photo below). More details are available here, and you can find out more about the Centre here.

QPS Partner at International Association of Chiefs of Police Conference

CIRES Partner Investigator and Queensland Police Service (QPS) staff member, Mr Nicholas Moss, attended the International Association of Chiefs of Police (IACP 2025) Annual Conference and Exposition in Denver, Colorado, USA, 18-21 October 2025. Nicholas presented results from the CIRES-QPS project on community engagement on the use of data. This project aims to understand community attitudes towards data analytics, particularly in policing.

UQ-UZH Symposium and Public Lecture

On 7-8 October, CIRES CI Professor Gianluca Demartini participated in the UQ-University of Zurich (UZH) Symposium: Challenges and Opportunities for Social and System Change, presenting an update on the joint UQ-UZH project on Digital Deliberative Democracy (d3-project.ch) funded by the Swiss National Science Foundation, as well as serving as the expert responder for the UQ-UZH public lecture A Toolbox for Human-AI Collaboration in Lifespan Health Analysis.

Empowering Learners in the Age of AI

On 8 October, CIRES CI A/Prof Hassan Khosravi moderated a panel as part of the free 2025 event on Empowering Learners in the Age of AI.

Hassan moderated this panel with Professor Jason M. Lodge (The University of Queensland), Dr Aneesha Bakharia (The University of Queensland) and Peter Xing (Microsoft) as distinguished panellists, bringing deep expertise and diverse perspectives to this important conversation.

Why this matters
The use of Large Language Models (LLMs) has demonstrated clear performance gains for students. Yet performance is only part of the story. Scholars caution that polished outputs may come at the expense of genuine learning, as students risk offloading critical thinking and problem-solving to AI. This panel explores how we can move beyond productivity to design AI learning companions that prioritise learning gains over performance gains, nurturing curiosity, understanding, and deeper engagement. 

Paper: Understanding failures in health data protection

Many health data breaches aren’t just caused by hackers. Inadequate processes and irresponsible use of health data often create opportunities for serious cybersecurity incidents. In our study, experts recounted staff admitting, “I didn’t know, nobody told me,” or using personal Gmail for sensitive communications. One cybersecurity expert observed, “[Healthcare ecosystems] are not good at these things [data protection by design]. We say that they don’t bake in security; they just bake the cake and spray on some [cyber]security.”

Our mixed-methods study, published in Behaviour & Information Technology (open access), explored these critical vulnerabilities in health data protection. We gathered insights from cybersecurity and privacy experts across 14 countries, including CISOs, IT security officers, researchers, privacy managers, and Data Protection Officers.

We identified 30 failure factors and, using the People-Process-Technology framework, unpacked the top seven:

  • People: non-compliant behaviour and a lack of cybersecurity awareness
  • Process: inadequate risk management, weak data integrity monitoring, and a lack of breach response and recovery plans
  • Technology: unsecure third-party applications and a lack of data protection by design

These factors often interlink, creating complex vulnerabilities. With the growing adoption of big data analytics and AI in healthcare, understanding these failure points is crucial. Our model offers actionable insights for healthcare organisations to strengthen data protection, develop mitigation policies, and reduce the risk of breaches, ensuring safer care and maintaining trust.

Towards a model for understanding failures in health data protection: a mixed-methods study, Javad Pool, Saeed Akhlaghpour, Farkhondeh Hassandoust, Farhad Fatehi & Andrew Burton-Jones

AI Horizons – Conversations with Australia’s leading and emerging researchers

On 22 September, CIRES Centre Director Prof. Shazia Sadiq FTSE hosted AI Horizons – Conversations with Australia’s leading and emerging researchers, an inspiring event organised by the Australian Academy of Technological Sciences & Engineering (ATSE). The event brought together brilliant minds in AI—from established experts to rising stars—to explore the future of artificial intelligence in Australia.

Speakers: Dr Sue Keay FTSE (UNSW AI Institute), Dr Scarlett Raine and Prof. Michael Milford FTSE (QUT Centre for Robotics), and Hung Lee and Distinguished Prof. Svetha Venkatesh FTSE FAA (Deakin University).

Watch on YouTube: AI horizons – Conversations with Australia’s leading and emerging researchers

Research Insight: EMIT, a New Approach for Irregular Time Series in Healthcare AI

Excited to share our paper “EMIT: Event-Based Masked Auto Encoding for Irregular Time Series” published at ICDM 2024. Together with A/Prof. Sen Wang, Dr Ruihong Qiu, A/Prof. Adam Irwin and Prof. Shazia Sadiq, we explore how irregular time series (like vital signs and lab results recorded at uneven intervals) challenge existing AI models and how our proposed framework, EMIT, improves clinical decision support through better representation learning. Special thanks to CIRES, Queensland Health and The University of Queensland for supporting this research.

Read full paper at https://arxiv.org/pdf/2409.16554

Our Approach
We introduce EMIT, a pretraining framework based on transformer architecture, tailored for irregular clinical time series data. EMIT learns by:

  • Finding important points in irregular time series
  • Pretraining by masking and predicting those points
  • Using the pretrained model for any downstream task (e.g., outcome prediction)
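For readers curious what event-based masking might look like in practice, here is a minimal sketch. This is our own simplified illustration for intuition, not the authors' implementation: the threshold-based event detector and all function names are assumptions.

```python
import numpy as np

def find_event_points(values, threshold=0.5):
    """Flag observations whose jump from the previous value exceeds a threshold.

    A crude stand-in for the notion of 'important' (event) points in an
    irregularly sampled series, e.g. a sudden change in a vital sign.
    """
    diffs = np.abs(np.diff(values, prepend=values[0]))
    return diffs > threshold

def mask_events(values, event_mask, mask_value=0.0):
    """Hide the event observations; the pretraining task is to predict them back."""
    masked = values.copy()
    masked[event_mask] = mask_value
    targets = values[event_mask]  # ground truth for the reconstruction loss
    return masked, targets

# Irregularly timed lab results: timestamps are uneven, and the value jumps at t=3.1
times = np.array([0.0, 0.7, 1.1, 3.1])
values = np.array([1.0, 1.1, 2.5, 2.6])
events = find_event_points(values)           # only the jump to 2.5 counts as an event
masked, targets = mask_events(values, events)
```

A transformer encoder would then be pretrained to reconstruct `targets` from `masked` together with `times`, before being fine-tuned on a downstream task such as outcome prediction.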

Key Findings

Improved Representation Learning: EMIT captures important variations without losing timing information, outperforming generic pretext approaches for irregular time series.

Data Efficiency: On benchmark healthcare datasets (MIMIC-III & PhysioNet Challenge 2012), EMIT achieved strong results using only 50% of labeled data, reducing reliance on costly annotations.

Task Relevance: By designing pretext tasks specific to irregular time series, EMIT delivers more reliable clinical predictions compared to standard forecasting approaches.

How can we design AI that adapts to the messy, irregular reality of clinical data while still delivering trustworthy predictions?

Made in Australia – Our AI Opportunity

On 22 August, the Australian Academy of Technological Sciences & Engineering (ATSE) released Made in Australia – Our AI Opportunity, a bold action statement co-authored by CIRES Centre Director Shazia Sadiq and CIRES Strategy Board Member Sue Keay, calling for strategic investment in sovereign AI capability. The report proposes a mission-based approach, including the creation of AI factories—regional hubs for talent, research, and innovation—to ensure Australia’s position as a global leader in safe, sustainable, ethical, and high-impact sovereign AI.

Read the Made in Australia AI Action Statement.

Artificial intelligence (AI) is radically reshaping work, education and security in Australia, and is officially recognised as a critical technology in government policy. How we harness it will impact the nation’s economic prosperity, national security and continuing innovation.

The global race to build AI capabilities is accelerating, and it is incumbent on us to harness our comparative advantages and secure control of our data and digital systems. Without timely and comprehensive public and private investments in sovereign AI capability, Australia runs the risk of becoming dependent on foreign technology providers with their own commercial and national interests.

Australia already has the ingredients to develop sovereign AI capability, and is ready to leverage these, with appropriate government leadership and investment. ATSE proposes a mission-based approach, with AI factories located across Australia: the jewel in the AI crown around which talent and partnerships will develop. This statement outlines how targeted investment in a strong national AI capability can position Australia as a global leader in safe, sustainable, ethical and high-impact sovereign AI. It shows how these investments will give us the autonomy we want as a nation while enhancing productivity and preparing the nation for future transitions in manufacturing and knowledge work, unlocking value across the entire economy.

This statement builds on ATSE’s 2022 vision statement, Strategic Investment in Australia’s Artificial Intelligence Capacity.

CIRES is 4!

On 20 August 2025 we celebrated our Centre’s 4th birthday with UQ and Swinburne colleagues, and four years of research collaboration, impactful partnerships, and a growing community dedicated to building a more resilient, inclusive, and ethical digital future. Since our launch in 2021, CIRES has:

  • Delivered cutting-edge research in human-centred AI and information resilience
  • Fostered strong collaborations across academia, industry, and government
  • Supported the next generation of researchers and innovators
  • Helped shape national conversations on responsible technology

We reviewed our 2021-2025 YTD performance stats (see pics), and after that effort, we thought we definitely deserved two cakes to celebrate.

From our Director, Professor Shazia Sadiq FTSE: “CIRES was founded with a bold vision — to reduce socio-technical barriers to data driven transformation. Four years on, I’m proud of how far we’ve come and grateful to our team and collaborators who continue to pursue our mission of Information Resilience.”

We’re proud of what we’ve achieved — and even more excited about what’s ahead, including our first PhD graduates. Thank you to our researchers, partners, and supporters who have been part of this journey.

Research Insight: Personally Identifiable Information (PII)

CIRES PhD Researcher Pa Pa Khin travelled to Canada in August to present at AMCIS 2025 on the challenges industries face in identifying and managing Personally Identifiable Information (PII) within Systems of Engagement. It was a great opportunity to emphasise the importance of controlling the sensitive and vital information we share and use, whether informally, ad hoc, or formally, across diverse collaboration and communication systems, and the value it creates. Pa Pa’s work introduces a foundational framework: a locus for control with five key elements.

“I am happy to share that our paper, “From Chaos to Clarity: Identifying and Managing Personally Identifiable Information in Systems of Engagement”, was presented at the Americas Conference on Information Systems (AMCIS) 2025, organised by the Association for Information Systems. Together with A/Prof Paul Scifleet, we explore the significant challenges organisations face in identifying and managing Personally Identifiable Information (PII) within Systems of Engagement, as described in the current industry discourse. Based on our findings, we develop a locus for control for sensitive and vital information management with five key elements: (i) the identification and location of information assets, (ii) their traceability, (iii) protection and security, (iv) compliance and governance, and (v) use and value creation.”

Thanks to CIRES and our industry partner Astral for supporting this work. 

Research Insight: Knowledge Tracing

Congratulations to our PhD researcher Mehrnoush Mohammadi who recently presented at the 26th International Conference on Artificial Intelligence in Education (AIED2025) 22-26 July 2025 in Palermo, Italy. This year’s conference theme was AI as a catalyst for inclusive, personalised, & ethical education, to empower teachers & students for an equitable future. Full details and link to paper below.

“I had the opportunity to present a poster on our accepted paper: “Knowledge Tracing with A Temporal Hypergraph Memory Network”.

Research Spotlight: This work presents THMN, a Temporal Hypergraph Memory Network, a hybrid Knowledge Tracing model that combines memory-augmented networks with temporal hypergraph reasoning to capture dynamic, high-order concept interactions over time. By modeling how a student’s understanding of concepts shifts across diverse question contexts and scaling updates based on practice diversity, THMN delivers composition-aware, interpretable predictions and consistently outperforms state-of-the-art KT models across four benchmark datasets.

It was an incredible experience connecting with researchers, exchanging ideas, and sharing our work with the global AI in Education community.

Special thanks to my amazing co-authors Dr. Kamal Berahmand, CIRES Centre Director Prof. Shazia Sadiq, and CIRES Chief Investigator Dr. Hassan Khosravi for their incredible collaboration, and to the AIED community for the warm welcome and insightful feedback.”

Kingston AI Group plans continuing advocacy for Australia

On 24 July, CIRES Centre Director, Professor Shazia Sadiq, and the UQ Centre for Enterprise AI hosted the Kingston AI Group meeting at The University of Queensland’s St Lucia campus.

The meeting allowed members to come together and strategise their advocacy for Australian AI sovereign capability, discuss emerging AI fields, and identify the clearest, most effective ways to drive the group’s engagement and advocacy with the federal government, including the Prime Minister.

Mike Bareja, Director of Digital Technologies, AI, Cyber and Future Industries at the Business Council of Australia (BCA) was in attendance, leading an informative discussion on BCA’s take on the role of AI in Australia’s corporate landscape. He also provided an overview of “Accelerating Australia’s AI Agenda,” a BCA report released in June that has been enthusiastically endorsed by the Kingston AI Group.

Much of the meeting was spent discussing AI’s impact on the economy as well as ways to increase investment in AI R&D in Australia. Members also discussed the importance of protecting Australian culture and values in AI, and increasing awareness of Australia’s burgeoning AI industry while raising the skills and capabilities of those working within it.

To read more, please visit ➡️ https://lnkd.in/ggpAkM-Z

Photo caption: Participants and visitors to the 24 July Kingston AI Group meeting in Brisbane (L-R): Dr Nisha Schwarz, Dr Rocky Chen, Prof Shazia Sadiq, Dr Joel Mackenzie, Dr Kathy Nicholson, Prof Shane Culpepper, Dr Sue Keay, Dr Paul Dalby, Prof Michael Milford, Prof Simon Lucey, BCA’s Mike Bareja, Prof Anton van den Hengel, Prof Benjamin Rubinstein, Prof Stephen Gould, Prof Ian Reid, Prof Ajmal Mian, and Prof Marta Indulska.

Not pictured but in attendance: Prof Joanna Batstone, Prof Dana Kulic, and Prof Toby Walsh FAA FTSE FRSN.

CIRES Hosts Panel on Cross-disciplinary research

Cross-disciplinary research fosters innovation by integrating diverse perspectives, leading to more holistic and impactful solutions. Complex global problems rarely fit neatly within disciplinary boundaries, and collaboration across fields is essential to address challenges that no single discipline can solve alone. As they say “Teamwork Makes The Dream Work”!

On 15 July, CIRES hosted a Q&A discussion at The University of Queensland as part of our Lunch & Learn series with guest speakers Professor Xue Li, Dr Aneesha Bakharia, and Dr Avijit Sengupta. These UQ experts span Computer Science, AI, and Business Information Systems, and key application areas of Education, Health, and Technology. They shared insights into:

  • what makes cross-disciplinary research successful,
  • how to build and sustain collaborative teams,
  • how students can benefit from this approach in both academic and industry pathways.

They also discussed how different aspects of cross-disciplinary research, including collaboration, ethics, and decision-making, could be transformed by the proliferation of Generative AI.

“It was inspiring to hear from researchers across disciplines sharing not only their successes but also the real challenges of collaboration. Cross-disciplinary research pushes us to rethink assumptions and explore unexpected connections.” – Dr Zixin Wang, CIRES Postdoctoral Research Fellow.

“I learned that successful cross-disciplinary research depends on firstly understanding the specific and most important problems and needs of other domains, to effectively apply one’s expertise. For junior researchers, this involves strategically focusing on a publishable core contribution, ensuring clear communication, and prioritising critical aspects, like data quality, and incorporating a human in the loop, for responsible system deployment.” – Dr Javad Pool, CIRES Postdoctoral Research Fellow.

“Attending the session provided me with insights into the complexity of cross-disciplinary research and valuable lessons from experienced researchers. My main takeaways are ensuring we solve problems faced by domain experts and be ready to learn different things!” – Nova Sepadyati, UQ PhD Researcher

Huge thanks to our speakers for sharing their valuable experience with the group and to our CIRES Postdoc Team – Stanislav Pozdniakov, Javad Pool, Zixin Wang, and Xuwei (Ackesnal) Xu – for organising such a thought-provoking session!

Research Insight: Budgeted Causal Effect Estimation

Two excellent publications & research insights from the CIRES team at The University of Queensland including PhD researcher Hechuan Wen, CIs Dr. Rocky Chen, Prof. Hongzhi Yin, and Centre Director Prof. Shazia Sadiq, and colleagues. Thank you to our partners Dr. Li Kheng Chai and Health and Wellbeing Queensland for supporting this work. 

Delighted to share our two accepted works in FY2024 – 2025: “Progressive Generalization Risk Reduction for Data-Efficient Causal Effect Estimation” and “Enhancing Treatment Effect Estimation via Active Learning: A Counterfactual Covering Perspective”, accepted at KDD’25 and ICML’25, respectively.

Together with my supervisors and collaborators Dr. Rocky Chen, Dr. Li Kheng Chai, Dr. Guanhua Ye, A/Prof. Mingming Gong, Prof. Hongzhi Yin, and Prof. Shazia Sadiq, we study the theoretical foundations of budgeted causal effect estimation and propose a simple yet effective data acquisition scheme to “valuate” the unlabelled data and prioritise the budget spending on labelling the most informative data. Huge thanks to the ARC Training Centre for Information Resilience (CIRES), Health and Wellbeing Queensland, and The University of Queensland for supporting this work!

Read the papers: https://lnkd.in/gH4HJwg7 and https://lnkd.in/gp_7xtGj.

What we did
We identify optimizable quantities through rigorous theoretical analysis, which serve as guidelines to “valuate” the unlabelled data points and promote the efficiency of budget spending. That is, given a vast unlabelled data pool, the labelling budget can be spent where it is most effective when building up the dataset for causal effect estimation.

Key Insights
► The most valuable unlabelled pair (control and treated) is the one with the highest estimation variance and the smallest distance between the two points.
► The overall estimation risk (incalculable directly) can be well bounded (indirectly) by the computable terms, i.e., factual covering and counterfactual covering radii, to give theoretical groundings for unlabelled data valuation/selection.
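As a toy illustration of this valuation idea (our own simplification for intuition, not the papers' actual acquisition criterion; the scoring rule and all names here are assumptions), one could score each candidate control-treated pair by its estimation variance divided by the distance between the pair, then spend the labelling budget on the top-ranked pairs:

```python
import numpy as np

def pair_value(variance, distance, eps=1e-8):
    # Higher estimation variance and a smaller control-treated distance
    # both make a pair more worth labelling.
    return variance / (distance + eps)

def select_pairs(variances, distances, budget):
    """Rank candidate (control, treated) pairs and return the indices of the
    top-`budget` pairs to send for (costly) labelling."""
    scores = pair_value(np.asarray(variances, float), np.asarray(distances, float))
    return np.argsort(-scores)[:budget].tolist()

# Three candidate pairs: the third has high variance relative to how close
# its control and treated points are, so it is ranked first.
chosen = select_pairs(variances=[1.0, 2.0, 0.5],
                      distances=[1.0, 0.5, 0.1],
                      budget=2)
```

In the papers themselves, the value of a point is grounded in the theoretical quantities above (estimation variance and the factual/counterfactual covering radii) rather than in this simple ratio.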

Forward
The computational cost of running the proposed algorithm on a very large unlabelled pool is considerable, so future work on improving the algorithm’s scalability is worth exploring.

What opportunities or risks do you see in building up the dataset for model training from scratch?

Research Insight: AI Explainability & Transparency

Publication and research insights from the CIRES team at Swinburne University of Technology. Great work by PhD researcher Lufan Zhang and Chief Investigator Assoc. Prof Paul Scifleet. Thank you to our industry partner Astral for supporting this work. 

AI Explainability & Transparency in Enterprise Information Management (EIM)

Happy to share our paper “Charting the Transformation of Enterprise Information Management: AI Explainability and Transparency in EIM Practice” presented at the 16th International Conference on Knowledge Management and Information Systems.

Together with Paul Scifleet, we examine how AI (including Gen-AI) is reshaping the way enterprises manage critical information—and how to make these transformations more explainable and trustworthy for EIM practitioners.

Special thanks to ARC Training Centre for Information Resilience (CIRES), Swinburne University of Technology, and our industry partner Astral for supporting this work.

Read the full paper here: https://shorturl.at/xHSoO

Our Approach:
We conducted an environmental scan of 20 leading EIM vendor platforms, analysing their publicly available content to explore how AI is being integrated into EIM platforms. We examined AI’s role across five areas:
▶️ AI Development
▶️ AI Techniques
▶️ AI-integrated EIM Capabilities
▶️ AI Applications
▶️ AI Impacts on EIM Practice

Key Findings
AI Development:
Only 12 of 20 platforms disclose details about how AI is developed—leaving 40% with minimal transparency. Fewer than half provide adequate information on model training data, explainability features, or human-AI interaction.
AI Techniques:
Details on how specific AI techniques are applied across the EIM lifecycle are sparse. Vendors mostly use generic terms such as “AI” and “ML”, providing no useful information for determining which AI tools suit specific IM needs.
AI Applications & AI-integrated EIM Capabilities:
Many vendors highlight lower-level AI applications but do not fully connect these to higher-level EIM capabilities.
AI Impacts:
While benefits are frequently promoted, risks, limitations, and proven real-world outcomes are rarely addressed—raising concerns for transparency and trust.

Overall, these findings suggest a critical need for contextualised AI transparency in the EIM landscape, including both ✅ information transparency (disclosing information relevant to EIM practitioners) and ✅ transparency-in-use (intuitive user interfaces and a human in the loop). Together, these interdependent concepts enable better explainability, supporting understanding, trust, and adoption of AI applications by EIM practitioners.

Future Directions
⭐ Engage with IM practitioners to validate and refine the requirements for AI transparency within the EIM context.
⭐ Develop Explainable AI approaches that support human agency in real-world EIM practices.

How can we make AI more explainable to EIM practitioners to empower their daily IM practices?