Research Insight: AI in Learning Analytics

New publication from the CIRES team at The University of Queensland. Congratulations to PhD researcher Mehrnoush Mohammadi, Chief Investigators Assoc. Prof. Hassan Khosravi, Prof. Wojtek Tomaszewski, Centre Director Prof. Shazia Sadiq, and colleagues & co-authors Elham Tajik and Roberto Martinez-Maldonado. Full details & insights below.

“Delighted to share our newly published work, “Artificial Intelligence in Multimodal Learning Analytics: A Systematic Review,” in Computers & Education: Artificial Intelligence. We chart the evolving intersection of AI and Multimodal Learning Analytics (MMLA), providing the first comprehensive systematic review in this space. Huge thanks to the ARC Training Centre for Information Resilience (CIRES) for supporting this work!

What We Did: We reviewed 686 records (2019–2024), synthesising 43 peer-reviewed studies to develop a structured framework for integrating AI in the MMLA pipeline—from data collection and pre-processing to modelling and feedback.

Key Insights:
► AI is transforming MMLA’s modelling and analysis layers, but links to pedagogy and impact on learning remain underdeveloped.
► Research is concentrated in higher education and lab settings, with limited focus on early learning, diverse stakeholders, or ecological validity.
► While AI enhances real-time feedback and insight generation, challenges like small sample sizes, generalisability, and transparency persist.

Looking Ahead: With the rapid rise of generative AI, new opportunities are emerging to advance MMLA, enabling richer feedback, adaptive interventions, context-aware support, and deeper insights into human learning.

What opportunities or risks do you see in embedding AI across multimodal learning environments?
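
Purely as an illustrative sketch, not code from the paper: the four pipeline stages the post names, data collection, pre-processing, modelling, and feedback, might be skeletonised as follows. Every function, field, and threshold here is a hypothetical placeholder.

```python
# Illustrative MMLA pipeline skeleton; every name, field, and threshold below
# is a hypothetical placeholder, not something taken from the reviewed studies.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class MultimodalSample:
    """One learner observation combining several data streams."""
    learner_id: str
    modalities: Dict[str, List[float]] = field(default_factory=dict)


def collect(raw_streams: Dict[str, List[float]], learner_id: str) -> MultimodalSample:
    """Stage 1, data collection: bundle raw streams (e.g. clickstream, gaze) per learner."""
    return MultimodalSample(learner_id=learner_id, modalities=raw_streams)


def preprocess(sample: MultimodalSample) -> MultimodalSample:
    """Stage 2, pre-processing: min-max normalise each stream to [0, 1]."""
    cleaned = {}
    for name, values in sample.modalities.items():
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0
        cleaned[name] = [(v - lo) / span for v in values]
    return MultimodalSample(sample.learner_id, cleaned)


def model(sample: MultimodalSample) -> float:
    """Stage 3, modelling: stand-in 'engagement score' (mean of all normalised features)."""
    values = [v for stream in sample.modalities.values() for v in stream]
    return sum(values) / len(values) if values else 0.0


def feedback(score: float) -> str:
    """Stage 4, feedback: turn the model output into a learner-facing message."""
    return "On track, keep going!" if score >= 0.5 else "Consider revisiting the last activity."


if __name__ == "__main__":
    raw = {"clickstream": [3.0, 7.0, 5.0], "gaze_fixations": [120.0, 90.0, 150.0]}
    sample = preprocess(collect(raw, learner_id="L001"))
    print(feedback(model(sample)))
```

In real MMLA studies each stage is far richer (stream synchronisation, feature fusion, multimodal models), but the skeleton mirrors the stages the review organises its framework around.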


UQ AI PhD Showcase

On 26 and 27 June, the UQ AI Research Network, led by CIRES Director Prof Shazia Sadiq, hosted the 2025 UQ AI PhD Showcase at the St Lucia Campus. The event brought together over 40 PhD students from diverse disciplines, leading academics, industry experts, and members of the UQ community for two days of vibrant discussion, cutting-edge research presentations, and collaborative networking.

With AI continuing to reshape industries and societies worldwide, this showcase provided a timely platform for exploring the development, application, and implications of artificial intelligence across disciplines. From health and agriculture to digital safety and governance, the breadth of research on display highlighted UQ’s commitment to advancing responsible and impactful AI.

Special thanks to the organising team led by Dr Alina Bialkowski (Chair), CIRES CI Dr Rocky Chen, Dr Xin Yu, and Professor Shane Culpepper for an excellent event!


Tracking 20 Years of Coronavirus Research

New publication and dataset from the CIRES team at Swinburne University of Technology. Congratulations to Chief Investigator Assoc. Prof. Amir Aryani, Data Engineer Zhuochen Wu, Postdoc Dr Hui Yin, and all their colleagues and co-authors for this important piece of work. Full details below.

“Excited to share our new paper, “Coronavirus research topics, tracking twenty years of research”, published in Nature Scientific Data (June 2025). We have developed an AI-assisted pipeline to systematically catalogue and synthesise 800,000+ research articles on coronaviruses from 2002 to 2024. The result is a comprehensive dataset that organises this body of literature into key thematic clusters, from vaccine development to public health strategies and mental health impacts.

This work was supported by funding from Swinburne University of Technology and ARC Training Centre for Information Resilience (CIRES).

You can read the paper and access the dataset here.
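
The paper and dataset describe the authors' actual pipeline; purely as a generic sketch of the underlying idea, thematic clustering of article text, the snippet below groups a handful of made-up abstracts using TF-IDF features and k-means (scikit-learn assumed). None of it reflects the published method or data.

```python
# Generic topic-clustering sketch (NOT the authors' pipeline): group article
# abstracts into thematic clusters with TF-IDF features and k-means.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical stand-in abstracts; the real dataset covers 800,000+ articles.
abstracts = [
    "mRNA vaccine efficacy against SARS-CoV-2 variants",
    "Booster dose immunogenicity in older adults",
    "Lockdown policies and public health strategy evaluation",
    "Contact tracing effectiveness during community transmission",
    "Anxiety and depression among healthcare workers in the pandemic",
    "Long-term mental health impacts of prolonged isolation",
]

# Turn each abstract into a sparse TF-IDF vector.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

# Cluster into three themes (e.g. vaccines, public health, mental health).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

for cluster_id in sorted(set(labels)):
    members = [a for a, l in zip(abstracts, labels) if l == cluster_id]
    print(f"Cluster {cluster_id}:")
    for title in members:
        print(f"  - {title}")
```

At the scale of the actual corpus, richer text representations and human validation of cluster labels would be needed; the sketch only shows the clustering step in miniature.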


Research Insight: Online AI Systems

Delighted to share this research insight and latest work from CIRES PhD researcher Hongliang Ni at The University of Queensland, supervised by Chief Investigator Prof. Gianluca Demartini. The work proposes a research framework for operationalising harmlessness in foundation models. Full details below & link to paper: https://lnkd.in/geQrmj3i

“Excited to share my recent work, “Operationalising Harmlessness in Online AI Systems”, presented at ACM Web Conference 2025! Grateful for the guidance of my supervisor Professor Gianluca Demartini and the support of the ARC Training Centre for Information Resilience (CIRES). This project tackles the growing challenge of ensuring fairness, accountability, and legal compliance in foundation models, especially when operating under black-box constraints or without sensitive user data.

Approach:
We propose an operationalising research framework for harmlessness, structured around three core questions:
► How do foundation models amplify dataset harm?
► How can we ensure harmlessness under strict black-box settings?
► What proactive explanation strategies meet legal and societal expectations?

Key Insights:
► Even with inclusive training data, foundation models exhibit performance disparities—our framework investigates dataset–model interactions to guide ethical design.
► Traditional fairness methods often require access to sensitive data. We propose Reckoner, a two-stage learning framework that ensures fairness without this requirement.
► We highlight the need for proactive algorithmic accountability, balancing transparency with IP protection and model security.

Future Directions:
► Build compliance-ready harmlessness techniques tailored for real-world AI deployment.
► Enable dual-perspective fairness guidance—supporting both upstream model developers and downstream users.

Scan the poster QR code to learn more.
How do you think we can ensure accountable and fair AI under real-world legal constraints?