AJCAI Workshop: Towards Resilient AI Systems
The ARC Training Centre for Information Resilience (CIRES) is proud to host the Towards Resilient AI Systems workshop, led by Postdoctoral Research Fellow Dr Zixin Wang.
This workshop is part of the Australasian Joint Conference on Artificial Intelligence (AJCAI) 2025 in Canberra.
Artificial Intelligence systems are increasingly deployed in critical domains, from healthcare to finance, where reliability and robustness are paramount. Yet, as these systems grow in complexity, they face challenges such as adversarial attacks, data drift, and ethical risks that can undermine trust and performance.
This workshop brings together four researchers from The University of Queensland to explore strategies for building resilient AI systems—systems that can withstand uncertainty, adapt to changing environments, and maintain integrity under stress.
Key Themes:
- Robustness and Reliability: Techniques for mitigating vulnerabilities and ensuring consistent performance.
- Human-Centric Resilience: Incorporating transparency, fairness, and accountability into AI design.
- Adaptive and Secure AI: Approaches for handling adversarial conditions and evolving data landscapes.
- Evaluation and Benchmarking: Metrics and frameworks for assessing resilience in real-world applications.
Program Highlights:
- Expert talks on cutting-edge research in resilient AI.
- Interactive panel discussion covering practical challenges and solutions.
- Networking opportunities.
Join us to shape the future of trustworthy AI systems that serve society responsibly.
Workshop: Towards Resilient AI Systems
Presentations
Dr Zixin Wang, ARC Training Centre for Information Resilience (CIRES), The University of Queensland
Talk title: Navigating Data Challenges in Responsible AI: Conversational Tools for Bias and Drift Detection
Abstract: As AI systems are increasingly deployed in high-stakes domains, the quality and structure of underlying data remain a major source of risk. In this talk, I present two lightweight, LLM-powered tools — BiasNavi and DriftNavi — designed to support practitioners in identifying and addressing data issues. BiasNavi guides users through a multi-stage, conversational pipeline to uncover and mitigate bias in tabular datasets, with domain-personalized explanations and interactive workflows. DriftNavi focuses on distributional drift between training and deployment data, enabling non-ML experts to detect and reason about shift using model-free, explainable techniques. Together, these tools explore how conversational interfaces, modular workflows, and adaptive explanations can empower end-users to critically assess data quality, fairness, and reliability, without requiring deep ML expertise.
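For readers less familiar with drift analysis, the sketch below shows the kind of simple, model-free, per-feature check that such tools build on: a two-sample statistical test comparing a training column against its deployment counterpart. This is an illustrative sketch only, not DriftNavi's actual implementation; the function name, significance threshold, and synthetic data are placeholder assumptions.

```python
# Illustrative sketch only: a minimal, model-free drift check on one numeric
# feature. Function name and threshold are hypothetical placeholders.
import numpy as np
from scipy import stats

def detect_feature_drift(train_col, deploy_col, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test between training and deployment data."""
    statistic, p_value = stats.ks_2samp(train_col, deploy_col)
    return {"ks_statistic": statistic, "p_value": p_value, "drifted": p_value < alpha}

# Example: deployment data shifted in mean relative to training data.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)
deploy = rng.normal(loc=0.5, scale=1.0, size=5_000)
print(detect_feature_drift(train, deploy))  # reports drifted: True
```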
Dr Xuwei Xu, ARC Training Centre for Information Resilience (CIRES), The University of Queensland
Talk title: Efficient Vision Transformers via Token-wise and Channel-wise Complexity Reductions
Abstract: In recent years, the Vision Transformer (ViT) has achieved remarkable success across various computer vision tasks. However, its strong feature representation capability comes at the cost of high computational complexity, hindering the wide deployment of ViTs in real-world scenarios where computing resources are usually constrained. Optimising the balance between model efficiency and performance has therefore become a critical research challenge in the study of ViTs. Traditional efficient ViT methods, such as token pruning and network pruning, often discard essential image-wise or channel-wise information, thereby compromising model performance and generalisability. We aim to preserve or enrich information for efficient ViTs from both image-wise and channel-wise perspectives with negligible computational overhead, thus offering a more practical and scalable solution for real-world ViT applications.
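As a rough illustration of the token-wise side of this research area, the sketch below drops low-importance patch tokens so that subsequent transformer blocks process fewer tokens. It is a generic example of token reduction, not the specific methods presented in the talk; the norm-based scoring rule and the keep ratio are placeholder assumptions.

```python
# Illustrative sketch only: generic token reduction for a ViT, keeping the
# top-k patch tokens by a simple importance score.
import torch

def prune_tokens(tokens: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """tokens: (batch, num_tokens, dim), with the [CLS] token at index 0.

    Scores patch tokens by their L2 norm (a crude proxy for importance)
    and keeps only the top-k, so later transformer blocks see fewer tokens.
    """
    cls_token, patch_tokens = tokens[:, :1], tokens[:, 1:]
    scores = patch_tokens.norm(dim=-1)                      # (batch, num_patches)
    k = max(1, int(patch_tokens.shape[1] * keep_ratio))
    top_idx = scores.topk(k, dim=1).indices                 # (batch, k)
    top_idx = top_idx.unsqueeze(-1).expand(-1, -1, patch_tokens.shape[-1])
    kept = torch.gather(patch_tokens, 1, top_idx)           # (batch, k, dim)
    return torch.cat([cls_token, kept], dim=1)

x = torch.randn(2, 197, 768)                  # e.g. ViT-B/16: 196 patches + [CLS]
print(prune_tokens(x, keep_ratio=0.5).shape)  # torch.Size([2, 99, 768])
```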
Haodong Hong, The University of Queensland
Talk title: Towards Intelligent and Generalisable Agents for Vision-and-Language Navigation
Abstract: Vision-and-Language Navigation (VLN) serves as a cornerstone of embodied AI, aiming to develop agents that can interpret natural language instructions and navigate complex environments. Despite notable progress, existing formulations often operate under simplified assumptions that diverge from real-world conditions, such as fixed, obstruction-free navigation graphs, one-time executions without continuous adaptation, and neglect of earlier embodied stages such as exploration and representation construction. I will introduce a series of our works addressing these gaps. First, we introduce a dataset that incorporates real obstacles such as closed doors and blocked paths, together with a training strategy that helps agents handle mismatches between instructions and actual scenes. Second, we define a new problem setting that emphasizes continuous adaptation in environments with consistent layouts but diverse instruction styles, and construct an extended dataset to support systematic evaluation. Third, we develop the first benchmark that unifies exploration, representation building, and navigation into a single embodied task, along with a strong baseline that highlights the importance of 3D scene understanding for following complex instructions. Together, these advances strengthen VLN’s robustness, adaptability, and scalability, moving embodied navigation closer to practical deployment.
Zhuoxiao (Ivan) Chen, The University of Queensland
Talk title: Generalising 3D Object Detection to Shifted Scenes via Data-Centric Learning and Adaptation
Abstract: This presentation explores generalizing 3D object detection to shifted and corrupted scenes through data-centric learning and adaptation strategies. Real-world deployment of 3D detectors faces significant performance degradation due to environmental shifts (e.g., cross-dataset variations), sensor failures, and severe weather conditions. To address these challenges, we propose three complementary approaches: (1) CRB, an active learning framework that reduces annotation costs by selecting the most informative point clouds for labeling; (2) ReDB, a domain adaptation method that leverages reliable, diverse, and class-balanced pseudo-labeling for offline adaptation to new environments without manual annotations; and (3) DPO and MOS, test-time adaptation frameworks that enable on-the-fly model updates during deployment to handle real-time shifts and corruptions. Together, these methods provide a comprehensive solution for deploying robust 3D detection systems in dynamic real-world scenarios.
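For a concrete flavour of test-time adaptation in general, the sketch below performs a generic entropy-minimisation update on unlabeled deployment batches, in the spirit of methods such as TENT. It is not the DPO or MOS frameworks described in the talk; the model, optimizer, and data-stream names are placeholder assumptions.

```python
# Illustrative sketch only: a generic entropy-minimisation test-time
# adaptation step, not the DPO or MOS methods presented in the talk.
import torch

def test_time_adapt_step(model, batch, optimizer):
    """One online adaptation step on an unlabeled deployment batch.

    Only normalisation-layer affine parameters should be passed to the
    optimizer; the rest of the model stays frozen.
    """
    logits = model(batch)                       # (batch_size, num_classes)
    probs = logits.softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()

# Usage sketch (placeholder names): adapt only BatchNorm affine parameters.
# bn_params = [p for m in model.modules()
#              if isinstance(m, torch.nn.BatchNorm2d) for p in m.parameters()]
# optimizer = torch.optim.SGD(bn_params, lr=1e-3)
# for batch in deployment_stream:              # unlabeled deployment batches
#     test_time_adapt_step(model, batch, optimizer)
```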
Panel discussion: Information Resilience in AI/ML: Early-Career Views from Industry and Academia
Moderator: Dr Zixin Wang
Panel Members: Dr Xuwei Xu, Haodong Hong, Zhuoxiao (Ivan) Chen
Dr Zixin Wang
CIRES, The University of Queensland
Dr. Zixin Wang is a Postdoctoral Research Fellow at The University of Queensland, working within the ARC Training Centre for Information Resilience (CIRES). Her research focuses on domain adaptation and test-time adaptation in computer vision. She works on conversational, modular tools for responsible data workflows, including BiasNavi for tabular bias detection and DriftNavi for distribution shift analysis. She is also actively involved in academic service.
Dr Xuwei Xu
CIRES, The University of Queensland
Dr. Xuwei Xu is a Postdoctoral Research Fellow at The University of Queensland. His research focuses on efficient neural networks and vision transformers, aiming to achieve a better trade-off between efficiency and performance. He has proposed token reduction approaches that lower the computational complexity of vision transformers and has helped develop advanced knowledge distillation methods that improve the performance of lightweight models. His research has contributed to deploying high-performing deep learning models on edge devices. He is currently working with the Queensland Children’s Hospital on a dynamic prediction system for the paediatric intensive care unit (PICU).
Haodong Hong
The University of Queensland
Mr. Haodong Hong is a PhD student in the Data Science group at the School of Electrical Engineering and Computer Science, The University of Queensland (UQ), Australia. He received his Bachelor’s degree in Electronic Engineering from Tsinghua University. His research focuses on multimodal learning, embodied agents, and vision-and-language navigation, under the supervision of Associate Professor Sen Wang and Associate Professor Jiajun Liu.
Zhuoxiao (Ivan) Chen
The University of Queensland
Zhuoxiao (Ivan) Chen is an AI Scientist Intern at Oracle, where he develops large vision-language models tailored to healthcare data. Concurrently, he is a final-year PhD candidate at The University of Queensland (UQ), Australia, supervised by Dr. Yadan Luo and Prof. Helen Huang, focusing on model generalization for 3D scene understanding. He has published in top-tier venues including NeurIPS, ICCV, and ACM MM, with an oral paper at ICLR, as well as in top journals such as TPAMI and IJCV.
Contact Us
Dr Zixin Wang
ARC Training Centre for Information Resilience (CIRES)
The University of Queensland
zixin.wang@uq.edu.au