Postdoctoral Research Fellow position at UQ

Applications close 1st February 2026


We are recruiting for a Level A Postdoctoral Research Fellow to join us in the ARC Training Centre for Information Resilience (CIRES) at The University of Queensland in sunny Brisbane, Australia.

This position will contribute to cutting-edge research on AI preparedness and the evaluation of AI deployments across public and private sectors. You’ll develop frameworks to assess organisational and data barriers to AI adoption, evaluate implementation outcomes, and analyse the impact of emerging AI policies, standards, and governance guidelines.

This role is mainly based at the ARC Training Centre for Information Resilience (CIRES) at The University of Queensland, working closely with leading experts and industry partners. You’ll be supported by senior academics, with opportunities for increasing autonomy over time.

We are seeking a candidate with:

  • Completion or near completion of a PhD in Computer Science, Data Science, Information Systems or a related field.
  • Demonstrated expertise in empirical methods, proficiency in data analysis, and deep knowledge of contemporary AI tools and techniques.
  • Experience in conducting research projects, demonstrating high-level written and oral communication skills.
  • Peer-reviewed publications in high-impact journals or premier conferences.
  • Ability to work both independently and as a member of a cross-disciplinary research team.

See the full position description and how to apply:

Applications close Sunday 1 February 2026 at 11:00pm AEST.

This position will be based in the Australian Research Council (ARC) Industrial Transformation Training Centre for Information Resilience (CIRES) at UQ, working across multiple projects with industry and government partners and providing a wealth of experience in multi-disciplinary teams, research planning, and industry and public sector dynamics. The successful candidate will have the opportunity to work directly with the Centre’s partners, with an expected third of their time dedicated to working with partner organisations.

Questions? For more information about this opportunity, please contact Professor Shazia Sadiq and Professor Marta Indulska.

CIRES @ AJCAI-25: Towards Resilient AI Systems

Great experience representing CIRES at AJCAI 2025 in Canberra!

On 2 December, CIRES hosted both a workshop and a panel discussion on resilient and responsible AI systems. A special highlight was having Dr Yanbin Liu join our panel; his insights added great depth to the discussion!

Thanks to Xuwei (Ackesnal) Xu, Haodong Hong and Zhuoxiao Chen for their excellent presentations, and to everyone who attended our workshop.

And special thanks to the organiser, CIRES Postdoctoral Researcher Dr Zixin Wang, for her dedication and excellent coordination.

UQ Guide on Responsible Use of Generative AI in Research

CIRES Centre Director, Professor Shazia Sadiq, contributed to creating the UQ Guide on Responsible Use of Generative AI in Research, available to all staff at The University of Queensland. The Guide outlines risks and opportunities across the research lifecycle—idea generation, literature review, hypothesis development, coding, data analysis, manuscript preparation, and visualisation. It includes discipline-specific examples, guidance for grant applications (ARC/NHMRC), disclosure requirements, and an appendix on data handling, copyright, and privacy implications.

UQCEAI Workshop: Proof of Concept to Production for Enterprise AI

On 12th November 2025, the UQ Centre for Enterprise AI hosted the workshop “Getting from Proof of Concept to Production for Enterprise AI.” With over 50 participants joining from industry, government, and academia, the conversation focused on the systematic challenges facing organisations in implementing Enterprise AI at scale, and the key measures to consider when evaluating success in AI adoption and integration. The panel discussion, moderated by Nicole Hartley, explored unlocking AI value beyond the POC stage and what it takes to get there. Panel members were Ryan van Leent (SAP), Nathan Bines (Qld Gov’t), Ida Asadi (UQBS) and Joel Mackenzie (EECS), with the event led by CIRES Centre Director, Prof Shazia Sadiq, and CIRES Centre Research Director, Prof Marta Indulska (photo below). More details here, and you can find out more about the Centre here.

QPS Partner at International Association of Chiefs of Police Conference

CIRES Partner Investigator and Queensland Police Service (QPS) staff member Mr Nicholas Moss attended the International Association of Chiefs of Police (IACP 2025) Annual Conference and Exposition in Denver, Colorado, USA, 18-21 October 2025. Nicholas presented results from the CIRES-QPS project on community engagement on the use of data. This project aims to understand community attitudes towards data analytics, particularly in policing.

UQ AI Conference Dry Run Seminar Series

CIRES Centre Director, Professor Shazia Sadiq, participated as a panellist at the:

UQ AI Conference Dry Run Seminar Series

(Part of EECS Data Science Discipline HDR Student Support)

📍 Date & Time: 13 October 2025, 1:00–4:00 PM

📍 Location: 46-402 Seminar/Board Room (Andrew Liveris Building)

About the Event…

The UQ AI Conference Dry Run Seminar Series is an initiative launched by the UQ EECS Data Science Discipline to support our HDR student community. This inaugural event marks the beginning of a recurring seminar series designed to help our students shine on the global stage.

The goals of the series are to:

  • Practice presentations: Give HDR students the chance to rehearse their talks for top international conferences.
  • Receive constructive feedback: Present in a supportive environment and gain insights from peers and faculty that can sharpen delivery and content.
  • Showcase UQ research: Share cutting-edge work with a broader audience, raising the visibility of the exciting contributions coming from UQ.
  • Build community: Strengthen connections across research groups and foster collaboration through shared discussion and networking.

Format: 10-minute presentation + 5-minute Q&A session

This series will serve as a training ground for students preparing to present at major venues, while also offering the UQ community a front-row seat to the innovative AI research being developed within our faculty.

UQ-UZH Symposium and Public Lecture

On 7-8 October, CIRES CI Professor Gianluca Demartini participated in the UQ-University of Zurich (UZH) Symposium: Challenges and Opportunities for Social and System Change, presenting an update on the joint UQ-UZH project on Digital Deliberative Democracy (d3-project.ch) funded by the Swiss National Science Foundation, as well as serving as the expert responder for the UQ-UZH public lecture A Toolbox for Human-AI Collaboration in Lifespan Health Analysis.

Empowering Learners in the Age of AI

On 8 October, CIRES CI A/Prof Hassan Khosravi moderated a panel as part of the free 2025 event on Empowering Learners in the Age of AI.

Hassan moderated this panel with Professor Jason M. Lodge (The University of Queensland), Dr Aneesha Bakharia (The University of Queensland) and Peter Xing (Microsoft) as distinguished panellists, bringing deep expertise and diverse perspectives to this important conversation.

Why this matters
The use of Large Language Models (LLMs) has demonstrated clear performance gains for students. Yet performance is only part of the story. Scholars caution that polished outputs may come at the expense of genuine learning, as students risk offloading critical thinking and problem-solving to AI. This panel explores how we can move beyond productivity to design AI learning companions that prioritise learning gains over performance gains, nurturing curiosity, understanding, and deeper engagement. 

Paper: Understanding failures in health data protection

Many health data breaches aren’t just caused by hackers. Inadequate processes and irresponsible use of health data often create opportunities for serious cybersecurity incidents. In our study, experts recounted staff admitting, “I didn’t know, nobody told me,” or using personal Gmail for sensitive communications. One cybersecurity expert observed, “[Healthcare ecosystems] are not good at these things [data protection by design]. We say that they don’t bake in security; they just bake the cake and spray on some [cyber]security.”

Our mixed-methods study, published in Behaviour & Information Technology (open access), explored these critical vulnerabilities in health data protection. We gathered insights from cybersecurity and privacy experts across 14 countries, including CISOs, IT security officers, researchers, privacy managers, and Data Protection Officers.

We identified 30 failure factors and, using the People-Process-Technology framework, unpacked the top seven:

  • People: non-compliant behaviour, and lack of cybersecurity awareness
  • Process: inadequate risk management, weak data integrity monitoring, and a lack of breach response and recovery plans
  • Technology: insecure third-party applications, and a lack of data protection by design

These factors often interlink, creating complex vulnerabilities. With the growing adoption of big data analytics and AI in healthcare, understanding these failure points is crucial. Our model offers actionable insights for healthcare organisations to strengthen data protection, develop mitigation policies, and reduce the risk of breaches, ensuring safer care and maintaining trust.

Towards a model for understanding failures in health data protection: a mixed-methods study, Javad Pool, Saeed Akhlaghpour, Farkhondeh Hassandoust, Farhad Fatehi & Andrew Burton-Jones

AI Horizons – Conversations with Australia’s leading and emerging researchers

On 22 September, CIRES Centre Director Prof. Shazia Sadiq FTSE hosted AI Horizons – Conversations with Australia’s leading and emerging researchers, an inspiring event organised by the Australian Academy of Technological Sciences & Engineering (ATSE). The event brought together brilliant minds in AI—from established experts to rising stars—to explore the future of artificial intelligence in Australia.

Speakers: Dr Sue Keay FTSE (UNSW AI Institute); Dr Scarlett Raine and Prof. Michael Milford FTSE (QUT Centre for Robotics); and Hung Lee and Distinguished Prof. Svetha Venkatesh FTSE FAA (Deakin University).

Watch on YouTube: AI horizons – Conversations with Australia’s leading and emerging researchers

 

 

Research Insight: New Approach for Irregular Time Series in Healthcare AI

EMIT: A New Approach for Irregular Time Series in Healthcare AI

Excited to share our paper “EMIT: Event-Based Masked Auto Encoding for Irregular Time Series”, published at ICDM 2024. Together with A/Prof. Sen Wang, Dr Ruihong Qiu, A/Prof. Adam Irwin and Prof. Shazia Sadiq, we explore how irregular time series (like vital signs and lab results recorded at uneven intervals) challenge existing AI models and how our proposed framework, EMIT, improves clinical decision support through better representation learning. Special thanks to CIRES, Queensland Health and The University of Queensland for supporting this research.

Read full paper at https://arxiv.org/pdf/2409.16554

 

Our Approach
We introduce EMIT, a pretraining framework based on the transformer architecture and tailored to irregular clinical time series data. EMIT learns by (a minimal sketch follows this list):

  • Finding important points (events) in the irregular time series
  • Pretraining by masking and predicting those points
  • Using the pretrained model for any downstream task (e.g., outcome prediction)
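
To make the masking step concrete, here is a minimal sketch in PyTorch of what event-based masked pretraining can look like. It is an illustration under our own assumptions: the EventMaskedEncoder class, the change-based heuristic used to pick “events”, the (value, time-gap) input encoding, and all dimensions are invented for the example and are not the exact EMIT architecture from the paper.

```python
# Hedged sketch only: the class, the "event" heuristic and all sizes are illustrative
# assumptions, not the authors' exact EMIT design.
import torch
import torch.nn as nn

class EventMaskedEncoder(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        # Embed (value, time-gap) pairs so irregular spacing stays visible to the model.
        self.input_proj = nn.Linear(2, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.recon_head = nn.Linear(d_model, 1)  # reconstructs the masked values

    def forward(self, values, gaps, mask):
        x = self.input_proj(torch.stack([values, gaps], dim=-1))
        x = torch.where(mask.unsqueeze(-1), torch.zeros_like(x), x)  # hide the "events"
        return self.recon_head(self.encoder(x)).squeeze(-1)

def event_mask(values, top_frac=0.3):
    """Mark the observations with the largest change from the previous one as events."""
    change = torch.zeros_like(values)
    change[:, 1:] = (values[:, 1:] - values[:, :-1]).abs()
    k = max(1, int(top_frac * values.size(1)))
    idx = change.topk(k, dim=1).indices
    return torch.zeros_like(values, dtype=torch.bool).scatter(1, idx, True)

# Toy pretraining step on random "irregular" sequences (8 patients, 20 observations each).
values = torch.randn(8, 20)                            # e.g. one vital sign per patient
times = torch.sort(torch.rand(8, 20), dim=1).values    # irregular observation times
gaps = torch.diff(times, dim=1, prepend=times[:, :1])  # time since previous observation
mask = event_mask(values)

model = EventMaskedEncoder()
pred = model(values, gaps, mask)
loss = ((pred - values)[mask] ** 2).mean()             # reconstruct only the masked events
loss.backward()
```

In a full pipeline, the pretrained encoder would be retained and the reconstruction head swapped for a task-specific head when fine-tuning on a downstream task such as outcome prediction.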

Key Findings

Improved Representation Learning: EMIT captures important variations without losing timing information, outperforming generic pretext approaches for irregular time series.

Data Efficiency: On benchmark healthcare datasets (MIMIC-III & PhysioNet Challenge 2012), EMIT achieved strong results using only 50% of labeled data, reducing reliance on costly annotations.

Task Relevance: By designing pretext tasks specific to irregular time series, EMIT delivers more reliable clinical predictions compared to standard forecasting approaches.

How can we design AI that adapts to the messy, irregular reality of clinical data while still delivering trustworthy predictions?

Made in Australia – Our AI Opportunity

On 22 August, the Australian Academy of Technological Sciences & Engineering (ATSE) released Made in Australia – Our AI Opportunity, a bold action statement co-authored by CIRES Centre Director Shazia Sadiq and CIRES Strategy Board Member Sue Keay, calling for strategic investment in sovereign AI capability. The report proposes a mission-based approach, including the creation of AI factories—regional hubs for talent, research, and innovation—to ensure Australia’s position as a global leader in safe, sustainable, ethical, and high-impact sovereign AI.

Read the Made in Australia AI Action Statement.

Artificial intelligence (AI) is radically reshaping work, education and security in Australia, and is officially recognised as a critical technology in government policy. How we harness it will impact the nation’s economic prosperity, national security and continuing innovation.

The global race to build AI capabilities is accelerating, and it is incumbent on us to harness our comparative advantages and secure control of our data and digital systems. Without timely and comprehensive public and private investments in sovereign AI capability, Australia runs the risk of becoming dependent on foreign technology providers with their own commercial and national interests.

Australia already has the ingredients to develop sovereign AI capability, and is ready to leverage these, with appropriate government leadership and investment. ATSE proposes a mission-based approach, with AI factories located across Australia: the jewel in the AI crown around which talent and partnerships will develop. This statement outlines how targeted investment in a strong national AI capability can position Australia as a global leader in safe, sustainable, ethical and high-impact sovereign AI. It shows how these investments will give us the autonomy we want as a nation whilst enhancing productivity and preparing the nation for future transitions in manufacturing and knowledge work, unlocking value across the entire economy.

This statement builds on ATSE’s 2022 vision statement, Strategic Investment in Australia’s Artificial Intelligence Capacity.

CIRES is 4!

On 20 August 2025 we celebrated our Centre’s 4th birthday with UQ and Swinburne colleagues, and four years of research collaboration, impactful partnerships, and a growing community dedicated to building a more resilient, inclusive, and ethical digital future. Since our launch in 2021, CIRES has:

  • Delivered cutting-edge research in human-centred AI and information resilience
  • Fostered strong collaborations across academia, industry, and government
  • Supported the next generation of researchers and innovators
  • Helped shape national conversations on responsible technology

We reviewed our 2021-2025 YTD performance stats (see pics), and after that effort, we thought we definitely deserved two cakes to celebrate.

From our Director, Professor Shazia Sadiq FTSE: “CIRES was founded with a bold vision — to reduce socio-technical barriers to data driven transformation. Four years on, I’m proud of how far we’ve come and grateful to our team and collaborators who continue to pursue our mission of Information Resilience.”

We’re proud of what we’ve achieved — and even more excited about what’s ahead, including our first PhD graduates. Thank you to our researchers, partners, and supporters who have been part of this journey.

 

Research Insight: Personally Identifiable Information (PII)

CIRES PhD Researcher Pa Pa Khin travelled to Canada in August to present at AMCIS 2025 on the challenges industries face in identifying and managing Personally Identifiable Information (PII) within Systems of Engagement. It was a great opportunity to emphasise the importance of controlling the sensitive and vital information we share and use, whether informally, ad hoc, or formally, across diverse collaboration and communication systems, and of the value it can create. Pa Pa’s work introduces a foundational framework, a locus for control, with five key elements.

“I am happy to share that our paper, “From Chaos to Clarity: Identifying and Managing Personally Identifiable Information in Systems of Engagement”, was presented at the Americas Conference on Information Systems (AMCIS) 2025, organised by the Association for Information Systems. Together with A/Prof Paul Scifleet, we explore the significant challenges organisations face in identifying and managing Personally Identifiable Information (PII) within Systems of Engagement, as described in the current industry discourse. Based on our findings, we develop a locus for control for sensitive and vital information management with five key elements in place: (i) the identification and location of information assets, (ii) their traceability, (iii) protection and security, (iv) compliance and governance, and (v) use and value creation.”

Thanks to CIRES and our industry partner Astral for supporting this work. 

Daisy Xu 2025 DeSanctis Award Winner

A huge congratulations to our CIRES PhD Researcher, Daisy Xu, who is the 2025 DeSanctis Award Winner presented by the Communication, Digital Technology, and Organization (CTO) division of the Academy of Management. This award recognises outstanding scholarship in the area of communication and digital technology, specifically for a solo-authored conference paper based on a recent dissertation.

Research Insight: Knowledge Tracing

Congratulations to our PhD researcher Mehrnoush Mohammadi who recently presented at the 26th International Conference on Artificial Intelligence in Education (AIED2025) 22-26 July 2025 in Palermo, Italy. This year’s conference theme was AI as a catalyst for inclusive, personalised, & ethical education, to empower teachers & students for an equitable future. Full details and link to paper below.

“I had the opportunity to present a poster of our accepted paper: “Knowledge Tracing with A Temporal Hypergraph Memory Network”.

Research Spotlight: This work presents THMN, a Temporal Hypergraph Memory Network, a hybrid Knowledge Tracing model that combines memory-augmented networks with temporal hypergraph reasoning to capture dynamic, high-order concept interactions over time. By modeling how a student’s understanding of concepts shifts across diverse question contexts and scaling updates based on practice diversity, THMN delivers composition-aware, interpretable predictions and consistently outperforms state-of-the-art KT models across four benchmark datasets.

It was an incredible experience connecting with researchers, exchanging ideas, and sharing our work with the global AI in Education community.

Special thanks to my amazing co-authors Dr Kamal Berahmand, CIRES Centre Director Prof. Shazia Sadiq, and CIRES Chief Investigator Dr Hassan Khosravi, for their incredible collaboration, and to the AIED community for the warm welcome and insightful feedback.”
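
As background on the family of models THMN extends, the sketch below shows a minimal, generic memory-augmented knowledge-tracing loop in PyTorch. It is deliberately not THMN: there is no temporal hypergraph reasoning here, and the class name, dimensions, and update rule are our own illustrative assumptions. It only shows the read-predict-write cycle over a per-concept memory that such models build on.

```python
# Generic memory-augmented knowledge-tracing skeleton (illustrative only, not THMN).
import torch
import torch.nn as nn

class MemoryKT(nn.Module):
    def __init__(self, n_concepts, dim=32):
        super().__init__()
        self.concept_emb = nn.Embedding(n_concepts, dim)                 # static concept keys
        self.init_memory = nn.Parameter(torch.zeros(n_concepts, dim))    # learned initial mastery state
        self.update = nn.GRUCell(dim + 1, dim)                           # folds (concept, correctness) into memory
        self.predict = nn.Linear(2 * dim, 1)

    def forward(self, concepts, answers):
        # concepts: (batch, T) long indices; answers: (batch, T) floats in {0, 1}
        B, T = concepts.shape
        memory = self.init_memory.unsqueeze(0).expand(B, -1, -1).contiguous()
        preds = []
        for t in range(T):
            key = self.concept_emb(concepts[:, t])                                   # (B, dim)
            slot = concepts[:, t].view(B, 1, 1).expand(B, 1, key.size(-1))
            read = memory.gather(1, slot).squeeze(1)                                 # read current mastery
            preds.append(torch.sigmoid(self.predict(torch.cat([key, read], dim=-1))).squeeze(-1))
            # Write: update the memory slot of the practised concept with the observed answer.
            new_slot = self.update(torch.cat([key, answers[:, t:t + 1]], dim=-1), read)
            memory = memory.scatter(1, slot, new_slot.unsqueeze(1))
        return torch.stack(preds, dim=1)  # probability the student answers each step correctly

# Toy usage: 5 concepts, sequences of 10 interactions for 4 students.
model = MemoryKT(n_concepts=5)
concepts = torch.randint(0, 5, (4, 10))
answers = torch.randint(0, 2, (4, 10)).float()
prob_correct = model(concepts, answers)               # (4, 10)
loss = nn.functional.binary_cross_entropy(prob_correct, answers)
loss.backward()
```

As described in the spotlight above, THMN augments this kind of memory with temporal hypergraph reasoning so that dynamic, high-order interactions between concepts can shape how the memory is read and updated.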

Kingston AI Group plans continuing advocacy for Australia

On 24th July, CIRES Centre Director, Professor Shazia Sadiq, and the UQ Centre for Enterprise AI hosted the Kingston AI Group meeting at The University of Queensland’s St Lucia campus.

The meeting allowed members to come together and strategise their advocacy for Australian AI sovereign capability, discuss emerging AI fields, and identify the clearest, most effective ways to drive the group’s engagement and advocacy with the federal government, including the Prime Minister.

Mike Bareja, Director of Digital Technologies, AI, Cyber and Future Industries at the Business Council of Australia (BCA), was in attendance, leading an informative discussion on BCA’s take on the role of AI in Australia’s corporate landscape. He also provided an overview of “Accelerating Australia’s AI Agenda,” a BCA report released in June that has been enthusiastically endorsed by the Kingston AI Group.

Much of the meeting was spent discussing AI’s impact on the economy as well as ways to increase investment in AI R&D in Australia. Members also discussed the importance of protecting Australian culture and values in AI, and increasing awareness of Australia’s burgeoning AI industry while raising the skills and capabilities of those working within it.

To read more, please visit ➡️ https://lnkd.in/ggpAkM-Z

Photo caption: Participants and visitors at the 24 July Kingston AI Group meeting in Brisbane (L-R): Dr Nisha Schwarz, Dr Rocky Chen, Prof Shazia Sadiq, Dr Joel Mackenzie, Dr Kathy Nicholson, Prof Shane Culpepper, Dr Sue Keay, Dr Paul Dalby, Prof Michael Milford, Prof Simon Lucey, BCA’s Mike Bareja, Prof Anton van den Hengel, Prof Benjamin Rubinstein, Prof Stephen Gould, Prof Ian Reid, Prof Ajmal Mian, and Prof Marta Indulska.

Not pictured but in attendance: Prof Joanna Batstone, Prof Dana Kulic, and Prof Toby Walsh FAA FTSE FRSN.