News
Research Insight: Online AI Systems
Delighted to share this research insight and latest work from CIRES PhD researcher Hongliang Ni at The University of Queensland, supervised by Chief Investigator Prof. Gianluca Demartini. The paper proposes a research framework for operationalising harmlessness in foundation models. Full details below & link to paper: https://lnkd.in/geQrmj3i
“Excited to share my recent work, “Operationalising Harmlessness in Online AI Systems”, presented at ACM Web Conference 2025! Grateful for the guidance of my supervisor Professor Gianluca Demartini and the support of the ARC Training Centre for Information Resilience (CIRES). This project tackles the growing challenge of ensuring fairness, accountability, and legal compliance in foundation models—especially when operating under black-box constraints or without access to sensitive user data.
Approach:
► We propose a research framework for operationalising harmlessness, structured around three core questions:
► How do foundation models amplify dataset harm?
► How can we ensure harmlessness under strict black-box settings?
► What proactive explanation strategies meet legal and societal expectations?
Key Insights:
► Even with inclusive training data, foundation models exhibit performance disparities across subgroups—our framework investigates dataset–model interactions to guide ethical design (a minimal illustrative sketch follows this list).
► Traditional fairness methods often require access to sensitive data. We propose Reckoner, a two-stage learning framework that ensures fairness without this requirement.
► We highlight the need for proactive algorithmic accountability, balancing transparency with IP protection and model security.
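As a rough illustration of the disparity point above (not the paper's method, and not the Reckoner framework), the sketch below shows one common way to quantify performance disparity: comparing a classifier's accuracy across demographic subgroups. The data, group labels, and model behaviour here are synthetic placeholders.

```python
# Minimal sketch (assumption, not from the paper): measuring a classifier's
# accuracy gap across demographic subgroups with synthetic data.
import numpy as np

def subgroup_accuracy_gap(y_true, y_pred, groups):
    """Return per-group accuracy and the max-min accuracy gap across groups."""
    accuracies = {}
    for g in np.unique(groups):
        mask = groups == g
        accuracies[g] = float((y_true[mask] == y_pred[mask]).mean())
    gap = max(accuracies.values()) - min(accuracies.values())
    return accuracies, gap

# Synthetic example: simulate a model that is systematically less accurate
# on group "B" than on group "A".
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
groups = rng.choice(["A", "B"], size=1000)
flip = (groups == "B") & (rng.random(1000) < 0.2)  # extra errors for group B
y_pred = np.where(flip, 1 - y_true, y_true)

per_group, gap = subgroup_accuracy_gap(y_true, y_pred, groups)
print(per_group, "accuracy gap:", round(gap, 3))
```

A gap like this is the kind of disparity that can persist even when the training data looks inclusive; approaches such as Reckoner aim to reduce it without requiring access to the sensitive attributes themselves.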
Future Directions:
► Build compliance-ready harmlessness techniques tailored for real-world AI deployment.
► Enable dual-perspective fairness guidance—supporting both upstream model developers and downstream users.
Scan the poster QR code to learn more.
How do you think we can ensure accountable and fair AI under real-world legal constraints?
