High-Stakes Public AI and Institutional Accountability
Auditing public-sector AI systems to identify asymmetries in accountability, ensuring institutional transparency aligns with the lived experiences of those subject to algorithmic decisions.
Outcome
FAccT '26, Canadian AI '26, CHI '25, and work under submission
Overview
As governments increasingly adopt AI for essential services such as immigration, housing, education, and social welfare, this project investigates the growing gap between institutional accountability claims and the lived experiences of those subject to algorithmic decisions.
Approach
We conduct ethnographic case studies (e.g., of Toronto’s homelessness services) and analyze institutional documentation, such as public AI registers and Algorithmic Impact Assessments from the Government of Canada, to understand how accountability is officially represented. In parallel, we analyze interviews with frontline workers and peer discourse on online platforms to understand how the people who enforce or experience these algorithmic decisions collectively make sense of and respond to opaque system outcomes.
Key Contributions
Our work identifies “bureaucratic silences” in official registers, where technical descriptions overlook the sociotechnical context of high-stakes systems. We also identify a “transnational asymmetry” in public-sector AI: accountability mechanisms fail to account for how systems are experienced by individuals situated across, or arriving from, different geopolitical locations. Finally, we document how algorithms can flatten context-rich inputs into a reductionist “datafication” that complicates frontline support. Together, these findings reframe accountability as a distributed, experiential phenomenon rather than a purely procedural one.