Abstract
This paper proposes a reconceptualization of human-machine collaboration (HMC) in governance, framing it as a sociotechnical infrastructure for producing, mediating, and legitimizing knowledge and authority. Our purpose is to critically examine how AI-driven design decisions in governance settings embed normative assumptions about decision-making agency, institutional power, and epistemic legitimacy. Employing an interdisciplinary methodology that integrates cognitive science with insights from Science and Technology Studies (STS), we analyze key design elements (e.g., automation levels, interface architectures, and bias mitigation protocols) through the lens of knowledge politics. We illustrate our argument through case studies of real-world public-sector applications of AI, including decision-support systems and human-AI teaming environments, examining how these systems shape not only outcomes but also institutional authority and knowledge ownership. Building on these analyses, we introduce a novel “governance-aware,” human-centered design framework emphasizing reflexivity, transparency, and democratic contestability. Our approach extends existing work in explainable AI (XAI) and AI ethics by explicitly connecting cognitive biases, the politics of knowledge, and system architectures.
Presenters
Dirk Van Rooy
Professor, Antwerp Centre for Responsible AI (ACRAI), University of Antwerp, Belgium
Details
Presentation Type
Paper Presentation in a Themed Session
Theme
2026 Special Focus—Human-Centered AI Transformations
KEYWORDS
HUMAN-MACHINE COLLABORATION, HUMAN-CENTERED AI, KNOWLEDGE POLITICS, GOVERNANCE, DEMOCRATIC ACCOUNTABILITY, EXPLAINABLE AI