Academic Integrity and AI Content Moderation: Challenges for Historical Research

Abstract

GenAI is rapidly becoming embedded in research, teaching, and public engagement, yet its role in shaping scholarly discourse remains contested. Of particular concern are GenAI systems’ content moderation mechanisms, which, while designed to prevent harm, often intervene in ways that distort or constrain academic inquiry. This paper explores how such interventions affect historical research and the core values of academic integrity. As a historian working on sensitive topics such as genocide studies, I have encountered striking examples of GenAI ‘censorship’: terms central to scholarship flagged as inappropriate, digitisation processes prematurely halted, and historical representations altered or withheld. These incidents reveal not only the fragility of GenAI as a research tool but also the risks of allowing opaque corporate logics to filter access to knowledge. This paper adopts an interpretive, case-based approach. Drawing on documented encounters with GenAI systems, I analyse the implications of content moderation for honesty, trust, fairness, and responsibility in academic practice. Rather than presenting restrictions as isolated glitches, I interpret them as symptoms of deeper tensions: between protection and academic freedom, between global North–dominated development practices and diverse scholarly needs, and between efficiency and integrity in research. I argue that humanities perspectives are essential to navigating these tensions. By foregrounding nuance, contextual understanding, and ethical clarity, humanities scholars are uniquely positioned to contribute to GenAI governance and ensure that moderation policies safeguard — rather than undermine — scholarly inquiry.

Presenters

Lorna Waddington
Associate Professor, History, University of Leeds, United Kingdom

Details

Presentation Type

Paper Presentation in a Themed Session

Theme

Beyond Borders: The Role of the Humanities in Reimagining Communities

Keywords

GenAI, Humanities, Censorship, Bias