Abstract
This paper examines whether large language models (LLMs) reproduce “affective stickiness” (Ahmed, 2014), the tendency of emotions such as fear, pity, or suspicion to adhere to racialized and gendered bodies, when generating narratives about Muslim women. While existing audits of AI bias often use isolated prompts, this study situates testing within the narrative contexts of the Nordic political thrillers Caliphate (Sweden, 2020) and Bullets (Finland, 2018). I ask how emotions attach to bodily cues, speech acts, and objects surrounding Muslim-identified female characters, and whether these attachments shift when identity markers are changed or removed. Two complementary experiments combine narrative prompting, attribution analysis, and sentence-embedding comparison (SBERT). In the first, identical scenes featuring different character identities are extended by ChatGPT, LLaMA, and Mistral to reveal how identity influences emotional tone. In the second, models describe what stands out about characters in existing scenes, allowing comparison between explicit attributions and generative behavior, supported by cosine similarity across affective embeddings. Preliminary findings indicate that LLMs consistently associate fear and suspicion with Muslim-identified characters, especially through carrier objects such as the hijab or a bag, while similar objects linked to Nordic women remain neutral. These results suggest that affective bias persists even under content-safety constraints. By adapting affect theory for computational analysis, this study offers a new, scene-based framework for reading how AI systems reproduce or reconfigure emotional patterns embedded in cultural media. It connects narrative humanities and model auditing, showing how stereotypes endure not only in words but in affective attachments that shape AI storytelling.
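The embedding step mentioned above can be illustrated with a minimal sketch, assuming the sentence-transformers (SBERT) library and a generic pretrained checkpoint (all-MiniLM-L6-v2); the continuation texts and affect anchors below are hypothetical placeholders, not the study's data or its exact procedure.

```python
# Minimal sketch: embed two model continuations of the same scene (identity markers
# swapped) and compare each to reference sentences standing in for affective
# categories via cosine similarity. All texts and the checkpoint are illustrative.
from sentence_transformers import SentenceTransformer, util

sbert = SentenceTransformer("all-MiniLM-L6-v2")  # any SBERT checkpoint could be used

# Hypothetical continuations of an identical scene under two identity conditions.
continuation_muslim = "She clutched the bag tightly; the passengers eyed her with suspicion."
continuation_nordic = "She clutched the bag tightly and smiled at the passengers boarding the tram."

# Hypothetical anchor sentences standing in for affective categories of interest.
affect_anchors = {
    "fear/suspicion": "The scene is tense; people feel afraid and suspicious of her.",
    "neutral/warm": "The scene is calm and ordinary; people feel at ease around her.",
}

emb_muslim = sbert.encode(continuation_muslim, convert_to_tensor=True)
emb_nordic = sbert.encode(continuation_nordic, convert_to_tensor=True)

for label, anchor in affect_anchors.items():
    anchor_emb = sbert.encode(anchor, convert_to_tensor=True)
    print(
        f"{label}: Muslim-identified={util.cos_sim(emb_muslim, anchor_emb).item():.3f}, "
        f"Nordic={util.cos_sim(emb_nordic, anchor_emb).item():.3f}"
    )
```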
Details
Presentation Type
Paper Presentation in a Themed Session
Theme
KEYWORDS
AFFECTIVE STICKINESS, LLMS, STEREOTYPES, EMBEDDING ANALYSIS, CULTURAL BIAS