Abstract
Artificial intelligence systems are now embedded in how individuals access public health information, including guidance on respiratory infection control. This study investigates how the large language models (LLMs) ChatGPT, Google Gemini, and Microsoft Copilot respond to prompts concerning UV-C disinfection in indoor environments. The analysis focuses on how these systems construct narratives about disinfection, frame environmental risk, and present evidence within the context of respiratory disease prevention. A structured set of prompts was used to collect responses related to airborne transmission and the use of UV-C light in schools, healthcare settings, and public buildings. Responses were evaluated using a standardized codebook across three dimensions: scientific accuracy, expression of uncertainty or limitations, and relevance to environmental health literacy (EHL). Variation was observed across platforms in source attribution, technical clarity, and inclusion of contextual safety information. Some responses aligned with evidence-based public health guidance, while others omitted critical risk details or presented generalized claims without citation. Differences in risk framing and information completeness suggest that LLMs not only mediate access to knowledge but also shape public perception of health technologies. This study contributes to environmental health science and digital media theory by providing an empirical framework for evaluating AI-generated health communication. It highlights how algorithmic systems participate in the construction of authority and meaning in digitally mediated risk discourse. The findings offer insight into the epistemic role of AI in public health and its implications for equitable, evidence-based communication in digital culture.
Details
Presentation Type
Paper Presentation in a Themed Session
Theme
KEYWORDS
AI-GENERATED HEALTH INFORMATION, ENVIRONMENTAL HEALTH LITERACY, DIGITAL KNOWLEDGE AUTHORITY