GenAI Uses and Challenges in Society

Copyright © 2025, Common Ground Research Networks, All Rights Reserved

Abstract

Generative Artificial Intelligence (GenAI) is transforming professional, educational, and societal domains, yet its adoption remains uneven, particularly among marginalized and underrepresented groups. This study addresses two core questions: (1) How does GenAI adoption differ across social and demographic groups, and what role does digital literacy play? (2) How can culturally responsive and explainable AI design foster trust, accessibility, and ethical use? Using a mixed-method survey of 542 participants, the study evaluated two hypotheses: first, that adoption rates are lower among marginalized groups due to limited infrastructure and digital literacy; and second, that culturally adaptive, transparent AI systems increase trust and equitable usage. Findings support both hypotheses. Participants aged 26 to 35 with higher education levels and digital fluency were more likely to use GenAI for professional purposes such as data analysis and content generation. In contrast, individuals from lower-income or less-educated backgrounds reported limited access, lower confidence, and heightened ethical concerns. Statistical analysis confirmed a strong correlation between digital literacy and trust in GenAI, while perceived accessibility significantly predicted usage. Ethical and cultural concerns—especially in healthcare, education, and public-sector contexts—emphasized the importance of transparency, explainability, and bias mitigation. This study underscores the urgent need for inclusive GenAI strategies that prioritize equitable access, digital literacy development, and culturally sensitive, explainable design. The findings offer practical insights for global policymakers, developers, and educators working toward responsible and inclusive GenAI integration.
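
The abstract reports a strong correlation between digital literacy and trust in GenAI and notes that perceived accessibility significantly predicted usage, but it does not name the specific statistical tests. The sketch below is a minimal, hypothetical illustration of how such relationships are commonly examined with Likert-style survey data, using a Pearson correlation and an ordinary least squares regression; the variable names, scoring, and generated data are assumptions for demonstration only, not the study's actual instrument or results.

```python
# Hypothetical sketch: Pearson correlation and OLS regression on Likert-style
# survey scores. Data and variable names are invented for illustration; only
# the sample size (n = 542) comes from the abstract.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 542  # sample size reported in the abstract

# Simulated 1-5 Likert scores (placeholders for real survey responses).
digital_literacy = rng.integers(1, 6, n).astype(float)
trust = np.clip(digital_literacy + rng.normal(0, 1, n), 1, 5)
accessibility = rng.integers(1, 6, n).astype(float)
usage = np.clip(0.6 * accessibility + rng.normal(0, 1, n), 1, 5)

# Correlation: digital literacy vs. trust in GenAI.
r, p_r = stats.pearsonr(digital_literacy, trust)
print(f"literacy-trust correlation: r = {r:.2f}, p = {p_r:.3g}")

# Regression: perceived accessibility predicting GenAI usage.
X = sm.add_constant(accessibility)
model = sm.OLS(usage, X).fit()
print(f"accessibility coefficient = {model.params[1]:.2f}, p = {model.pvalues[1]:.3g}")
```

In practice, such an analysis would run on the coded survey responses rather than simulated scores, and the choice of test (parametric vs. non-parametric, linear vs. logistic regression) would depend on how the adoption and trust items were measured.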