New Learning’s Updates
Generative Transpositions: The (Anti-)Grammar of Text-Semantic, CyberSocial Intelligence
Bill Cope and Mary Kalantzis: struggling to understand not only how these things work, but why they work at all...
This paper extends our earlier work on the multimodal grammar of artificial intelligence by responding to critical engagements from James Gee, Angel Lin, and John Bateman. We situate generative AI within a longer history of non-trivial machines and cybernetics, emphasizing its distinctively text-semantic character. Unlike logical-symbolic or data-driven approaches, large language models mechanize meaning through abductive next-token prediction, producing what we term an “anti-grammar”: vast empirical mappings of token relations without abstraction, yet yielding emergent reasoning capacities. We analyze how AI processes latent meanings, transposes across modalities, and generates “cybersocial intelligence”—a symbiotic human–machine feedback system rather than an “artificial” replication of mind. While acknowledging the risks of concentrated ownership, epistemic inequalities, and propagandistic knowledge regimes, we also highlight transformative possibilities for learning, creativity, and equity. Our theoretical propositions are complemented by insights from building the CyberScholar platform, exploring ethical, dialogic, and multimodal engagements with generative AI for education and social futures.
- Preprint: Cope, Bill and Mary Kalantzis, “Generative Transpositions: The (Anti-)Grammar of Text-Semantic, CyberSocial Intelligence,” Multimodality and Society, forthcoming, 2025.
- Full text of preprint here: