2. Communication in context
Oxford (1993) argued that “communicative competence must be the cornerstone of ICALL” (p. 174), noting that many ICALL projects of her time did not meet that goal, even though communication and, by extension, communicative language teaching had been central ideas in applied linguistics for decades. Canale and Swain (1980) transferred Dell Hymes’ concept of communicative competence – developed in opposition to Chomskyan linguistic competence – from sociolinguistics to language learning. Hymes (1974) had introduced communicative competence with the mnemonic SPEAKING = Setting and Scene (time and place), Participants (speaker and audience), Ends (purpose and outcome), Act Sequence (progression of speech acts), Key (tone, manner), Instrumentalities (language modalities), Norms (social rules), and Genre (kind of speech act or text) (pp. 53–62). These facets and components of communication go well beyond the idea of a generative grammar (Chomsky, 1957) and that of linguistic competence. In ICALL, as in NLP generally, however, generative grammar and other formal grammars, which are sets of rules that rewrite strings using mathematical operations, are the backbone of the system. The partial disconnect between formal grammars, such as Head-driven Phrase Structure Grammar and Categorial Grammar, and communicative competence, with its focus on meaning, situation, and context, as clearly illustrated by Hymes’ mnemonic, meant that ICALL systems hardly played a role in communicative language teaching.
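To make the contrast concrete: a formal grammar of the kind that has underpinned ICALL and NLP systems is, at its core, nothing more than a set of rewrite rules operating on strings. The following sketch is my own toy illustration in Python, not taken from any actual ICALL system; the handful of rules and words are invented for the example.

    import random

    # Toy rewrite rules in the spirit of a context-free grammar. The rules and
    # vocabulary are invented for this illustration, not taken from any system.
    RULES = {
        "S": [["NP", "VP"]],
        "NP": [["Det", "N"]],
        "VP": [["V", "NP"]],
        "Det": [["the"], ["a"]],
        "N": [["student"], ["teacher"], ["sentence"]],
        "V": [["writes"], ["reads"]],
    }

    def expand(symbol):
        """Rewrite a symbol until only terminal words remain."""
        if symbol not in RULES:  # terminal symbol: an actual word
            return [symbol]
        words = []
        for part in random.choice(RULES[symbol]):
            words.extend(expand(part))
        return words

    print(" ".join(expand("S")))  # e.g. "the student reads a sentence"

Such rules will just as happily produce “the sentence reads a teacher”: grammatically impeccable, communicatively empty, because nothing in them encodes setting, participants, purpose, or any other component of Hymes’ mnemonic.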

This is the third part of a short series. All parts are based on a manuscript that I wrote recently. Part 0 gives a historical introduction. Lesson 1 focuses on the necessary exposure to authentic language and whether this can be done with GenAI. And I mean exposure and not so-called comprehensible input.
GenAI chatbots, however, have been described as suitable conversation partners (Baidoo-anu & Owusu Ansah, 2023) and learning buddies (https://www.khanmigo.ai/). The GenAI output in a number of languages is certainly well-formed and plausible; GenAI’s natural language understanding is fast and precise. But is a conversation with a chatbot the same as a human conversation? Is it a negotiation of meaning as understood in communicative language teaching? The NLP researcher Emily Bender and her colleagues compared GenAI chatbots to stochastic parrots, which skillfully aim but proceed by guesswork (see Merriam-Webster, n.d.), and argued that “coherence is in fact in the eye of the beholder. Our human understanding of coherence derives from our ability to recognize interlocutors’ beliefs … and intentions … within context … That is, human language use takes place between individuals who share common ground and are mutually aware of that sharing (and its extent), who have communicative intents which they use language to convey, and who model each others’ mental states as they communicate” (Bender et al., 2021, p. 616). In other words, the chatbot spits out forms that are plausible but do not mean anything; the (student or teacher) reader imbues these hollow forms with meaning and thus anthropomorphizes the GenAI tool: they reason about its ‘intention’ and base their response on that reasoning, as humans do in conversation. Computers, however, do not have or formulate intentions. Something is clicked, data are input, a condition is met; this triggers a digital operation, and forms that are numbers to the computer but look like words to the human user of the device become visible or audible. We add meaning after the form of the text has been generated.
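To make this concrete, here is a minimal sketch (my own illustration in Python, with an invented table of next-word probabilities standing in for a real language model) of what ‘plausible form without intention’ looks like: the program only looks up numbers attached to character strings and samples from them.

    import random

    # Invented next-word probabilities; real GenAI systems estimate such numbers
    # with large neural networks trained on vast text collections.
    NEXT_WORD_PROBS = {
        "<start>": {"the": 0.6, "a": 0.4},
        "the": {"student": 0.5, "teacher": 0.3, "conversation": 0.2},
        "a": {"chatbot": 0.6, "student": 0.4},
        "student": {"writes": 0.7, "reads": 0.3},
        "teacher": {"writes": 0.5, "reads": 0.5},
        "chatbot": {"writes": 0.6, "reads": 0.4},
        "conversation": {"continues": 1.0},
        "writes": {"<end>": 1.0},
        "reads": {"<end>": 1.0},
        "continues": {"<end>": 1.0},
    }

    def generate():
        """Sample one plausible next word after another until the end marker."""
        word, output = "<start>", []
        while True:
            choices = NEXT_WORD_PROBS[word]
            word = random.choices(list(choices), weights=list(choices.values()))[0]
            if word == "<end>":
                return " ".join(output)
            output.append(word)

    print(generate())  # e.g. "the student writes" – plausible form, no intention behind it

At the machine’s end this is arithmetic over numbers that happen to be attached to strings; whatever meaning the output carries is supplied by the human reader.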
My inspiration for this title came from the book
Snyder, T. (2017). On tyranny: Twenty lessons from the twentieth century. Tim Duggan Books.
I am sharing these early drafts of a book chapter I published in Yijen Wang, Antonie Alm, & Gilbert Dizon (Eds.) (2025), Insights into AI and language teaching and learning, Castledown Publishers. https://doi.org/10.29140/9781763711600-02.
References
Baidoo-anu, D., & Owusu Ansah, L. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Journal of AI, 7(1), 52–62.
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623). https://doi.org/10.1145/3442188.3445922
Canale, M., & Swain, M. (1980). Theoretical bases of communicative approaches to second language teaching and testing. Applied Linguistics, 1(1), 1–47.
Chomsky, N. (1957). Syntactic structures. Mouton.
Hymes, D. H. (1974). Foundations in sociolinguistics: An ethnographic approach. University of Pennsylvania Press.
Merriam-Webster. (n.d.). Stochastic. In Merriam-Webster’s unabridged dictionary. Retrieved December 30 from https://unabridged.merriam-webster.com/unabridged/stochastic
Oxford, R. L. (1993). Intelligent computers for learning languages: The view for language acquisition and instructional methodology. Computer Assisted Language Learning, 6(2), 173–188.

