Generative AI and the Future of Language Classrooms

Over the past decades, those of us interested in computer-assisted language learning have repeatedly seen new technologies arrive with promises to transform language education. From early interactive grammar exercises to multimedia CD-ROMs, from learning management systems to mobile apps, each sparked both excitement and trepidation. Generative artificial intelligence (GenAI), however, is different. The sudden arrival of large language models and their chatbots into everyday life in late 2022 did not just add another teaching and learning tool; it set in motion a fundamental change in the environment in which teachers teach and learners learn.

Language classroom in 2033, as imagined by ChatGPT 5

Public discourses have reflected this sense of rupture. Media headlines made utopian promises – AI as patient tutor, instantaneous translator, or personal coach – or raised dystopian warnings of cheating, job loss, or cultural and cognitive decline. University administrators and school boards scramble to update policies, while journalists speculate about the “death of the essay” or the “end of second-language learning.” These narratives are dramatic, but they miss what matters to many teachers and students: the daily reality in which they are working with and, at times, against a very complex and powerful technology in their classrooms and in their lives. Teachers must make timely and important decisions: whether and how to allow or ban GenAI use in learning activities and especially in graded assignments; how to talk to students about plagiarism and authorship; how to redesign assessment; and how to build critical AI literacy. In staff rooms, professional development workshops, and teacher networks, conversations are often pragmatic: Which prompts work best for this language-learning activity? How do we prevent overreliance? How do we foreground human interaction, communication, and thinking?

This is a draft of my foreword for a book that has now come out:

Louise Ohashi, Mary Hillis, & Robert Dykes (Eds.) Artificial intelligence in our language learning classrooms. Candlin & Mynard ePublishing.

https://www.candlinandmynard.com/genai1.html

This book tackles these questions. It does not treat generative AI as an external force to be admired or feared from a distance. Instead, it examines how GenAI is already being used in classrooms, to the benefit of students and in collaboration with them. Its chapters speak to classroom practice, to pedagogy, and to the professional and ethical responsibilities of teachers. From my perspective as someone who has long been interested in the intersection of technology, language, and education, this is precisely what we language teachers need. What matters is whether teachers can integrate these tools without losing sight of the social, cultural, and emotional dimensions of language and of learning.

By grounding the discussion in theory, research, and classroom experience, the book provides what teachers most need: different perspectives, clear guidance, and thoughtful reflection. Within this broad focus, all book chapters foreground teacher agency. Public discourses sometimes frame educators as passive, either as victims of a disruptive technology or as gatekeepers tasked with policing it. In this book, teachers are shown as active participants: experimenting with GenAI in their classrooms, guiding learners in prompt design, encouraging reflection, embedding AI literacy into their pedagogy, … This emphasis is crucial. If GenAI is to have a useful place in language education, it must be under teacher control and be shaped by pedagogical priorities that, in turn, are rooted in both educational principles and technological awareness.

Reading across the chapters, one finds a sense of the broader ecology in which language education now takes place. Generative AI is not an add-on; it reshapes the communicative environment itself. Learners increasingly write, read, and converse in contexts where GenAI is ubiquitous. Teachers, therefore, cannot simply teach “around” AI; they must teach “with” and “about” it. That means equipping students not only to use AI tools effectively, but also to critique them and to understand both their capabilities and their limitations.

As I read this book’s contributions, I was reminded of an important lesson: technology can disrupt pedagogy and education, but it does not determine how and what we teach. It is always teachers, working with learners in real contexts, who determine whether a tool becomes a crutch, a distraction, or a catalyst for learning. This book exemplifies that spirit. It offers new ideas, outlines paths for further inquiry, and sharpens the (empirical and theoretical) lens for teacher reflection.  It shows how generative AI can be questioned, adapted, and contextualized rather than either blindly adopted or hastily rejected.

When you read this book, my hope is that you will come away not only with new ideas for classroom practice, but also with renewed confidence in the changing role of teaching and of the teacher. Generative AI may be unprecedented in its scope and power, but the fundamental task remains the same: to create environments where students learn to use language meaningfully and comfortably and develop empathy for other people and peoples, for their customs and cultures. The chapters that follow offer help with this task.

Book cover: Artificial Intelligence in Our Language Learning Classrooms by Ohashi, Hillis, and Dykes (Eds.)

Language Learning and AI: 7 lessons from 70 years (conclusion)

Seven Lessons

AI, language, and learning have been interacting for the last 70 years. In computer-assisted language learning (CALL), people have worked on applying AI – calling it ICALL – for almost 50 years. What can we learn for GenAI from this long experience of working with good old-fashioned AI?

Photo by Julia M Cameron on Pexels.com
My inspiration for this title came from the book:
Snyder, T. (2017). On tyranny: Twenty lessons from the twentieth century. Tim Duggan Books.

I am sharing these early drafts of a book chapter I published in:

Yijen Wang, Antonie Alm, & Gilbert Dizon (Eds.) (2025), Insights into AI and language teaching and learning. Castledown Publishers.

https://doi.org/10.29140/9781763711600-02

In conclusion, we will recapitulate and condense the seven lessons that we can learn from ‘good old-fashioned AI’ and ICALL, with its declarative knowledge, engineered algorithms, and symbolic NLP, and see how they can be applied to GenAI, with its machine-learnt, complex artificial neural networks.

  1. Exposure to rich, authentic language
    GenAI is capable of providing ample exposure to rich language just in time, on the right topic, and at the right level. Generated texts consist of mostly accurate language forms and are plausible, so that they lend themselves to interpretation in context by students. This gives such texts an authentic feel. Here GenAI compares very favorably to the limited linguistic scope of ICALL systems.
  2. Communication in context
    GenAI, also because of the comprehensive coverage of the LLMs, can sustain conversations with learners on different topics. Its natural language understanding is such that it can take prior textual context into consideration, making any conversation more natural. This was impossible with the ICALL systems and chatbots of the past. However, teachers and students need to be aware that they are communicating with a machine, a stochastic parrot (Bender et al., 2021). This requires informed reflection on a new form of communication and learning, to avoid anthropomorphizing the machine and its output.
  3. Appropriate error correction and contingent feedback
    This is the area where we can learn most from ICALL and tutorial CALL. Especially when giving metalinguistic feedback, GenAI still has many shortcomings. Researchers need to explore how automatic error correction, which happens frequently, impacts aspects of language learning such as noticing.
  4. Varied interaction in language learning tasks
    This is the area where we have many new opportunities to explore, although we can take inspiration particularly from projects in ICALL and game-based language learning. GenAI is most suitable as a partner in conversation and learning.
  5. Recording learner behavior and student modeling
    Student modeling has a long tradition – not just in ICALL – in AI and education. GenAI tools by themselves are just that – tools, not tutors. They can be embedded in other learning systems, but they cannot be used as virtual tutors, because their information about learners and the learning context is serendipitous at best.
  6. Dynamic individualization
    GenAI provides teachers and students with an individual experience with generated texts of high quality. The adaptive instruction (Schulze et al., 2025, in press), however, which has been an ambition of ICALL research, has not yet been achieved. Broader research and development in AI, beyond GenAI, is still necessary to achieve dynamic individualization in what can truly be termed ICALL.
  7. Gradual release of responsibility
    Since instructional sequences, pedagogical approaches, and teaching methods are not built into GenAI, teachers need to carefully design the use of GenAI as one of the tools in the learning process. Teachers must not cede control of curricular and pedagogical decisions about activity design, learning goals, lesson contents, and learning materials to the machine.

GenAI, due to its powerful LLMs, has lifted AI in language education to a new level. Such a disruptive technology shows great promise, provides many additional opportunities, and poses some challenges for teachers, students, and researchers alike.

Language Learning and AI: 7 lessons from 70 years (#7)

7. Gradual release of responsibility

Instructional sequences and other learning processes are structured according to pedagogical guidelines and principles and specific teaching methods. For reasons of brevity, we chose one commonly employed method – the gradual release of responsibility (Fisher & Frey, 2021). In an instructional sequence, the responsibility for the process and its outcomes is shifted from the teacher to the learner. Starting with Focused Instruction (I do it) and moving to Guided Instruction (We do it), more and more responsibility is transferred to the student in the latter two phases, Collaborative Learning (You do it together) and Independent Learning (You do it alone). It is mainly the locus of control that shifts gradually from the teacher to the learner.

With this one, all seven lessons have been prepared. All parts are based on a manuscript for a book chapter that I wrote recently.
Mother helping child operate a washing machine
Photo by PNW Production on Pexels.com

If the sequences of learning activities and the algorithms for guidance and feedback are hardwired into the system and hardly adapt to an individual learner and their behavior – as was the case in most ICALL and in tutorial CALL in general – then the control of processes lies largely with the machine. To put it polemically, the learner’s choices are limited to using the ICALL or tutorial app or not. At first sight, this is different with GenAI. Learners can request specific texts and then request something different. Everything can be translated from one language into another, all questions will get an answer – which might not be correct – and all prompts get a reply. The student decides what and how much will be generated, and when. Generation is fast, often faster than most humans can type. In this respect, the locus of control lies largely with the learner.

GenAI controls the generation process. The many hidden layers of the ANN mean that how the GPT transforms the input, for example the prompt, into the output the learner can read, for example an answer to a question, remains opaque. The problem here is that, for learners to be able to learn, they need to be able to trust the truth value and relevance of the text they receive. Since the GPT with its LLM remains impenetrable, due to its enormous complexity, even for the computer scientists who ran the deep (machine) learning to train the model and thus the artificial neural network, it is almost impossible to check the generated text output within the system. Currently, because all GenAI users are new users, teachers and students can rely on previously learned information – information that was not generated by a GenAI – and compare the output they receive to what they already know. However, one can already conduct a thought experiment: if we learn more and more from generated texts, then we have less and less prior ‘independent’ information that we can use to check the GenAI output for errors …

The more immediate conundrum is the trust all learners need to place in information they are being taught and do not yet know (and thus cannot check easily). Because of their institutionalized power and their prior training and accreditation, teachers normally get the trust of their students; students trust the information they are taught. Especially during the phase of Focused Instruction, if this instruction is given via GenAI-generated text, learners do not know how much trust they can place in the information they obtain from the text. Here again, it is the responsibility of the teacher to control the process and, if need be, check the taught information. This means that the gradual release of responsibility from the teacher to the student must run almost parallel to the ‘release of responsibility’ from the teacher to the machine. Whereas an ICALL ITS was a rigid and often limited ‘tutor’, GenAI must not replace the human teacher; it can only be useful as a learning partner in the third phase, Collaborative Learning (You [learner and GenAI] do it together), and as a helper in the Guided Instruction phase (We [teacher, GenAI, and learner] do it), with the teacher in the lead. It appears that current GenAI has a role neither in the individual teacher phase (Focused Instruction [I do it]) nor in the individual student phase (Independent Learning [You do it alone]). Teachers should not abdicate their role in the initial teaching of new material, and students cannot have their independent learning done for them by a machine.

To be concluded …

References

Fisher, D., & Frey, N. (2021). Better learning through structured teaching: A framework for the gradual release of responsibility (3rd ed.). ASCD.