Language Learning and AI: 7 lessons from 70 years (conclusion)

Seven Lessons

AI and language learning have interacted for the last 70 years. In computer-assisted language learning (CALL), people have worked on applying AI – under the label ICALL – for almost 50 years. What can these long-standing efforts with good old-fashioned AI teach us about GenAI?

My inspiration for this title came from the book  
Snyder, T. (2017). On tyranny: Twenty lessons from the twentieth century. Tim Duggan Books.

I am sharing these early drafts of a book chapter I published in
Yijen Wang, Antonie Alm, & Gilbert Dizon (Eds.) (2025), 
Insights into AI and language teaching and learning.
Castledown Publishers.

https://doi.org/10.29140/9781763711600-02.

In conclusion, we will recapitulate and condense the seven lessons that we can learn from ‘good old-fashioned AI’ and ICALL with its declarative knowledge, engineered algorithms, and symbolic NLP and see how they can be applied to GenAI with its machine-learnt complex artificial neural networks.

  1. Exposure to rich, authentic language
    GenAI is capable of providing ample exposure to rich language just in time, on the right topic, and at the right level. Generated texts consist of mostly accurate language forms and are plausible, so that students can interpret them in context. This gives such texts an authentic feel. Here GenAI compares very favorably with the limited linguistic scope of ICALL systems.
  2. Communication in context
    GenAI, also because of the comprehensive coverage of the LLMs, can sustain conversations with learners on different topics. Its natural language understanding is such that it can take prior textual context into consideration, making any conversation more natural. This was impossible with ICALL systems and chatbots of the past. However, teachers and students need to be aware that they are communicating with a machine, a stochastic parrot (Bender et al., 2021). This requires informed reflection on a new form of communication and learning, to avoid the anthropomorphizing of the machine and its output.
  3. Appropriate error correction and contingent feedback
    This is the area where we can learn most from ICALL and tutorial CALL. GenAI still has many shortcomings, especially in giving metalinguistic feedback. Researchers need to explore how automatic error correction, which happens frequently, impacts aspects of language learning such as noticing.
  4. Varied interaction in language learning tasks
    This is the area where we have many new opportunities to explore, although we can take inspiration particularly from projects in ICALL and game-based language learning. GenAI is most suitable as a partner in conversation and learning.
  5. Recording learner behavior and student modeling
    Student modeling has a long tradition – not just in ICALL – in AI and education. GenAI tools by themselves are just that – tools, not tutors. They can be embedded in other learning systems, but they cannot be used as virtual tutors, because their information about learners and the learning context is serendipitous at best.
  6. Dynamic individualization
    GenAI provides teachers and students with an individual experience through generated texts of high quality. Adaptive instruction (Schulze et al., 2025, in press), however, which has been an ambition of ICALL research, has not yet been achieved. Broader research and development in AI, beyond GenAI, is still necessary to achieve dynamic individualization in what can truly be termed ICALL.
  7. Gradual release of responsibility
    Since instructional sequences, pedagogical approaches, and teaching methods are not present in GenAI, teachers need to carefully design the use of GenAI as one of the tools in the learning process. Teachers must not cede control of curricular and pedagogical decisions about activity design, learning goals, lesson contents, and learning materials to the machine.

GenAI, due to its powerful LLMs, has lifted AI in language education to a new quality. Such a disruptive technology shows great promise, provides many additional opportunities, and poses some challenges for teachers, students, and researchers alike.

Language Learning and AI: 7 lessons from 70 years (#7)

7. Gradual release of responsibility

Instructional sequences and other learning processes are structured according to pedagogical guidelines and principles and specific teaching methods. For reasons of brevity, we chose one commonly employed method – the gradual release of responsibility (Fisher & Frey, 2021). In an instructional sequence, the responsibility for the process and its outcomes is shifted from the teacher to the learner. Starting with Focused Instruction (I do it) and moving to Guided Instruction (We do it), more and more responsibility is transferred to the student in the latter two phases, Collaborative Learning (You do it together) and Independent Learning (You do it alone). It is mainly the locus of control that shifts gradually from the teacher to the learner.

With this one, all seven lessons have been prepared. All parts are based on a manuscript for a book chapter that I wrote recently.

If the sequences of learning activities and the algorithm for guidance and feedback are hardwired in the system and hardly adapt to an individual learner and their behavior – as was the case in most ICALL and in tutorial CALL in general – then the control of processes lies largely with the machine. To put it polemically, the learner’s choices are limited to using the ICALL or tutorial app or not. At first sight, this is different with GenAI. Learners can request specific texts and then request something different. Everything can be translated from one language into another, all questions get an answer – which might not be correct – and all prompts get a reply. The student decides what will be generated, how much, and when. Generation is fast, often faster than most humans can type. In this respect, the locus of control lies largely with the learner.


The GenAI controls the generation process. Because of the many hidden layers of the ANN, it is opaque how the GPT transforms the input, for example the prompt, into the output the learner can read, for example an answer to a question. The problem here is that for learners to be able to learn, they need to be able to trust the truth value and relevance of the text they received. Since the GPT with its LLM remains impenetrable, due to its enormous complexity, even for the computer scientists who ran the deep (machine) learning to train the model and thus the artificial neural network, it is almost impossible to check the generated text output within the system. Currently, because all GenAI users are new users, teachers and students can rely on previously learned information – information that was not generated by a GenAI – to compare the output they received with what they already know. However, one can already conduct a thought experiment: if we learn more and more from generated texts, then we have less and less prior ‘independent’ information against which to check the GenAI output for errors …

The more immediate conundrum is the trust all learners need to put in information they are being taught and do not know (and thus cannot check easily). Because of their institutionalized power and their prior training and accreditation, teachers normally get the trust of their students; students trust the information they are taught. Especially during the phase of Focused Instruction, if this instruction is given via GenAI-generated text, learners do not know how much trust they can place in the information they obtain from the text. Here again, it is the responsibility of the teacher to control the process and, if need be, check the taught information. This means that the gradual release of responsibility from the teacher to the student must run almost parallel to the ‘release of responsibility’ from the teacher to the machine. Whereas an ICALL ITS was a rigid and often limited ‘tutor’, GenAI must not replace the human teacher and can only be useful as a learning partner in the third phase of Collaborative Learning (You [learner and GenAI] do it together) and as a helper in the Guided Instruction phase (We [teacher, GenAI, and learner] do it) with the teacher in the lead. It appears that current GenAI has a role neither in the individual teacher phase (Focused Instruction [I do it]) nor in the individual student phase (Independent Learning [You do it alone]). Teachers should not abdicate their role in the initial teaching of new material; and students cannot have their independent learning done for them by a machine.

To be concluded …

References

Fisher, D., & Frey, N. (2021). Better learning through structured teaching: A framework for the gradual release of responsibility (3rd ed.). ASCD.

Language Learning and AI: 7 lessons from 70 years (#6)

6. Dynamic individualization

Even though a GenAI is not an ITS, as some ICALL systems were, can it consider and appropriately respond to individual learner differences (Dörnyei, 2006)? On the one hand, the limits of the corrective feedback GenAIs can give curtail the possibilities for individualized help. On the other, the probabilistic, nonlinear approach and other (hidden) traits of LLMs mean that the experience of text generation is unique to each user (Wolfram, 2023, February 14). In other words, the same prompt entered twice will normally generate (at least slightly) different texts. This is a deliberate feature of GPTs: a small amount of randomness is introduced during generation to make the text more human-like. Texts can also be generated in different voices, styles, and registers as well as for different language proficiency and readability levels. Thus, providing an individualized textual experience is a strength of GenAIs that are based on LLMs.
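The randomness behind ‘the same prompt generating different texts’ is typically implemented as temperature sampling: the model’s raw scores for the next token are turned into a probability distribution, and a token is drawn at random from it. A minimal sketch (the function and variable names are illustrative, not any particular system’s API):

```python
import math
import random

def sample_next_token(logits, temperature=0.8, rng=None):
    """Sample one token id from raw model scores (logits).

    With temperature > 0, repeated calls on the same logits can return
    different tokens; with temperature 0, sampling falls back to the
    deterministic argmax (always the same output for the same input).
    """
    rng = rng or random.Random()
    if temperature <= 0:  # greedy decoding: always pick the top-scoring token
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax with temperature: a higher temperature flattens the distribution
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token id according to the resulting probabilities
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1  # guard against floating-point rounding

# Toy scores for a four-token vocabulary
logits = [2.0, 1.5, 0.5, 0.1]
rng = random.Random(0)  # fixed seed, for reproducibility
greedy = sample_next_token(logits, temperature=0)
samples = {sample_next_token(logits, temperature=1.0, rng=rng) for _ in range(200)}
```

With temperature 0 the same token is chosen every time; with temperature 1 the 200 draws spread across several tokens – the same mechanism that makes two runs of one prompt diverge.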

With this one, six of the seven lessons have been prepared. Here is a quick list of what I posted in this context before.

historical introduction
Lesson #1: exposure to authentic language
Lesson #2: communication in context
Lesson #3: interaction in language learning with GenAI
Lesson #4: appropriate error correction and contingent feedback
Lesson #5: recording learner behavior and student modeling

As discussed above, LLMs are machine-learnt ANNs, which were trained on a very large number of texts from the internet. They did not gain ‘their experience’ with a large group of students whom they got to know over the years. This is what teachers do, and this is what student models contribute in an AI system. GPTs are not meant or designed to function as an intelligent tutoring system, because they have little to no information about the individual student, planned instructional sequences, and the curricular context of an activity or lesson. Information about an individual student is stored in a student profile. Virtual learning environments and quiz tools, for example, store scores, time on task, resources accessed, etc. This structured information can be used in the student model of an intelligent language tutoring system to ‘reason’ about the student’s learning and language beliefs, which then informs the next steps of the system: what feedback is given when, what help is offered, which resources are shown or hidden, which activity is pushed next, … GenAI’s strength lies in the generation of plausible texts, not in the administering of meaningful and effective learning sequences. GenAIs can collect further textual data to refine their LLM, but they do not (yet) collect learner information the way bespoke learning environments and apps do. Thus, the individualization of learning processes – also when employing GenAI tools – remains the remit and responsibility of the teacher and must not be delegated to the machine. GenAI can adapt the generated text to the user’s prompt, but it is not designed to deliver or implement adaptive instruction (Schulze et al., 2025, in press). In ICALL, adaptation of the machine to the learner was achieved through the student model and pre-programmed feedback algorithms, for example.
Similarly, tutorial CALL software in the early phase of CALL had the instructional sequences of its activities hard-wired and only limited capability of adapting to the learning paths and preferences of individual students – through inbuilt branching between activities, for example based on answers in previous activities or overall score thresholds.
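The structured learner information and the hard-wired, threshold-based branching described here can be sketched as follows; the class, its fields, and the threshold of 60 are illustrative assumptions, not the schema of any actual system:

```python
from dataclasses import dataclass, field

@dataclass
class StudentModel:
    """Structured learner information, as a VLE or quiz tool might store it:
    scores, time on task, keyed by activity id."""
    name: str
    scores: dict = field(default_factory=dict)        # activity id -> score (0-100)
    time_on_task: dict = field(default_factory=dict)  # activity id -> minutes

    def next_activity(self, current: str) -> str:
        """Hard-wired branching on a score threshold, in the spirit of early
        tutorial CALL: remediate below 60 points, otherwise advance."""
        if self.scores.get(current, 0) < 60:
            return current + "-remedial"
        return current + "-next"

# Record some learner behavior and let the threshold decide the next step
student = StudentModel(name="A. Learner")
student.scores["unit1"] = 45
student.time_on_task["unit1"] = 12
```

The point of the sketch is the contrast drawn in the text: this kind of explicit, inspectable learner record and decision rule is exactly what a GenAI tool does not maintain by itself.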

To be continued …

References

Dörnyei, Z. (2006). Individual differences in second language acquisition. AILA Review, 19, 42–68.

Schulze, M., Caws, C., Hamel, M.-J., & Heift, T. (2025, in press). Adaptive instruction. In G. Stockwell (Ed.), Cambridge handbook of technology in language teaching and learning (pp. t.b.d.). Cambridge University Press.

Wolfram, S. (2023, February 14). What is ChatGPT doing … and why does it work? Stephen Wolfram Writings. https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work