Language Learning and AI: 7 lessons from 70 years (#7)

7. Gradual release of responsibility

Instructional sequences and other learning processes are structured according to pedagogical principles and specific teaching methods. For brevity, we chose one commonly employed method – the gradual release of responsibility (Fisher & Frey, 2021). In an instructional sequence, the responsibility for the process and its outcomes is shifted from the teacher to the learner. Starting with Focused Instruction (I do it) and moving to Guided Instruction (We do it), more and more responsibility is transferred to the student in the latter two phases, Collaborative Learning (You do it together) and Independent Learning (You do it alone). It is mainly the locus of control that shifts gradually from the teacher to the learner.
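The four phases can be sketched as a simple sequence. This is only an illustration of the framework, not anything from Fisher and Frey; the numeric "teacher share" values are my own invented placeholders to show the direction of the shift.

```python
# A minimal sketch of the gradual release of responsibility (GRR).
# The third value is a rough, illustrative fraction of responsibility
# held by the teacher in each phase (invented numbers, not from the model).
GRR_PHASES = [
    ("Focused Instruction",    "I do it",            0.9),
    ("Guided Instruction",     "We do it",           0.6),
    ("Collaborative Learning", "You do it together", 0.3),
    ("Independent Learning",   "You do it alone",    0.1),
]

for name, motto, teacher_share in GRR_PHASES:
    # The teacher's share decreases monotonically across the sequence.
    print(f"{name} ({motto}): teacher holds ~{teacher_share:.0%} of responsibility")
```

The point of the sketch is simply that the phases are ordered and the locus of control moves in one direction, from teacher to learner.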

With this one, all seven lessons have been prepared; all parts are based on a manuscript for a book chapter that I wrote recently. Here is lesson #7.

If the sequences of learning activities and the algorithm for guidance and feedback are hardwired into the system and hardly adapt to an individual learner and their behavior – as was the case in most ICALL and in tutorial CALL in general – then control of the process lies largely with the machine. To put it polemically, the learner’s choices are limited to using the ICALL or tutorial app – or not. At first sight, this is different with GenAI. Learners can request specific texts and then request something different. Everything can be translated from one language into another, all questions will get an answer – which might not be correct – and all prompts get a reply. The student decides what will be generated, how much, and when. The generation is fast, often faster than most humans can type. In this respect, the locus of control lies largely with the learner.

My inspiration for this title came from the book  
Snyder, T. (2017). On tyranny: Twenty lessons from the twentieth century. Tim Duggan Books.

I am sharing these early drafts of a book chapter I published in
Yijen Wang, Antonie Alm, & Gilbert Dizon (Eds.) (2025), 
Insights into AI and language teaching and learning.
Castledown Publishers.

https://doi.org/10.29140/9781763711600-02.

The GenAI controls the generation process. Because of the many hidden layers of the ANN, how the GPT transforms the input (for example, the prompt) into the output the learner can read (for example, an answer to a question) remains opaque. The problem here is that, for learners to be able to learn, they need to be able to trust the truth value and relevance of the text they receive. Since the GPT with its LLM remains impenetrable, due to its enormous complexity, even for the computer scientists who ran the deep (machine) learning to train the model and thus the artificial neural network, it is almost impossible to check the generated text output within the system. Currently, because all GenAI users are new users, teachers and students can rely on previously learned information – information that was not generated by a GenAI – to compare the output they received to what they already know. However, one can already conduct a thought experiment: if we learn more and more from generated texts, then we have less and less prior ‘independent’ information that we can use to check the GenAI output for errors …

The more immediate conundrum is the trust all learners need to put into information they are being taught and do not know (and thus cannot check easily). Because of their institutionalized power and their prior training and accreditation, teachers normally get the trust of their students; students trust the information they are taught. Especially during the phase of Focused Instruction, if this instruction is given via GenAI-generated text, learners do not know how much trust they can place in the information they obtain from the text. Here again, it is the responsibility of the teacher to control the process and, if need be, check the taught information. This means that the gradual release of responsibility from the teacher to the student must run almost parallel to the ‘release of responsibility’ from the teacher to the machine. Whereas an ICALL ITS was a rigid and often limited ‘tutor’, GenAI must not replace the human teacher; it can only be useful as the learning partner in the third phase of Collaborative Learning (You [learner and GenAI] do it together) and as a helper in the Guided Instruction phase (We [teacher, GenAI, and learner] do it) with the teacher in the lead. It appears that the current GenAI has a role neither in the individual teacher phase (Focused Instruction [I do it]) nor in the individual student phase (Independent Learning [You do it alone]). Teachers should not abdicate their role in the initial teaching of new material, and students cannot have their independent learning done for them by a machine.

To be concluded …

References

Fisher, D., & Frey, N. (2021). Better Learning Through Structured Teaching: A Framework for the Gradual Release of Responsibility (3rd ed.). ASCD.

Language Learning and AI: 7 lessons from 70 years (#5)

5. Recording learner behavior and student modeling

The intelligent tutoring systems in ICALL had this knowledge stored in a student model (Schulze, 2012). Student modeling (e.g., Bull, 1993, 1994, 2000; Mabbott & Bull, 2004; McCalla, 1992; Michaud & McCoy, 2000; Schulze, 2008; Self, 1974; Tsiriga & Virvou, 2003) is a challenging endeavor: student data need to be recorded and structured into a student profile; then inferences can be drawn to construct a student model over time. The model holds structured information about prior learning, learner beliefs, strategies, preferences, and language beliefs. Basically, it models the information teachers have about their students, both through student records and through the teacher’s experience. Such information helps to tailor instructional sequences, guidance and help, and corrective feedback to the individual so that they become relevant and most effective. GenAIs have LLMs, which contain enormous amounts of information about language and languages (Wolfram, 2023, February 14); their knowledge of the learner is often non-existent, or serendipitous at best. Currently, in the context of language education and especially in light of previous research in ICALL and student modeling in general, the lack of a student model means that GenAIs cannot be treated or employed as an intelligent tutoring system (ITS), because an ITS consists of a knowledge base, a student model, and a pedagogical module (Wikipedia contributors, 2024, December 20) to imitate the behavior of a human tutor and provide individualized tutoring.
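The contrast drawn above can be made concrete with a small sketch. The field names below follow the components named in the text (student profile contents; the three classic ITS modules); everything else – class names, types, the example values – is my own illustrative scaffolding, not an actual ITS implementation.

```python
from dataclasses import dataclass, field

@dataclass
class StudentModel:
    """Structured information inferred over time from a learner's
    recorded behavior (the student profile)."""
    prior_learning: list[str] = field(default_factory=list)
    learner_beliefs: list[str] = field(default_factory=list)
    strategies: list[str] = field(default_factory=list)
    preferences: dict[str, str] = field(default_factory=dict)
    language_beliefs: list[str] = field(default_factory=list)

@dataclass
class IntelligentTutoringSystem:
    """The three components an ITS consists of, per the text."""
    knowledge_base: dict[str, str]       # what is to be taught
    student_model: StudentModel          # what is known about this learner
    pedagogical_module: list[str]        # e.g., rules for sequencing and feedback

# A GenAI chatbot, by contrast, effectively has only the knowledge base
# (its LLM); it lacks a persistent student model and a pedagogical module,
# which is why it cannot be employed as an ITS.
```

The sketch is only meant to show why the missing student model matters: without it, there is nothing for a pedagogical module to individualize against.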

Thus far, I have given a historical introduction and talked about the necessary exposure to authentic language, communication in context, interaction in language learning with GenAI, and appropriate error correction and contingent feedback. The following describes the basis for lesson #5.

To be continued …

References

Bull, S. (1993). Towards User/System Collaboration in Developing a Student Model for Intelligent Computer-Assisted Language Learning. Computer Assisted Language Learning, 8, 3-8.

Bull, S. (1994). Student modeling for second language acquisition. Computers and Education, 23(1-2), 13-20.

Bull, S. (2000). ‘Do It Yourself’ Student Models for Collaborative Student Modelling and Peer Interaction. In B. P. Goettl, H. M. Halff, C. Redfield Luckhardt, & V. J. Shute (Eds.), Intelligent Tutoring Systems. 4th International Conference, ITS ’98, San Antonio, Texas, USA, August 16-19, 1998 Proceedings (pp. 176-185). Springer Verlag.

Mabbott, A., & Bull, S. (2004). Alternative Views on Knowledge: Presentation of Open Learner Models. In J. C. Lester, R. M. Vicari, & F. Paraguacu (Eds.), Intelligent Tutoring Systems: 7th International Conference (pp. 689-698). Springer-Verlag.

McCalla, G. I. (1992). The Centrality of Student Modelling to Intelligent Tutoring Systems. In E. Costa (Ed.), New Directions for Intelligent Tutoring Systems (pp. 107-131). Springer Verlag.

Michaud, L. N., & McCoy, K. F. (2000). Supporting Intelligent Tutoring in CALL by Modeling the User’s Grammar. In Proceedings of the Thirteenth Annual International Florida Artificial Intelligence Research Symposium, May 22-24, 2000, Orlando, Florida (pp. 50-54). AAAI Press.

Schulze, M. (2008). Modeling SLA Processes Using NLP. In C. Chapelle, Y.-R. Chung, & J. Xu (Eds.), Towards Adaptive CALL: Natural Language Processing for Diagnostic Assessment. (pp. 149-166). Iowa State University. https://apling.engl.iastate.edu/wp-content/uploads/sites/221/2015/05/5thTSLL2007_proceedings.pdf

Schulze, M. (2012). Learner modeling. In C. A. Chapelle (Ed.), The Encyclopaedia of Applied Linguistics. 10 volumes (pp. online n.p.). Wiley-Blackwell.

Self, J. A. (1974). Student Models in Computer-Aided Instruction. International Journal of Man-Machine Studies, 6, 261-276.

Tsiriga, V., & Virvou, M. (2003). Modelling the Student to Individualise Tutoring in a Web-Based ICALL. International Journal of Continuing Engineering Education and Life-Long Learning, 13(3-4), 350-365.

Wikipedia contributors. (2024, December 20). Intelligent tutoring system. In Wikipedia, The Free Encyclopedia.

Wolfram, S. (2023, February 14). What is ChatGPT doing … and why does it work? Stephen Wolfram Writings. https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work

Rupture


Rupture!

Writing gives me a chance to think. It does not happen very often. I need to make it happen. Often. As often as I can. The thinking. The writing helps. Helps me remember. Helps me to slow down. Slow down my thinking. When it’s slow, it gets deeper. Alright. That’s a cliché. No, it’s not. Who said that. I got interrupted. Now my stream of thoughts has been disrupted. How did that happen? 

Let me go back to the chance to think. About disruption. Disruptive. Is this good or bad? The question is too simple, too linear. Just one alternative. And there are so many. Alternatives. Alternatives after a disruption. It’s complex. Linear is just one of a zillion alternatives. Is zillion a number? Apparently not. I learnt that at trivia night three weeks ago. Interrupted again. I was thinking about disruption. In recent posts, I was talking about AI. Generative AI. Technology. And now disruption. Disruptive technologies.

No, I am not getting all businessy in this blog. Business folk like to talk about disruptive technologies. And so do I. It happens all the time. The disruption. Film disrupted theater. TV disrupted movie theaters. Video cassettes disrupted movie theaters too. VHS. VHS disrupted Betamax. Betamax was of better quality. It was discontinued. VHS prevailed. Until … Until the DVD disruption came. Netflix used to send out DVDs by mail. Streaming services disrupted the DVD. During the COVID lockdown, new films were streamed. The theaters were closed.

Is this all good or bad? You decide. All of you. Each time you decide. Again. And again. Each of you. Separately. And together. It’s complex. It can’t be linear. There are a zillion alternatives. And sometimes only one seems to prevail. For a short time. A disruption. And another one. In different areas. Not just film and theater and video. A disruption. Disruptive technologies. And we are taken by surprise. At times.

Disruptive technologies. And since 2022 we have been talking about AI. A disruption? AI disruption. Sure. What will it bring? What will we gain? What will we lose? In learning and for teachers, we read about new tools. The lesson plan that writes itself? The text the kids will read that was generated on the teacher’s computer. The feedback the machine gave, the errors corrected. With new errors?

Rupture.