Education and AI: Tool versus tutor

Of course, a language teacher is more than a benevolent conversation partner. In AI, an intelligent tutoring system (ITS) would be more akin to a language teacher than a chatbot would. An ITS consists of three interacting components (see Heift & Schulze, 2007); a minimal code sketch follows the list:

  1. The expert model, which captures the domain knowledge or the information that students should learn;
  2. The tutor model, which makes decisions about the instructional sequences and steps as well as appropriate feedback and guidance for the group as a whole and for individual students;
  3. The student model, which records and structures information about the learning progress and instruction received, domain beliefs and acquired information, as well as the learning preferences and styles of each student.
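
To make the three components and their interplay concrete, here is a minimal, purely illustrative sketch in Python. All class and field names are hypothetical and do not come from Heift and Schulze (2007); a real ITS would model each component in far greater depth.

```python
from dataclasses import dataclass, field

# Purely illustrative sketch: all names are hypothetical, not an actual ITS implementation.

@dataclass
class ExpertModel:
    """Domain knowledge that students should learn (component 1)."""
    target_language: str
    grammar_rules: list[str] = field(default_factory=list)
    vocabulary: dict[str, str] = field(default_factory=dict)   # word -> gloss

@dataclass
class StudentModel:
    """Progress, received instruction, beliefs, and preferences of one learner (component 3)."""
    name: str
    mastered_items: set[str] = field(default_factory=set)
    error_history: list[str] = field(default_factory=list)
    preferences: dict[str, str] = field(default_factory=dict)

@dataclass
class TutorModel:
    """Decides on instructional sequencing, feedback, and guidance (component 2)."""
    expert: ExpertModel

    def next_item(self, student: StudentModel) -> str | None:
        # A trivially simple sequencing decision: the first rule not yet mastered.
        for rule in self.expert.grammar_rules:
            if rule not in student.mastered_items:
                return rule
        return None
```

Even in this toy form, the sketch illustrates the argument below: the expert model is only a store of content, while the tutor and student models have to be designed, filled, and maintained deliberately – something a chatbot does not do on its own.
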
This is part of a draft of an article I wrote with Phil Hubbard. In this paper, we propose a way in which teachers can organize their own professional development (PD) in the context of the rapid expansion of generative AI.
We call this PD sustained integrated PD (GenAI-SIPD): sustained because it is continuous and respects the other responsibilities and commitments teachers have; integrated because the PD activities are an integral part of what teachers do anyway. Throughout, the teacher retains control of the PD process.

The full article is available as open access:
Hubbard, P., & Schulze, M. (2025). AI and the future of language teaching: Motivating sustained integrated professional development (SIPD). International Journal of Computer-Assisted Language Learning and Teaching, 15(1), 1–17. https://doi.org/10.4018/IJCALLT.378304 (full text: https://www.igi-global.com/gateway/article/full-text-html/378304)

Only if the sole learning objective is conversational ability can one assume that the LLM has elements of an expert model. The other two models, however, cannot be mimicked by a GenAI tool. Consequently, teachers still have to teach – determine instructional sequences, time appropriate feedback, and remember and work with an individual student's strengths and weaknesses – even when using GenAI tools in various phases of the learning process. GenAI tools can provide multiple ideas for engaging learning activities, texts for reading with a ready-made glossary, or drafts of an entire unit or lesson plan. However, it is the teacher who must understand, select, adapt, and implement them. The entire teaching process and its success remain the responsibility of the teacher.

Grammar teaching in Ancient Rome (image generated by ChatGPT 5.1)

In an educational institution, teachers can meet this responsibility because learners normally trust their expert knowledge: teachers have been trained, certified, and frequently evaluated. The same is not (yet) true of GenAI tools. They have been trained through machine learning, but their semantic accuracy and pragmatic appropriateness have often been found lacking. The generated text is plausible, but not necessarily factually correct or complete. This makes GenAI output an insufficient basis for successful learning. It becomes apparent not only when one tries out a GenAI tool in the area of one's own expertise, but also when one recalls what teachers have been saying for the last thirty years about the varying trustworthiness of internet texts – the very texts that formed the basis for the machine learning behind LLMs: sources have to be checked and validated. In machine learning for LLMs, the texts and sources are neither checked nor validated, which can affect the content accuracy of LLM output. Of course, learners cannot be expected to check the accuracy of information they are only about to learn; accepting the truth of the information is a prerequisite for learning, and critical analysis and questioning of the information learnt are always a second step. Moreover, first studies have emerged that show that GenAI can create the illusion of knowing and thus of learning (Mollick, 2024); consequently, chatbots are not always a tool for successful learning.

The main thing to remember is: these GenAI chatbots are a tool and not a tutor – more like a hammer than an artisan, more like a dictionary than an interpreter, and more like an answering machine (remember those?) than a teacher.

References

Heift, T., & Schulze, M. (2007). Errors and intelligence in CALL: Parsers and pedagogues. Routledge.

Mollick, E. (2024). Post-apocalyptic education: What comes after the homework apocalypse. https://www.oneusefulthing.org/p/post-apocalyptic-education

Language and AI: A mathematical equation

The 70 years of AI (see McCarthy et al., 1955) have seen an intertwining of language and computing. At first, computers, as the name says, were meant for computation – the fast calculation of a few complex equations or many simple ones. Only later were calculations done with texts as input. Famously, the first successful computations of and with letters were carried out at the Government Code and Cypher School at Bletchley Park to break the German Enigma cipher as part of the British effort in World War II. After the mathematician Alan Turing and his colleagues had successfully deciphered messages of the German Luftwaffe and navy, Turing proposed that these new machines could also be used for language (Turing, 1948, quoted in Hutchins, 1986, pp. 26–27). The Turing test (Turing, 1950) stipulated that a calculating machine, a computer, could be said to show intelligence if a human interlocutor on one side of a screen could not tell whether they were conversing with another human or with a machine on the other side of the screen. ChatGPT passed this test in 2024 (Jones & Bergen, 2024).

Mathematical equations (image generated by ChatGPT 5 as an illustration)

With the beginning of the Cold War, machine translation seemed to hold a lot of promise. Researchers' predictions of success were based – at least in part – on the idea that translating from Russian into English is just like deciphering an encrypted message: letters have to be exchanged for other letters according to certain patterns in a deterministic mathematical process. Of course, this did not do justice to the complexities of language, communication, and translation. So the then nascent field of natural language processing (NLP) turned to rules – the grammatical rules of a formal (mathematical) grammar – and items – the words in electronic dictionaries. The computer would "understand" a text by parsing it phrase by phrase, using the grammatical rules to build an information structure similar to a syntactic tree. Such rules and the lists of items with their linguistic features had to be hand-crafted; therefore, the coverage of most NLP systems was limited. In the 1990s, researchers began to move away from symbolic NLP, which used linguistic symbols and rules and applied set theory, a form of mathematical logic, and turned to statistical NLP. Statistical NLP meant that language patterns were captured by calculating probabilities. The probability of one word (form) following some others is calculated for each word in a large, principled collection of texts, which is called a corpus. In the 1990s and 2000s, more and more corpora in more and more languages became available.
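
As a hedged illustration of this idea – not taken from the article, and using a toy corpus where a real one would contain millions of words – the following Python sketch estimates how probable it is that one word form follows another:

```python
from collections import Counter

# Toy "corpus": a real corpus is a large, principled collection of texts.
corpus = "the cat sat on the mat . the cat slept on the sofa .".split()

# Count single words and bigrams (sequences of two adjacent word forms).
unigram_counts = Counter(corpus)
bigram_counts = Counter(zip(corpus, corpus[1:]))

def probability_of_next(word: str, nxt: str) -> float:
    """Estimated probability that `nxt` follows `word`:
    count(word, nxt) divided by count(word)."""
    return bigram_counts[(word, nxt)] / unigram_counts[word]

print(probability_of_next("the", "cat"))  # "the cat" occurs 2 times, "the" 4 times -> 0.5
print(probability_of_next("cat", "sat"))  # "cat sat" occurs 1 time, "cat" 2 times -> 0.5
```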

In the 1990s, progress in capturing such probabilities was made through the use of machine learning: corpora could be used for machines to "learn" the probability of certain word sequences. This machine learning is based on statistics and mathematical optimization. In NLP, the probability of the next word in a text is calculated and, in training, that result is compared to the word that actually occurred next in the text. In case of an error, the equation used gets tweaked and the calculation process starts anew. The sequences of words are called n-grams.
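
The following Python sketch is a deliberately simplified, hypothetical stand-in for this optimization loop: a single adjustable weight mixes two n-gram estimates, the predicted probabilities are compared with the words that actually occur in a held-out text, and the weight is nudged to reduce the error. Real LLM training adjusts billions of parameters in the same spirit.

```python
import math
from collections import Counter

# Toy illustration of "calculate, compare, tweak, repeat" (not a real training pipeline).
train = "the cat sat on the mat . the cat slept on the sofa .".split()
heldout = "the dog sat on the mat .".split()

unigram = Counter(train)
bigram = Counter(zip(train, train[1:]))
N, V = len(train), len(unigram)        # corpus size and vocabulary size

def p_unigram(nxt):                    # add-one smoothed unigram probability
    return (unigram[nxt] + 1) / (N + V)

def p_bigram(word, nxt):               # add-one smoothed bigram probability
    return (bigram[(word, nxt)] + 1) / (unigram[word] + V)

def loss(w):                           # how badly the mixed model predicts the held-out text
    nll = 0.0
    for word, nxt in zip(heldout, heldout[1:]):
        predicted = w * p_bigram(word, nxt) + (1 - w) * p_unigram(nxt)
        nll -= math.log(predicted)     # compare the prediction with the word that occurred
    return nll

w, lr, eps = 0.5, 0.05, 1e-4           # start value, learning rate, step for the gradient
for step in range(200):                # tweak the equation and start the calculation anew
    gradient = (loss(w + eps) - loss(w - eps)) / (2 * eps)
    w = min(1.0, max(0.0, w - lr * gradient))

print(f"tuned mixing weight: {w:.2f}, held-out loss: {loss(w):.2f}")
```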

The resulting n-gram models were replaced in the mid-2010s by artificial neural networks, resulting in the first generative pre-trained transformer (GPT) – GPT-1 – in 2018. This marks the beginning of GenAI as we know it today. GPTs are large language models (LLMs) from OpenAI. Today, an LLM is pre-trained using deep learning, a more complex subset of machine learning. In the pre-trained model, processing a text prompt means that each artificial neuron in the network receives input from multiple neurons in the previous layer, carries out calculations, and passes the result on to neurons in the next layer. GPT-4, for example, processes text in 120 layers. The first layer converts the input words, or tokens, into vectors with 12,288 dimensions. The number in each of the 12,288 dimensions encodes syntactic, semantic, or contextual information. Through these calculations, the model arrives at a finer and finer linguistic analysis at each subsequent layer.
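
What "each neuron receives input from the previous layer, calculates, and passes the result on" means can be sketched in a drastically simplified form. The Python sketch below uses made-up tiny dimensions and random weights, and it leaves out the attention mechanism and learned parameters of a real transformer; the figures of 120 layers and 12,288 dimensions mentioned above are orders of magnitude larger.

```python
import numpy as np

rng = np.random.default_rng(0)

# Drastically simplified: 4 tokens, 8-dimensional vectors, 3 fully connected layers.
n_tokens, dim, n_layers = 4, 8, 3

# Stand-in for the first step: converting tokens into vectors (an embedding lookup).
token_ids = np.array([3, 17, 5, 9])                 # hypothetical token IDs of a prompt
embedding_table = rng.normal(size=(50, dim))        # 50-word toy vocabulary
x = embedding_table[token_ids]                      # shape: (n_tokens, dim)

# Each layer: every output value combines all values from the previous layer
# (a weighted sum) and applies a nonlinearity; in a real LLM these weights are
# the parameters adjusted during pre-training.
for layer in range(n_layers):
    W = rng.normal(size=(dim, dim)) / np.sqrt(dim)
    b = np.zeros(dim)
    x = np.maximum(0.0, x @ W + b)                  # weighted sum, then ReLU

print(x.shape)   # still (4, 8): one progressively refined vector per token
```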

The enormous number of calculations – an estimated 7.5 million for a sentence of five words – results in plausible text output and consumes a lot of electric power. The power consumption is the main cause of the environmental impact of GenAI; the plausible output is the main factor in the attractiveness of GenAI, not only in language education but also in industry and, increasingly, in society at large.

References

Hutchins, J. (1986). Machine translation: Past, present, and future. Ellis Horwood.

Jones, C. R., & Bergen, B. K. (2024). Does GPT-4 pass the Turing test? arXiv. https://doi.org/10.48550/arXiv.2310.20216

McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955). A proposal for the Dartmouth Summer Research Project on Artificial Intelligence. http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html

Turing, A. M. (1948). Intelligent machinery (Report for the National Physical Laboratory). Reprinted in D. C. Ince (Ed.), Mechanical intelligence: Collected works of A. M. Turing (pp. 107–127). North-Holland.

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460. https://doi.org/10.1093/mind/LIX.236.433

Translation and AI: Separated by a common language

In the interaction with a chatbot, one can switch languages, prompt the machine to reply in a language other than that of the prompt, or request a translation of a previously generated text. It is therefore not surprising that dedicated machine translation (MT) tools, such as Google Translate and DeepL, rely on LLMs and thus on artificial neural networks in the way that GenAI chatbots do. MT was there at the beginning of AI (see above) and was rooted in symbolic approaches, using grammatical rules and lexical items. MT output had to be post-edited by human translators as a matter of course. Such post-editing tasks and the critical reading and analysis of MT output have also been used in language learning with some success (e.g., Niño, 2009). Post-editing was necessary, and language learning with MT was useful, because the linguistic accuracy of MT output was low: it typically contained more errors than most language learners would make. Today, MT output reaches high levels of linguistic accuracy and complexity, similar to the turns generated by GenAI chatbots. Based on our impression over the years, we would submit that MT output is usually more complex and accurate than the writing of many language learners.

Machine translation in the Cold War (image generated by ChatGPT 5.1)

Ohashi (2024) states, "Numerous studies have been conducted on the use of MT in language education" (p. 292), and discusses several recent reviews of this literature. Lee and Kang (2024) conclude from their study "that MT helped students deliver their meaning, reduce grammatical errors, find appropriate vocabulary, and use expressions and sentence structures beyond their current levels" (p. 12). Yet they also concede that the improved accuracy of the translated texts is not necessarily an indication of successful and sustained language learning.

Schulze (2025a) highlights some of the problems of generating written texts in the language being learned: the speed of text generation does not encourage planning, thinking, and intentional engagement, and the plausibility of the machine output makes checking and correction very difficult, if not impossible, especially for learners who use GenAI tools habitually. This applies in equal measure to GenAI-based MT. In fact, the conundrum of GenAI as a powerful tool in multilingual communication becomes clearer when one looks at MT. The speed and linguistic accuracy of translation make it feasible to have a GenAI-tool-mediated conversation at almost normal interactional speed, with each side producing and receiving text only in their first language and neither being able to check the communicative adequacy and felicity of the exchange. Language teachers will have to determine their stance vis-à-vis MT and, more importantly, consider how to sustain learners' motivation to go through the long process of learning another language in times of instant results from machine translation and chatbots.

Summing up this excursion into GenAI, we can conclude with the essence of the seven lessons in Schulze (2025b) and state that

  • this new technology facilitates exposure to rich and authentic language;
  • GenAI potentially enriches learning with additional opportunities for communicative interaction and language use, because it offers a new way of communicating;
  • and yet the necessary processes of appropriate error correction and feedback, as well as the documentation of learner behavior and dynamic individualization, cannot be performed by GenAI tools and remain the responsibility of the teacher.

References

Lee, S. M., & Kang, N. (2024). Effects of machine translation on L2 writing proficiency: The complexity, accuracy, lexical diversity, and fluency. Language Learning & Technology, 28(1), 1–19. https://doi.org/10.1016/j.langlt.2024.73585

Niño, A. (2009). Machine translation in foreign language learning: Language learners’ and tutors’ perceptions of its advantages and disadvantages. ReCALL, 21(2), 241–258. https://doi.org/10.1017/S0958344009000172

Ohashi, L. (2024). AI in language education: The impact of machine translation and ChatGPT. In P. Ilic, I. Casebourne, & R. Wegerif (Eds.), Artificial intelligence in education: The intersection of technology and pedagogy (pp. 289–311). Springer. https://doi.org/10.1007/978-3-031-71232-6_13

Schulze, M. (2025a). The impact of artificial intelligence (AI) on CALL pedagogies. In L. McCallum & D. Tafazoli (Eds.), The Palgrave encyclopedia of computer-assisted language learning. Palgrave Macmillan. https://doi.org/10.1007/978-3-031-51447-0_7-1

Schulze, M. (2025b). ICALL and AI: Seven lessons from seventy years. In Y. Wang, A. Alm, & G. Dizon (Eds.), Insights into AI and language teaching and learning (pp. 11–31). Castledown Publishers. https://doi.org/10.29140/9781763711600-02