Complexity Science, Chaos Theory, Dynamic Systems Theory, complex dynamic systems, … These theories and approaches provide a useful lens for gaining insight into personal and social processes. They help with understanding and embracing change.
Before 1916, the city of Kitchener, Ontario, Canada, was called Berlin. Ten years ago, in 2016, colleagues from the Waterloo Centre for German Studies and I organized a panel discussion that marked the 100th anniversary of that name change. The discussion took place at the Kitchener Public Library and was recorded there. Carl Zehr, the former mayor of Kitchener, was our moderator. The well-known local historian rych mills, the history professor and my colleague at the time Geoff Hayes, and I were the panelists.
In 1916, 75% of the population of Berlin, Ontario, spoke German. They believed they could be loyal to both the British Crown and the German Emperor. This became very difficult in the middle of World War I. Two separate referenda determined the change of the city’s name. A hundred years later, the city of Kitchener still has a sizable German minority, and the name change has become a part of local lore, but also, at times, of passionate debate.
Video recording of the panel discussion on YouTube
I have dug up a couple of older videos on the internet. Some of them have to do with my current thinking about AI; this one and a few others are related to my research but not to AI, language, and learning. The connection, as almost always, is language …
Of course, a language teacher is more than a benevolent conversation partner. In AI, an intelligent tutoring system (ITS) would be more akin to a language teacher than a chatbot would. An ITS consists of three interacting components (see Heift & Schulze, 2007):
The expert model, which captures the domain knowledge or the information that students should learn;
The tutor model, which makes decisions about the instructional sequences and steps as well as appropriate feedback and guidance for the group as a whole and for individual students;
The student model, which records and structures information about the learning progress and instruction received, domain beliefs and acquired information, as well as the learning preferences and styles of each student.
This is part of a draft of an article I wrote with Phil Hubbard. In this paper, we are proposing a way in which teachers can organize their own professional development (PD) in the context of the rapid expansion of Generative AI. We call this PD sustained integrated PD (GenAI-SIPD): sustained because it is continuous and respectful of the other responsibilities and commitments teachers have; integrated because the PD activities are an integral part of what teachers do anyway. Throughout, the teacher retains control of the PD process.
The full article is available as open access: Hubbard, Philip and Mathias Schulze (2025). AI and the future of language teaching – Motivating sustained integrated professional development (SIPD). International Journal of Computer Assisted Language Learning and Teaching, 15(1), 1–17. DOI: 10.4018/IJCALLT.378304 https://www.igi-global.com/gateway/article/full-text-html/378304
Only if the sole learning objective is conversational ability can one assume that the LLM has elements of an expert model. The other two models, however, cannot be mimicked by a GenAI tool. Consequently, teachers still have to teach – determine instructional sequences, time appropriate feedback, and remember and work with an individual student’s strengths and weaknesses – even when using GenAI tools in various phases of the learning process. GenAI tools can provide multiple ideas for engaging learning activities, texts for reading with a ready-made glossary, or drafts of an entire unit or lesson plan. However, it is the teacher who must understand, select, adapt, and implement them. The entire teaching process and its success are still the responsibility of the teacher.
Grammar teaching in Ancient Rome (generated by ChatGPT 5.1)
In an educational institution, teachers can meet this responsibility because learners normally trust their expert knowledge: teachers have been trained, certified, and frequently evaluated. The same is not (yet) true of GenAI tools. They have been trained through machine learning, but their semantic accuracy and pragmatic appropriateness have often been found lacking. The generated text is plausible but not necessarily factually correct or complete. GenAI output is therefore an insufficient basis for successful learning. This becomes apparent not only when one tries out a GenAI tool in the area of one’s own expertise, but also when one looks back on what teachers have said for the last thirty years about the varying trustworthiness of internet texts, which also formed the basis for the machine learning behind LLMs: sources have to be checked and validated. In machine learning for LLMs, the texts and sources are neither checked nor validated, and this can impact the content accuracy of LLM output.

Of course, learners cannot be expected to check the accuracy of information they are only about to learn; believing that the information is true is a prerequisite for learning. Critically analyzing and questioning the information learnt is always a second step. Moreover, the first studies have emerged showing that GenAI can create the illusion of knowing and thus of learning (Mollick, 2024); consequently, chatbots are not always a tool for successful learning.
The main thing to remember is: these GenAI chatbots are a tool and not a tutor – more like a hammer than an artisan, more like a dictionary than an interpreter, and more like an answering machine (remember those?) than a teacher.
References
Heift, Trude and Mathias Schulze (2007). Errors and intelligence in CALL: Parsers and pedagogues. Routledge.
The 70 years of AI (see McCarthy et al., 1955) have seen an intertwining of language and computing. At first, computers, as the name says, were meant for computation: the fast calculation of a few complex equations or many simple ones. Only later were calculations done with texts as input. Famously, the first successful computations of and with letters were carried out at the Government Code and Cypher School at Bletchley Park to break the German Enigma cipher as part of the British effort in World War II. After the mathematician Alan Turing and his colleagues had successfully deciphered messages of the German Luftwaffe and navy, Turing proposed that these new machines could also be used for language (Turing (1948), quoted in Hutchins, 1986, pp. 26–27). The Turing test (Turing, 1950) stipulated that a calculating machine, a computer, could be said to show intelligence if a human interlocutor on one side of a screen could not tell whether they were having a conversation with another human or with a machine on the other side of the screen. ChatGPT passed this test in 2024 (Jones & Bergen, 2024).
Mathematical equations. Generated by ChatGPT 5 as an illustration
With the beginning of the Cold War, machine translation seemed to hold a lot of promise. Researchers’ predictions of success were based – at least in part – on the idea that translating from Russian into English is just like deciphering an encrypted message: letters have to be exchanged for other letters according to certain patterns in a deterministic mathematical process. Of course, this did not do justice to the complexities of language, communication, and translation. So, the then nascent field of natural language processing (NLP) turned to the grammatical rules of formal (mathematical) grammars and to items, the words in electronic dictionaries. The computer would “understand” a text by parsing it phrase by phrase, using grammatical rules to build an information structure similar to a syntactic tree. Such rules and the list of items with their linguistic features had to be hand-crafted. Therefore, the coverage of most NLP systems was limited. In the 1990s, researchers began to move away from symbolic NLP, which used linguistic symbols and rules and applied set theory, a form of mathematical logic, to statistical NLP. Statistical NLP meant that language patterns were captured by calculating probabilities. The probability of one word (form) following some others is calculated for each word in a large principled collection of texts, which is called a corpus. In the 1990s and 2000s, more and more corpora in more and more languages became available.
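To see what “hand-crafted rules and items” means in practice, here is a toy sketch of symbolic NLP: a tiny invented grammar and lexicon, and a naive top-down parser that builds a tree structure. The grammar, lexicon, and sentence are illustrative only, not from any real NLP system.

```python
# Hand-crafted grammar rules (e.g. a sentence S is a noun phrase NP
# followed by a verb phrase VP) and a hand-crafted lexicon of items.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"]],
    "VP": [["V", "NP"]],
}
LEXICON = {"the": "Det", "a": "Det", "dog": "N", "cat": "N", "sees": "V"}

def parse(symbol, words, i):
    """Try to expand `symbol` at position i; return (tree, next position) or None."""
    # A word from the lexicon matches directly.
    if i < len(words) and LEXICON.get(words[i]) == symbol:
        return (symbol, words[i]), i + 1
    # Otherwise, try each grammar rule for this symbol in turn.
    for rule in GRAMMAR.get(symbol, []):
        children, j = [], i
        for part in rule:
            result = parse(part, words, j)
            if result is None:
                break
            tree, j = result
            children.append(tree)
        else:
            return (symbol, children), j
    return None

tree, end = parse("S", "the dog sees a cat".split(), 0)
print(tree)  # a nested (S, [NP, VP]) structure, i.e. a syntactic tree
```

The limitation the text mentions is visible immediately: any word missing from the lexicon, or any construction not covered by a rule, makes the parse fail.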
In the 1990s, progress in capturing such probabilities was made because of the use of machine learning. Corpora could be used for machines to “learn” the probability of certain word sequences. This machine learning is based on statistics and mathematical optimization. In NLP, the probability of the next word in a text is calculated, and in training, that prediction is compared to the word that actually occurred next in the text. In case of an error, the equation used is tweaked and the calculation process starts anew. The sequences of words are called n-grams.
The resulting n-gram models were replaced in the mid-2010s with artificial neural networks, which led to the first generative pre-trained transformer (GPT) – GPT-1 – in 2018. This marks the beginning of GenAI as we know it today. GPTs are large language models (LLMs) from OpenAI. Today, an LLM is pre-trained using deep learning, a more complex subset of machine learning. When the model processes a text prompt, each artificial neuron in the network receives input from multiple neurons in the previous layer, carries out calculations, and passes the result on to neurons in the next layer. GPT-3, for example, processes text in 96 layers. The first layer converts the input words, or tokens, into vectors with 12,288 dimensions. The number in each of the 12,288 dimensions encodes syntactic, semantic, or contextual information. Through these calculations, the model arrives at a finer and finer linguistic analysis at each subsequent layer.
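The layered computation can be illustrated at a drastically reduced scale: a 4-dimensional “token vector” passed through three fully connected layers. Real models use thousands of dimensions and dozens of layers, and their weights are learned in training; the random weights here are purely for illustration.

```python
import math
import random

random.seed(0)
DIM, LAYERS = 4, 3

# One weight per connection: LAYERS layers, each with DIM neurons,
# each neuron connected to all DIM neurons of the previous layer.
weights = [[[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(DIM)]
           for _ in range(LAYERS)]

def forward(vector):
    for layer in weights:
        # Each "neuron" takes a weighted sum of all inputs from the
        # previous layer, applies a non-linearity (here tanh), and
        # passes its result on to the next layer.
        vector = [math.tanh(sum(w * x for w, x in zip(neuron, vector)))
                  for neuron in layer]
    return vector

print(forward([1.0, 0.0, 0.0, 0.0]))
```

Even this toy network performs DIM × DIM multiplications per layer for a single vector; scaling that to 12,288 dimensions, 96 layers, and many tokens gives a sense of where the enormous calculation counts mentioned below come from.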
The enormous number of calculations – an estimated 7.5 million calculations for a sentence with five words – results in plausible text output and consumes a lot of electric power. The latter is the main cause of the environmental impact of GenAI. The former is the main factor in the attractiveness of GenAI, not only in language education but also in industry and, increasingly, in society at large.
References
Hutchins, J. (1986). Machine translation: Past, present, and future. Ellis Horwood.
Turing, A. M. (1948). Intelligent machinery (Report for the National Physical Laboratory). Reprinted in D. C. Ince (Ed.), Mechanical intelligence: Collected works of A. M. Turing (pp. 107–127). North-Holland.