Navigating change — the Panta Rhei enterprise

This post is a replica of the original home page. (The current home page of the site is simply set to the list of recent posts in reverse chronological order.) I just thought you might be interested in what is behind this blog. And if you are a regular reader: it cannot always be about AI 😉

Welcome

Thank you for coming by. Is it the blog that got you interested? Were you googling Panta Rhei? Are you thinking a lot about complexity and change? Have a look ’round and feel free to get in touch.

Chris and Mat started this site and blog in late 2019. We both like writing, and talking, about the complexities and simplicities of life, about working in groups and leadership, and about learning and teaching both in the chaotic and the virtual worlds. We both have years of experience in language and communication, education and training, management and leadership. We wanted to share our ideas, our expertise, and our insight with a wider audience. Worth a try, we thought. A lot has happened since then … Both of us are working full time. Chris started dedicating his time to narrating audiobooks, and Mat has put more emphasis on his writing per se: learning new things, joining writing groups and courses, …

So, what is happening here?

Years of learning, reading, listening, experiencing, doing, reflecting, leading, following, smiling, crying, talking, writing, … Then it was time to share, time for this site, time for this blog:

On what they call … Artificial Intelligence

For his PhD, Mat wrote – what he likes to call – a research prototype of a grammar checker for learners of German. He then went on to write several articles and a book about the nexus of language learning and AI. Still, when GenAI fell on all of us, he was for a while surprised by the rapid change and its immense power. Then he began to learn and … write. Most recent blog posts are on this topic in one way or another.

On the complexity of change

Most of nature is complex. Most of society is complex. Human behavior is complex. And, as the cliché has it, the only constant is change. And this change is not linear. Sometimes it seems we soar ahead, sometimes it feels like we walk ’round in circles, and sometimes we are taken for a ride. On a rollercoaster. It is this complexity of change that Mat has been exploring and reflecting on, one blog post at a time. Thinking about it, reading about it, writing about it, … He has been learning about Chaos Theory, Complexity Science, and Dynamic Systems Theory for more than 15 years. And if you count dialectics — we are going on forty …

RoLL: Research on language and learning

Mat pays for his daily bread, his shelter, and what he considers to be luxuries with language, learning, and teaching. So, when he writes about language and learning, it is often also about complexity and change, about technologies and artificial intelligence.

The posts Chris wrote on the BASE model are also still available.

Get in Touch

Look around a bit more. Or why not join the growing group of people who follow the Panta Rhei Blog? [In case you are wondering, the relevant button is in the top-right corner of each page or underneath the text and comment box, if you are reading this on your phone.] If you are unsure about the idea of following, really all it means is that, when something gets posted, you will get an alert, if you are on WordPress, or an email with a link and summary, if you are not.

If you have any comments, suggestions, or questions, comment right on the page or post or send a quick email to mschulze7980@gmail.com. I live and work in Southern California. If you happen to be in the area and would like to meet, again an email is good.

Find the contact details and social media handles on a separate page.

Panta Rhei – everything flows and changes, and so does this site. Come back again to see what changed.

Wishing you a wonderful day.

Connected forest lake in Algonquin Park
Algonquin Park, Ontario, Canada

Education and AI: Tool versus tutor

Of course, a language teacher is more than a benevolent conversation partner. In AI, an intelligent tutoring system (ITS) would be more akin to a language teacher than a chatbot would. An ITS consists of three interacting components (see Heift & Schulze, 2007):

  1. The expert model, which captures the domain knowledge or the information that students should learn;
  2. The tutor model, which makes decisions about the instructional sequences and steps as well as appropriate feedback and guidance for the group as a whole and for individual students;
  3. The student model, which records and structures information about the learning progress and instruction received, domain beliefs and acquired information, as well as the learning preferences and styles of each student.
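
The three interacting components can be made a little more concrete with a sketch. Purely illustrative: the class and attribute names below (and the single German vocabulary item) are invented for this post, not taken from Heift & Schulze (2007); a real ITS is, of course, far richer.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ExpertModel:
    """Domain knowledge: the information students should learn."""
    facts: dict = field(default_factory=dict)   # item -> correct answer

    def check(self, item, answer):
        return self.facts.get(item) == answer

@dataclass
class StudentModel:
    """Records information about each student's learning progress."""
    mastered: set = field(default_factory=set)

    def update(self, item, correct):
        if correct:
            self.mastered.add(item)

@dataclass
class TutorModel:
    """Decides the instructional sequence: here, simply the next unmastered item."""
    expert: ExpertModel

    def next_item(self, student) -> Optional[str]:
        for item in self.expert.facts:
            if item not in student.mastered:
                return item
        return None

# One tiny interaction among the three models:
expert = ExpertModel(facts={"plural of 'Haus'": "Häuser"})
student = StudentModel()
tutor = TutorModel(expert)
item = tutor.next_item(student)                     # tutor consults expert and student models
student.update(item, expert.check(item, "Häuser"))  # student model records the outcome
```

Even in this toy version, the division of labor is visible: the expert model knows, the tutor model decides, and the student model remembers.
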

This is part of a draft of an article I wrote with Phil Hubbard. In this paper, we propose a way in which teachers can organize their own professional development (PD) in the context of the rapid expansion of Generative AI.
We call this PD sustained integrated PD (GenAI-SIPD): sustained because it is continuous and respectful of the other responsibilities and commitments teachers have; integrated because the PD activities are an integral part of what teachers do anyway, and because the teacher retains control of the PD process.

The full article is available as open access:
Hubbard, Philip and Mathias Schulze (2025). AI and the future of language teaching – Motivating sustained integrated professional development (SIPD). International Journal of Computer Assisted Language Learning and Teaching, 15(1), 1–17. DOI:10.4018/IJCALLT.378304 https://www.igi-global.com/gateway/article/full-text-html/378304

Only if the sole learning objective is conversational ability can one assume that the LLM has elements of an expert model. The other two models, however, cannot be mimicked by a GenAI tool. Consequently, teachers still have to teach – determine instructional sequences, time appropriate feedback, remember and work with an individual student’s strengths and weaknesses – even when using GenAI tools in various phases of the learning process. GenAI tools can provide multiple ideas for engaging learning activities, texts for reading with a ready-made glossary, or drafts of an entire unit or lesson plan. However, it is the teacher who must understand, select, adapt, and implement them. The entire teaching process and its success are still the responsibility of the teacher.

Grammar teaching in antiquity
Grammar teaching in Ancient Rome (generated by ChatGPT 5.1)

In an educational institution, teachers can meet this responsibility because learners normally trust their expert knowledge: teachers have been trained, certified, and frequently evaluated. The same is not (yet) true of GenAI tools. They have been trained through machine learning, but their semantic accuracy and pragmatic appropriateness have often been found lacking. The generated text is plausible, but not necessarily factually correct or complete. This makes GenAI output an insufficient basis for successful learning. It becomes apparent not only when one tries out a GenAI tool in the area of one’s own expertise, but also when one recalls what teachers have been saying for the last thirty years about the varying trustworthiness of internet texts – the very texts that also formed the basis for the machine learning behind LLMs: sources have to be checked and validated. In machine learning for LLMs, the texts and sources are neither checked nor validated, which can impact the content accuracy of LLM output. Of course, learners cannot be expected to check the accuracy of information they are only about to learn; believing the truth value of the information is a prerequisite for learning. Critical analysis and questioning of the information learnt is always a second step. Moreover, first studies have emerged showing that GenAI can create the illusion of knowing and thus of learning (Mollick, 2024); consequently, chatbots are not always a tool for successful learning.

The main thing to remember is: these GenAI chatbots are a tool and not a tutor – more like a hammer than an artisan, more like a dictionary than an interpreter, and more like an answering machine (remember those?) than a teacher.

References

Heift, Trude and Mathias Schulze (2007). Errors and intelligence in CALL: Parsers and pedagogues. Routledge.

Mollick, E. (2024). Post-apocalyptic education: What comes after the homework apocalypse. https://www.oneusefulthing.org/p/post-apocalyptic-education

Language and AI: A mathematical equation

The 70 years of AI (see McCarthy et al., 1955) have seen an intertwining of language and computing. At first, computers, as the name says, were meant for computation, for the fast calculation of a few complex equations or many simple ones. Only later were calculations done with texts as input. Famously, the first successful computations of and with letters were done at the Government Code and Cypher School at Bletchley Park to break the German Enigma cipher as part of the British effort in World War II. After the mathematician Alan Turing and his colleagues had successfully deciphered messages of the German Luftwaffe and navy, he proposed that these new machines could also be used for language (Turing (1948), quoted in Hutchins, 1986, pp. 26–27). The Turing test (Turing, 1950) stipulated that a calculating machine, a computer, could show intelligence if a human interlocutor on one side of a screen could not tell whether they were conversing with another human or with a machine on the other side. ChatGPT passed this test in 2024 (Jones & Bergen, 2024).

Mathematical equations. Generated by ChatGPT 5 as an illustration

With the beginning of the Cold War, machine translation seemed to hold a lot of promise. Researchers’ predictions of success were based – at least in part – on the idea that translating from Russian into English is just like deciphering an encrypted message: letters have to be exchanged for other letters according to certain patterns in a deterministic mathematical process. Of course, this did not do justice to the complexities of language, communication, and translation. So the then nascent field of natural language processing (NLP) turned to the rules of formal (mathematical) grammars and to items, the words in electronic dictionaries. The computer would “understand” a text by parsing it phrase by phrase, using the grammatical rules to build an information structure similar to a syntactic tree. Such rules and the lists of items with their linguistic features had to be hand-crafted; therefore, the coverage of most NLP systems was limited. In the 1990s, researchers began to move away from symbolic NLP, which used linguistic symbols and rules and applied set theory, a form of mathematical logic, to statistical NLP. Statistical NLP meant that language patterns were captured by calculating probabilities: the probability of one word (form) following some others is calculated for each word in a large principled collection of texts, which is called a corpus. In the 1990s and 2000s, more and more corpora in more and more languages became available.
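
The hand-crafted character of symbolic NLP can be made concrete with a toy example. The grammar and lexicon below are invented for this post; real systems had thousands of rules, but the principle is the same – every rule and every word has to be listed by hand, which is exactly why coverage stayed limited.

```python
# A deliberately tiny hand-crafted phrase-structure grammar and lexicon.
GRAMMAR = {
    "S":  [["NP", "VP"]],          # a sentence is a noun phrase plus a verb phrase
    "NP": [["Det", "N"]],          # a noun phrase is a determiner plus a noun
    "VP": [["V", "NP"], ["V"]],    # a verb phrase is a verb, optionally with an object
}
LEXICON = {
    "the": "Det", "a": "Det",
    "dog": "N", "ball": "N",
    "sees": "V", "sleeps": "V",
}

def parse(symbol, tokens, i):
    """Top-down recognizer: try to derive `symbol` from tokens[i:].
    Returns the index after the matched span, or None on failure."""
    if symbol in GRAMMAR:
        for rhs in GRAMMAR[symbol]:
            j = i
            for part in rhs:
                j = parse(part, tokens, j)
                if j is None:
                    break
            else:
                return j
        return None
    # terminal category: match one word via the lexicon
    if i < len(tokens) and LEXICON.get(tokens[i]) == symbol:
        return i + 1
    return None

def accepts(sentence):
    tokens = sentence.split()
    return parse("S", tokens, 0) == len(tokens)
```

`accepts("the dog sees a ball")` succeeds, but any word or construction not listed above is simply outside the system’s world – add a new verb and you must edit the lexicon by hand.
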

In the 1990s, progress in capturing such probabilities was made through machine learning: corpora could be used for machines to “learn” the probability of certain word sequences, which are called n-grams. This machine learning is based on statistics and mathematical optimization. In NLP, the probability of the next word in a text is calculated, and during training that result is compared to the word that actually occurred next in the text. In case of an error, the equation gets tweaked and the calculation starts anew.
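
The core calculation can be sketched in a few lines. The “corpus” here is a toy stand-in of my own making; real n-gram models were estimated from millions of words, but the counting principle is identical.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for a large principled text collection.
corpus = "the dog sees the ball . the dog sleeps .".split()

# Count bigrams: how often does word w2 follow word w1?
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def p_next(w1, w2):
    """Estimate P(w2 | w1) = count(w1 w2) / count(w1 followed by anything)."""
    total = sum(bigrams[w1].values())
    return bigrams[w1][w2] / total if total else 0.0
```

In this tiny corpus, “dog” follows “the” in two of the three occurrences of “the”, so `p_next("the", "dog")` comes out at 2/3 – exactly the kind of next-word probability the post describes.
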

The resulting n-gram models were replaced in the mid-2010s with artificial neural networks, resulting in the first generative pre-trained transformer (GPT) – GPT-1 – in 2018. This marks the beginning of GenAI as we know it today. GPTs are large language models (LLMs) from OpenAI. Today, an LLM is pre-trained using deep learning, a more complex subset of machine learning. Pre-training means that, when processing the text prompt, each artificial neuron in the network of the LLM receives input from multiple neurons in the previous layer, carries out calculations, and passes the result to neurons in the next layer. GPT-3, for example, whose architecture has been published, processes text in 96 layers. The first layer converts the input words, or tokens, into vectors with 12,288 dimensions. The number in each of the 12,288 dimensions encodes syntactic, semantic, or contextual information. Through these calculations, the model arrives at a finer and finer linguistic analysis at each subsequent layer.
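
A very rough sketch of that neuron-by-neuron calculation, with tiny made-up dimensions and weights instead of 12,288: one fully connected layer takes a vector in, computes a weighted sum plus a nonlinearity per neuron, and hands a new vector to the next layer. (Real transformer layers also include attention and other machinery; this shows only the basic feed-forward arithmetic.)

```python
import math

def layer(inputs, weights, biases):
    """One layer: out[j] = tanh(sum_i inputs[i] * weights[j][i] + biases[j]).
    Every output neuron sees ALL inputs from the previous layer."""
    return [
        math.tanh(sum(x * w for x, w in zip(inputs, row)) + b)
        for row, b in zip(weights, biases)
    ]

# A 3-dimensional "token vector" passing through a layer with 2 neurons.
x = [1.0, 0.5, -1.0]
W = [[0.2, -0.4, 0.1],   # weights into neuron 0 (values invented)
     [0.7, 0.3, -0.2]]   # weights into neuron 1
b = [0.0, 0.1]
h = layer(x, W, b)       # this output becomes the input of the next layer
```

Scale the toy numbers up to 12,288-dimensional vectors and dozens of layers, and the enormous calculation count mentioned below follows directly.
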

The enormous number of calculations – an estimated 7.5 million for a sentence of five words – results in plausible text output and consumes a lot of electric power. The latter is the main cause of the environmental impact of GenAI; the former is the main factor in its attractiveness, not only in language education but also in industry and, increasingly, in society at large.

References

Hutchins, J. (1986). Machine translation: Past, present, and future. Ellis Horwood.

Jones, C. R., & Bergen, B. K. (2024). Does GPT-4 pass the Turing test? arXiv. https://doi.org/10.48550/arXiv.2310.20216

McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955). A proposal for the Dartmouth Summer Research Project on Artificial Intelligence. http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html

Turing, A. M. (1948). Intelligent Machinery (Report for the National Physical Laboratory). Reprinted in D. C. Ince (Ed.), Mechanical Intelligence: Collected Works of A. M. Turing (pp. 107–127). Amsterdam: North‐Holland. 

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460. https://doi.org/10.1093/mind/LIX.236.433