Thinking about AI

Hey, Friend,

Thinking about AI. This is something I have been doing a lot. For more than a year now. Alright. Full disclosure first. I like working with computers. Their handwriting is neater than mine. When I use a command line, the graph I “drew” looks so much better. And most importantly, the computer remembers stuff really well. Sometimes too well.

My interest in language led me to some branches of artificial intelligence. First, I dabbled in natural language processing (NLP). Then I started learning about it. Writing about it. Not as a computer scientist. As a linguist. As a language teacher. That is why I paid attention when Alan Turing, the father of AI, hypothesized that AI could be useful for language learning. From the late 1990s until the 2010s, I talked and wrote about AI, as we knew it then, in the context of language education. The computer as a grammar checker, as a writing aid, or as a reading aid.

It was challenging. Making the computer analyze language – words and sentences first – was difficult enough. Making the computer analyze a sentence a language learner wrote was even harder. Researchers and graduate students tried different approaches to deal with learner text successfully, or at least a little more successfully. But hardly any of the NLP-based programs for language learning ended up in classrooms or on the computers of learners. They were too specialized, too unreliable, too expensive to make, and too cumbersome to maintain. We learnt a lot, though, in this work. About computers. And about language learning.

And then in late 2022, Generative AI became a thing. People noticed. Google Translate and DeepL started producing better translations from many languages into many others. ChatGPT spits out texts in more languages than I can read. These tools are faster than proficient writers and certainly faster than even advanced language learners. And, as ChatGPT itself once told me, the tools are mostly more accurate than the majority of language learners in many languages. Not in all of them.

After having left AI alone for some years, I started reading and learning again. I was lucky: I had done it before. Some things in AI looked familiar; others were completely new to me. What had happened? What is happening? Let’s find out together. It takes more than one post to disentangle the many threads … I have been asked to contribute some chapters on AI to books on language learning and technology in the coming months. The invitations to speak about AI before teachers and applied linguists have started coming in. So, I will be sharing some of my notes and my thoughts in this blog.

Why? The topic of generative AI – and AI as a whole – in relation to language and to learning a language has a lot to do with complexity and change. It is a topic made for this blog. The growth of AI, as we all witnessed it and are witnessing it in the public arena, has been exponential. Exponential growth is a feature of some complex processes …

learning a language with the computer
Photo by Peter Olexa on Pexels.com

What’s AI got to do with it?

Photo by Pavel Danilyuk on Pexels.com

artificial intelligence – natural stupidity
artificial flowers – natural incense – sensible intel

Hey, Friend,

It’s been a while. A while ago – in January, to be precise – I wrote this line: What’s AI got to do with it? Tina Turner was still alive, and the writers of Hollywood were writing, not striking to protect themselves, among other things, from the likes of ChatGPT. I like ChatGPT, like I like gadgets. And I like to think about these things. So, what’s AI got to do with it? And what is ‘it’? Oh, and what is AI? ‘it’ is easy. For me. ‘it’ is language and learning a language. For language and learning, AI and I crossed paths. So what’s AI got to do with that? With language and learning?

In 1948, the English mathematician Alan Turing wrote an essay, “Intelligent Machinery”, envisaging how computers could demonstrate their intelligence, and made a list of five fields. “(i) Various games, for example, chess, noughts and crosses, bridge, poker; (ii) The learning of languages; (iii) Translation of languages; (iv) Cryptography; (v) Mathematics.” I am not sure whether Alan Turing, who is often called the father of Artificial Intelligence, meant by the learning of languages that the computer would learn or that the computer would be intelligent enough to help all of us learn another language. The latter – using computers in different ways to help learn a language – has interested me for many years. It still does. Fascinating! A number-crunching machine deals with language. That’s intelligence. Artificial intelligence. AI. And AI is so much bigger than just language and learning. Different branches in research. Research on intelligent machines – think robots and self-driving cars – on perception – think face recognition and telling a ball from a person’s head – on problem solving – think chess playing and building a schedule for an entire university – on machine learning – think … Oh wait, this has to do with language – the large language models – which means it warrants its own post. And so does the one area that fronts language: natural language processing. Natural? To computer scientists, programming languages must have felt more familiar; to set our languages apart from those, they added the adjective ‘natural’. But that’s a whole other post. Later.

Back to Alan Turing. He mentions language in three of the five: learning of languages, translation of languages, and cryptography. Cryptography. Cracking the code. The code of the Enigma machine. Alan Turing was part of this British effort during World War II. The German Wehrmacht, navy, and air force used this complex mechanical and electrical cipher device for their secret communication. Alan Turing and the cryptanalysts, math geeks, crossword puzzlers, and secretaries working at Bletchley Park near London cracked the code of the Enigma machine. Almost everyone kept quiet about it for 50 years. Didn’t even call a friend. After that war and with the beginning of the Cold War, the Americans wanted machines to translate Russian texts quickly, and the Soviets pointed their machines at English. Machine translation. Artificial intelligence. And some computer scientists believed that translating a language was like cracking a secret code. Breaking the Russian code. But you can’t break a language. They realized that soon. You couldn’t use math to read Pushkin’s poetry, Tolstoi’s novels, or – in those days – the KGB reports and instructions. Now you can. With ChatGPT. It just uses math to write you a text. Translate a text. Fake a text. Many people find the name ChatGPT too long; they just call it AI. That’s not wrong, but it’s not right either. Yes, intelligent machines, perception, and problem solving are AI. ChatGPT is one example of a small, small area of AI. Generative AI. It generates. It’s the AI that can produce texts, images, audio, and other synthetic data. Synthetic. Like a nylon shirt.

And now I am digressing. So, let’s stop here and come back to reading text, writing text, understanding text. Soon. I hope. I have not been the most regular. With writing. This blog. Maybe I should get tech help. From ChatGPT. But that would be synthetic.