Form and meaning

Hey, Friend,

Have a very good morning. It’s only 7am. I don’t write at this time. Normally. I should be asking ChatGPT or the other one that got a new name. Gemini. Are there two of them now? Or does the Bard want to be my twin? Don’t smirk; no comment. 

I’d rather continue writing about AI. It is more fruitful than writing with AI. Why? There is lots to say and lots to learn. Can you learn from it? Learn from ChatGPT? If you’d like. But it is just form. No meaning. And often we learn from meaning. Are you with me? Time to start at the beginning. 

Photo by Markus Winkler on Pexels.com

Form and meaning. In language. Let’s take a word. It’s a sign. A linguistic sign. Let’s not get too technical, there is no need for that in the early morning. Linguistic just means: of language. A linguistic sign is a sign in language. It can be a word. It can be smaller than a word. A meaningful part of it. (Yes, some of you know: it’s called a morph.) It can be larger than a word: a phrase, a sentence, a saying, a paragraph, a text, and yes, a whole novel.

Alright, back to the word. Like all other signs in language it has form and meaning. The form is material: some ink spilled, a sound wave, dark pixels on your screen. The meaning is not. Not material. This means we cannot sense it. We make sense of it. We cannot hear meaning, we cannot see it with our eyes, we cannot smell meaning, taste it, touch it. We make it. We impute it to the form. It is us who imbue the form with meaning. ChatGPT just gives us forms. Many forms. And quickly. I make the meaning when I read it. It did not mean anything. (And computer scientists, linguists, and AI specialists can tell us that there is a reason for that. But that will have to wait for another post on this blog.)

Where were we? The sign. Form. And meaning. More than a hundred years ago, Ferdinand de Saussure told his students that the sign can be drawn as a triangle. An incomplete triangle: the sign on top and nothing at the bottom. Form and meaning at the bottom corners of the triangle and no line – no connection – between them. So, there is no connection between form and meaning? Wrong. There is. Look at the top. The sign. The sign connects form and meaning. And only the sign connects. If the form is not in a sign, it is not connected to meaning. If the meaning is not wrapped in a sign, it has no connection to any form. Who makes these signs? We do. Every day. Every moment. Every time. Every time we use a word or any smaller or larger linguistic sign. It’s use. Meaning is use.

The way we use a word, the context we use the word in – that’s its meaning. Alright, it gets quite complex. And ChatGPT can’t do it. Even if Gemini comes to its aid. ChatGPT does not use words. (Yes, there is tech behind that, too. Some other time; not that early in the morning.) We use the words when we read what the chat says, what it fished out of the Large Language Model. Yes, when we use ChatGPT, we – and only we – make meaning. Conventionally. But convention is another topic …
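To make “form without meaning” concrete, here is a deliberately tiny sketch (my own toy, nothing like the actual machinery behind ChatGPT): a bigram chain that strings word forms together purely by counting which form followed which. It manipulates forms only; whatever meaning the output has, the reader supplies.

```python
import random
from collections import defaultdict

# A tiny corpus of word forms. The model never sees "meaning",
# only which form follows which form.
corpus = "the sign connects form and meaning the sign is use the meaning is use".split()

# A bigram table: pure co-occurrence of forms.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n, seed=0):
    """Chain up to n word forms together by statistics alone."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("the", 8))
```

The output looks like language because its forms come from language. Any sense you make of it is yours, not the program’s.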

Thinking about AI

Hey, Friend,

Thinking about AI. This is something I have been doing a lot. For more than a year now. Alright. Full disclosure first. I like working with computers. Their handwriting is neater than mine. When I use a command line, the graph I “drew” looks so much better. And most importantly, the computer remembers stuff really well. Sometimes too well. My interest in language led me to some branches of artificial intelligence. First, I dabbled in natural language processing (NLP). Then I started learning about it. Writing about it. Not as a computer scientist. As a linguist. As a language teacher. That is why I paid attention when Alan Turing, the father of AI, hypothesized that AI can be useful for language learning. From the late 1990s until the 2010s, I talked about and wrote about AI, as we knew it then, in the context of language education. The computer as a grammar checker, as a writing aid, or as a reading aid. It was challenging. Making the computer analyze language – words and sentences first – was difficult enough. Making the computer analyze a sentence a language learner wrote was even harder. Researchers and graduate students tried different approaches to deal with learner text successfully, or at least a little more successfully. But hardly any of the NLP-based programs for language learning ended up in the classrooms or on the computers of learners. They were too specialized, too unreliable, too expensive to make, and too cumbersome to maintain. We learnt a lot, though, in this work. About computers. And about language learning.
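For flavor, a sketch of the kind of brittle, hand-written rule those early tools relied on (the rule and the examples are my own invention, not any particular system’s): one naive subject-verb agreement check, blind to everything else.

```python
# A toy rule-based checker in the spirit of 1990s CALL tools:
# one hand-written subject-verb agreement rule, nothing more.
SINGULAR = {"he", "she", "it"}

def check(sentence):
    """Flag 'he/she/it' followed by a word not ending in -s (very naive)."""
    words = sentence.lower().rstrip(".").split()
    issues = []
    for subj, verb in zip(words, words[1:]):
        if subj in SINGULAR and not verb.endswith("s"):
            issues.append(f"possible agreement error: '{subj} {verb}'")
    return issues

print(check("He write many letters."))   # flags 'he write'
print(check("He writes many letters."))  # no issues
```

One rule, many gaps, and it misfires on “It often rains” (flagging ‘it often’). Scaling this to real learner text meant hand-writing thousands of such rules, which is where the cost and the fragility came from.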

And then in late 2022, Generative AI became a thing. People noticed. Google Translate and DeepL started producing better translations from many languages into many others. ChatGPT spits out texts in more languages than I can read. These tools are faster than proficient writers and certainly faster than even advanced language learners. And, as ChatGPT itself once told me, the tools are mostly more accurate than the majority of language learners in many languages. Not in all of them.

After having left AI alone for some years, I started reading and learning again. I was lucky: I had done it before. Some things in AI looked familiar; others were completely new to me. What had happened? What is happening? Let’s find out together. It takes more than one post to disentangle the many threads … I have been asked to contribute some chapters on AI to books on language learning and technology in the coming months. The invitations to speak about AI before teachers and applied linguists have started coming in. So, I will be sharing some of my notes and my thoughts in this blog.

Why? The topic of generative AI – and AI as a whole – in language and language learning has a lot to do with complexity and change. It is a topic made for this blog. The growth of AI, as we all have witnessed and are witnessing it in the public arena, has been exponential. Exponential growth is a feature of some complex processes …

learning a language with the computer
Photo by Peter Olexa on Pexels.com

What’s AI got to do with it?

Photo by Pavel Danilyuk on Pexels.com

artificial intelligence – natural stupidity
artificial flowers – natural incense – sensible intel

Hey, Friend,

It’s been a while. A while ago – in January, precisely – I wrote this line: What’s AI got to do with it? Tina Turner was still alive, and the writers of Hollywood were writing, not yet striking to protect themselves from, among other things, the likes of ChatGPT. I like ChatGPT, like I like gadgets. And I like to think about these things. So, what’s AI got to do with it? And what is ‘it’? Oh, and what is AI? ‘it’ is easy. For me. ‘it’ is language and learning a language. Over language and learning, AI and I crossed paths. So what’s AI got to do with that? With language and learning?

In 1948, the English mathematician Alan Turing wrote an essay envisaging how computers could demonstrate their intelligence and made a list of five fields: “(i) Various games, for example, chess, noughts and crosses, bridge, poker; (ii) The learning of languages; (iii) Translation of languages; (iv) Cryptography; (v) Mathematics.” I am not sure whether Alan Turing, who is often called the father of Artificial Intelligence, meant by ‘the learning of languages’ that the computer would learn or that the computer would be intelligent enough to help all of us learn another language. The latter – using computers in different ways to help learn a language – has interested me for many years. It still does. Fascinating! A number-crunching machine deals with language. That’s intelligence. Artificial intelligence. AI.

And AI is so much bigger than just language and learning. Different branches in research. Research on intelligent machines – think robots and self-driving cars – on perception – think face recognition and telling a ball from a person’s head – on problem solving – think chess playing and building a schedule for an entire university – on machine learning – think … Oh wait, this has to do with language – the large language models – which means it warrants its own post. And so does the one area that puts language front and center: natural language processing. Natural? To computer scientists, programming languages must have felt more familiar; those needed no extra adjective, so they reserved ‘natural’ for our languages. But that’s a whole other post. Later.

Back to Alan Turing. He mentions language in three of his five fields: learning of languages, translation of languages, and cryptography. Cryptography. Cracking the code. The code of the Enigma machine. Alan Turing was part of this British effort during World War II. The German Wehrmacht, navy, and air force used this complex mechanical and electrical cipher device for their secret communication. Alan Turing and the cryptanalysts, math geeks, crossword puzzlers, and secretaries working at Bletchley Park near London cracked the code of the Enigma machine. Almost everyone kept quiet about it for 50 years. Didn’t even call a friend.

After that war and with the beginning of the Cold War, the Americans wanted machines to translate Russian texts quickly, and the Soviets pointed their machines at English. Machine translation. Artificial intelligence. And some computer scientists believed that translating a language was like cracking a secret code. Breaking the Russian code. But you can’t break a language. They realized that soon. You couldn’t use math to read Pushkin’s poetry, Tolstoy’s novels, or – in those days – the KGB reports and instructions. Now you can. With ChatGPT. It just uses math to write you a text. Translate a text. Fake a text.

Many people find the name ChatGPT too long; they just call it AI. That’s not wrong, but it’s not right either. Yes, intelligent machines, perception, and problem solving are AI. ChatGPT is one example of a small, small area of AI. Generative AI. It generates. It’s the AI that can produce texts, images, audio, and other synthetic data. Synthetic. Like a nylon shirt.
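The code-cracking intuition is easy to demonstrate on a toy scale. Here is a sketch (with a Caesar cipher standing in for the Enigma, which was incomparably more complex): a cipher has a key that letter statistics can recover mechanically; a language has no such key.

```python
from string import ascii_lowercase as ALPHA

def caesar(text, shift):
    """Shift each letter by `shift` places (a toy cipher, nothing like Enigma)."""
    return "".join(
        ALPHA[(ALPHA.index(c) + shift) % 26] if c in ALPHA else c
        for c in text.lower()
    )

def crack(ciphertext):
    """Recover the key by assuming the most frequent letter stands for 'e'."""
    letters = [c for c in ciphertext if c in ALPHA]
    most_common = max(set(letters), key=letters.count)
    return (ALPHA.index(most_common) - ALPHA.index("e")) % 26

secret = caesar("meaning is use and meaning needs a reader", 7)
key = crack(secret)
print(caesar(secret, -key))  # the key falls out; the meaning still doesn't
```

The trick works because letter frequencies survive the cipher. Word frequencies survive translation too, which is why the analogy tempted early machine-translation researchers, and why it broke down: recovering a key is not the same as recovering meaning.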

And now I am digressing. So, let’s stop here and come back to reading text, writing text, understanding text. Soon. I hope. I have not been the most regular. With writing. This blog. Maybe I should get tech help. From ChatGPT. But that would be synthetic.