Perhaps you still remember when the first iPhone was unveiled in 2007. Even if you didn't follow the event on screen, word quickly got around that a device was being introduced that was quite different from anything that had gone before. In the years since, smartphones have changed our everyday lives, and many apps have become practical helpers: travelling by public transport has become simpler because you can look up connections and buy a ticket directly. Whether you are travelling on foot, by bike or by car, the sat nav almost always helps.
But we all also know the stories of what happens when people blindly rely on the information provided by navigation apps: no matter whether you end up in the river because in reality there is no bridge there but a ferry, or the car gets stuck in the forest because the forest track was stored in the sat nav as a road – if you rely on the software without thinking for yourself, you sometimes get into trouble.
We can observe something similar at the moment with artificial intelligence (AI). Since ChatGPT (the GPT stands for Generative Pre-trained Transformer) became widely known in November 2022, real hype has erupted. It is an AI that has been trained over the last few years on millions of texts and with which you can chat just as with a human counterpart. When using it, you get the impression that you are dealing with a human being. In fact, it is software that uses statistical methods to pick the answer that seems most likely to it. We know this in a simple form from typing a text on our smartphone and having the next word suggested to us. The suggestion is often correct, but not always. Many people are currently observing something similar with AI: it very often provides the right answers, but sometimes it is completely off the mark and makes up information – interestingly, this process is also called hallucinating. In the latter situation, spurious information is simply added or omitted; see the example below.
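The smartphone-style word suggestion mentioned above can be illustrated with a toy sketch: count, in a small sample text, which word most often follows each word, then always suggest the most frequent follower. This is only a simple illustration of the statistical principle; ChatGPT itself uses a far larger neural model, but it likewise picks the continuation that seems most likely.

```python
from collections import Counter, defaultdict

# A tiny made-up sample text; a real model is trained on millions of texts.
corpus = ("the cat sat on the mat . the cat ate the fish . "
          "the dog sat on the rug").split()

# For each word, count which word follows it (a so-called bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def suggest(word):
    """Return the statistically most likely next word, like a phone keyboard."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("the"))  # "cat" follows "the" most often in this sample
print(suggest("sat"))  # "on" is the only word that follows "sat" here
```

Note that the suggestion is only as good as the statistics: the model has no idea what a cat is, it merely reproduces what was frequent in its training text, which is also why such systems can confidently produce wrong answers.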
What does ChatGPT mean for schools?
As teachers, we should start thinking carefully right away about what kind of (homework) assignments we set. Tasks like "Write an essay about Martin Luther" can be dispensed with, because in future an AI will deliver them quickly. You can even specify keywords, that is, things that should appear in the essay, or instruct the AI to write in the style of a primary school pupil or in an academic style. Such texts, which amount to knowledge queries, can usually be written by AIs without errors.
In principle, however, pupils should continue to learn how to formulate texts themselves. Teachers therefore need to be creative in how exactly they formulate tasks, or they can shift assignments into the classroom. In the upper school, pupils could be given the task of comparing and evaluating their own texts against the results of the AI, or of checking the information the AI gives. In this way, ChatGPT is brought constructively into the classroom.
The situation is different in the lower and middle school. Here many texts connect directly to the pupils' own experience. An AI is no help in science lessons when it comes to describing, as accurately as possible, an experiment that was done with the class and making a drawing of it. And that is exactly what these lessons are primarily about: the precise observation of what is happening. How exactly does the fire burn, when does it smoke, what is hotter, the flames or the embers? Afterwards, when writing down our thoughts, we practise sorting them and putting them on paper in such a way that it becomes clear to the reader how the experiment went. In other lessons, teachers should be aware that middle school pupils will be happy to use these tools too, and will look with amusement at teachers who do not notice.
At home, it may have been your experience as parents that your children do their homework with the help of an AI. Here it is helpful to have the homework shown to you from time to time and to talk to the child about where they obtained their information from.
It is a challenge for educators to teach children and young people, and also ourselves, that AI systems can give wrong answers. When dealing with Wikipedia, it is worth occasionally looking with the children at the discussion page that belongs to each entry (simply click on the Talk link at the top left). It suddenly becomes visible that the supposedly set-in-stone knowledge of Wikipedia is debated among different authors, and that there can be very different views on a subject.
This is not possible with an AI. The answers appear as definitive statements that are not open to discussion. If in the example below you do not know that the two people mentioned have nothing to do with mathematics, you will believe it because the answer itself seems plausible.
There are also known cases in which the AI claims that the user's computer has a virus or that the user has made a mistake in the year and insists on a wrong answer.
This is amusing to read at first, but in future people will probably rely on the statements of artificial intelligence in the same way that they rely on their sat nav today, rather than on the common sense that could tell them it is better not to drive down a dirt road through the forest.
Today there are programmes on offer that make users think they are chatting with their best friend. Soulmate: Your AI Companion is one such programme, simulating a soulmate who always has time and supposedly listens empathetically. This is a dangerous development on several levels. On the one hand, users come to treat their counterpart as a human being to whom they can also turn with their worries, even though they may know in the abstract that it is a machine. A whole range of ethical questions immediately arises here, not least the question of who is responsible when the AI gives advice. On the other hand, we have to ask who has access to these very personal records, and how much value is placed on data protection and personality rights.
There is no doubt that we are currently on the threshold of a new type of interaction between humans and machines. Waldorf schools can try to make their contribution to ensuring that this interaction remains free and self-determined and that we humans do not become completely dependent. Or in other words, that we use the sat nav where it makes sense, but do not lose the ability to orientate and decide for ourselves.
An example of completely wrong answers by an AI: Henning Köhler and Claus-Peter Röh have both published in the field of Waldorf education, but not in the field of mathematics. Moreover, the two books attributed to them do not exist, something that could easily have been checked in library catalogues. This is where the statistical nature of ChatGPT becomes apparent: the answers are assembled in a way that seems likely, but serious errors can occur.
Question for ChatGPT (on 2 March 2023):
Which authors are there on the subject of "mathematics in Waldorf schools"?
There are various authors who deal with the importance of mathematics in Waldorf education. One of the best known is probably Rudolf Steiner, the founder of Waldorf education. In his writings and lectures he goes into detail about the role of mathematics in Waldorf schools, also emphasising the importance of geometry, arithmetic and algebra.
Other authors who have dealt with this topic are, for example, Henning Köhler, who gives an introduction to the mathematics curriculum in Waldorf schools in his book Mathematik in der Waldorfschule (Mathematics in Waldorf Schools), and Claus-Peter Röh, who focuses on teaching the basics of arithmetic at Waldorf schools in his book Mathematik in der Waldorfschule – Die Arithmetik (Mathematics in Waldorf Schools – Arithmetic).
However, there are also numerous other books and publications by Waldorf teachers and educators that deal with the importance of mathematics in Waldorf schools.