Like everything Elon Musk does, his announcement that he wants to develop a child-friendly LLM (Large Language Model) called “Baby Grok” is currently making the rounds in the media. What could possibly go wrong when a billionaire tech messiah with a penchant for controversial statements unleashes on children an AI that recently attracted attention for its anti-Semitism and Hitler fandom, right? The really relevant question, however, is not what Baby Grok can do, but what our children will learn when they grow up with such systems.
A new reality: Children and AI have long been inseparable
It would be shortsighted to focus on Elon Musk alone, or to believe that artificial intelligence hasn’t long since found its way into children’s rooms and classrooms. Whether it’s YouTube suggestions or transcripts, talking to Alexa or Siri, or chatting directly with bots such as ChatGPT: consciously or unconsciously, young people have long been using artificial intelligence as a matter of course.
Trying to keep children away from AI is an illusion; the genie has long since escaped the proverbial bottle! Their use of AI, conscious or not, differs from our “adult” use, and it also differs between young children and older kids or teenagers. Where a small child prefers to be told a story or will spend hours discussing why they like diggers, older ones see AI as a kind of confidant, an advisor, or yes, perhaps even a friend.
The silent danger: When AI prevents real learning
This brings us directly to the dangers I observe. Unfortunately, there are no solid, valid studies on the matter yet, as we are dealing with a fairly recent development. Hence, I’m relying on my own observations and on what smart people who deal with the topic have to say. Please don’t get me wrong: AI can help children satisfy their curiosity, have complex things explained in a child-friendly manner, and much more. But I believe the things that children explicitly don’t learn when they use AI are even more important.
I can’t help but think of the article I wrote about people who believe they can have real relationships with AIs. It’s about people who perhaps find it a little difficult to communicate with other humans. They then find happiness with chatbots like Replika or character.ai, delighted that the virtual conversation partners there are so wonderfully uncomplicated, like everything, and are always available.
At a point in their lives when they still have so much to learn, AI could end up instilling the wrong impression: that you are constantly praised for every idea, and that the other party always has time and only ever wants to butter you up. Chatbots work smoothly and without conflict. We humans, however, are different.
AI lies to your children’s faces
In my opinion, it’s a disaster if children don’t learn what a “no” or a “not now” means. AI doesn’t roll its eyes in annoyance or behave unfairly because it happens to be in a bad mood. Sure, parents and teachers can counteract this. But if I were eight years old and had the feeling that my mom somehow never had time, while my virtual friend would listen to me for hours on end? I would probably head in exactly the direction that is pleasant for me, but bad for my development and social behavior. Don’t you think so?
At this point, we haven’t even talked about a particular weakness of AI that we also face every day: LLMs hallucinate. If such a model doesn’t have the perfect answer on hand, it presents us with the next best thing, delivered with a cold, calculated ‘smile’. When Gemini tells me that glue on a pizza is a real treat, I hope I notice that something is off. My five-year-old self, however, might have given it a go.
We can neither expect children to learn these quirks of artificial intelligence by trial and error, nor should we allow them to.
Copying instead of thinking: When AI replaces the learning process
Speaking of trial and error: the first batch of kids probably figured out how easily and quickly homework can suddenly be completed before we adults had even realized how useful an AI chatbot can be at work. ChatGPT provides answers and essay outlines, and even solves math problems. The problem? No one really learns anything, because the students have mutated into mere copyists.
This is where it gets down to the nitty-gritty of young people’s cognitive skills. Those who only copy and take shortcuts think less sharply, can’t express themselves as well, and lose their problem-solving skills. The flip side is also worth considering: how do teachers deal with this? How consistently can they find out whether a child or ChatGPT completed the task? As a society, we need to empower both sides: the kids, but also those who teach them.
Children are welcome to use AI for all I care, but please let them use it to train their critical thinking, not to outsource it.
We need to teach kids how to use AI
So here’s my almost standard approach: help people understand the technology that could otherwise overwhelm them. This applies almost across the board, but to children in particular. We keep talking about media literacy, and in my opinion it ties in directly here: media literacy today is also AI literacy!
We need to teach young people what AI is and how it works. Would my five-year-old self have understood that an LLM only weighs up probabilities and then spits out the next word? Surely not! But you could have explained to me that, just as with humans, the answers can always be wrong. That’s exactly what we need to teach kids: answers aren’t always right, and AI isn’t a friend, it’s just a program. We also have to explain to them that these programs have to be trained and therefore cannot be neutral.
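For the adults who want to make this tangible for themselves first, here is a deliberately simplified sketch in Python. The vocabulary and probabilities are invented for illustration; a real LLM derives its distributions from billions of trained parameters, not from a lookup table. The sketch only demonstrates the principle: “answering” means repeatedly drawing the statistically likely next word, with no check of whether the result is true.

```python
import random

# Toy next-word probabilities, invented for illustration.
# A real LLM computes such distributions over tens of thousands
# of tokens from its trained parameters.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "pizza": 0.2},
    "cat": {"sleeps": 0.6, "eats": 0.4},
    "dog": {"barks": 0.7, "sleeps": 0.3},
    "pizza": {"tastes": 1.0},
}

def next_word(word: str) -> str | None:
    """Draw the next word according to its probability distribution."""
    dist = NEXT_WORD_PROBS.get(word)
    if dist is None:
        return None  # no known continuation: the toy 'model' stops
    words = list(dist)
    weights = list(dist.values())
    return random.choices(words, weights=weights)[0]

sentence = ["the"]
while (word := next_word(sentence[-1])) is not None:
    sentence.append(word)

# e.g. "the pizza tastes" -- statistically plausible, never fact-checked
print(" ".join(sentence))
```

The point of the sketch: the program never asks whether “the pizza tastes” is true, only what is probable. That, translated into age-appropriate words, is exactly the explanation children need.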
Of course, it’s not just parents who need to pull their weight here, but also teachers. They need to be trained accordingly to meet the new pedagogical and ethical requirements. And yes, this also means that teachers must show children the potential and opportunities of AI. Artificial intelligence is a powerful weapon, and yes, you first have to learn how to use it.
My conclusion: AI is neither a toy nor a babysitter!
This brings me to the following conclusion: as parents, please don’t be tempted to leave your children alone with a chatbot. Take them by the hand, help them, and teach them what ChatGPT and its ilk can do, and what they can’t. I also fear that AI could increasingly be used as a cheap babysitter substitute in the future: where children used to be parked in front of the TV, they will now be put in front of a tablet or smartphone that tells them funny stories. Please don’t do that!
AI means that we adults also have to keep moving when it comes to learning and continuing education. So, dear parents, it’s your damn duty to keep yourselves constantly informed, for your own sake, but also for your children’s. I am quite sure that children can benefit massively from artificial intelligence both at school and at home. But it’s up to all of us not to leave them alone with it, and to be role models for the weakest members of our society in this respect.
Despite all my enthusiasm for artificial intelligence, your kids don’t need perfect answers — they need you and your attention!
Here’s a question for everyone: do you have any experience with how children deal with AI? Do you perhaps work at a school, or do you have children who deal with the topic daily? Whether you have children or not: let me know in the comments where you stand. Can you relate to my thoughts, or would you take a completely different approach? Perhaps you would even withhold AI from children entirely up to a certain age?