ChatGPT for kids: irresponsible or safe?

7 min read

AI-generated image of a girl playing football

AI rendered image, unknown prompt by my daughter. Incidentally, it’s a perfect illustration of how kids can get hurt if adults just stand by and think the net will protect them when their head inevitably hits the pole.

I’ve started an experiment with high stakes. I let my 10-year-old daughter create an OpenAI account and install ChatGPT on her phone. How can that possibly go wrong? In an insane number of ways! On the surface, it seems quite irresponsible — but there are different ways to be a responsible parent.

The trap of letting technology guard against technology

Age restrictions, screen time and content moderation are great tools. But too many parents treat this as the only line of defense. They set up these guardrails and then let their kids explore the internet on their own.

Guess what?
Kids are smarter than you! They will find a way around these limits. 1

Does that mean I let my 10-year-old daughter sit inside all day gaming, eating potato chips in bed, talking to middle-aged men online while she’s selling drugs on the dark web? Of course not! She is not allowed to eat chips in bed. (“ha ha” in dad-joke voice)

What she has learned

The thing is, if I explore technology together with my daughter, we can have nuanced discussions about the risks. Since I installed ChatGPT on her phone, she has become much better prepared than her peers, who will probably start using it without any guidance.

There comes a time when you cannot control everything your kids do, so prepare them with knowledge.

Trust

We’ve discussed how ChatGPT works: how it’s just a computer outputting the most likely next word. This has made her genuinely skeptical of the quality of the output. Instead of telling me *ChatGPT says…*, she occasionally says *ChatGPT thinks…*. A very subtle difference, but it really drives home the notion that ChatGPT does not tell facts.
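The “most likely next word” idea can be sketched with a toy bigram model. This is a deliberately tiny stand-in, not how GPT actually works internally, and the corpus is made up for the example:

```python
from collections import defaultdict, Counter

# A made-up mini "training corpus"
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which, just like a (very crude) language model
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the single most probable next word after `word`."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("the"))  # "cat" follows "the" most often -> cat
```

The model has no idea what a cat *is*; it has only counted which words tend to follow which. Real models are vastly bigger, but the output is still a statistical guess, not a fact.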

Imagination

She’s made some pictures. Sometimes they come out terrible, and we discussed why it’s easier to get a picture of dogs in a water slide than a realistic picture of her playing soccer. She has shown interest in how diffusion models are trained 2 and what they “know” about the pictures they are generating.

AI-generated picture of dogs riding down a water slide

Unknown prompt but I believe it was quite short. Probably "Generate an image of dogs in a water slide."
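For the curious, the core training trick behind diffusion models can be hinted at in a few lines: mix a known amount of random noise into a clean image, then train a network to predict that noise. This hedged sketch shows only the noising step on a fake 4×4 “image”; all numbers are arbitrary and real models operate on far larger tensors.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((4, 4))           # pretend this is a training picture
noise = rng.standard_normal((4, 4))  # random noise the model must learn to find

alpha = 0.7  # how much of the original image survives at this diffusion step
noisy = np.sqrt(alpha) * image + np.sqrt(1 - alpha) * noise

# A real model would now be trained so that model(noisy, step) ≈ noise.
# Generation runs the idea in reverse: start from pure noise and
# repeatedly subtract the predicted noise until an image emerges.
```

This is also why prompts like “dogs in a water slide” work better than a specific, real person playing soccer: the model can only denoise toward patterns it has seen many times in training data.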

Language

“I have keys but no locks. I have space, but no room. You can enter, but not leave. What am I?”

She likes to have ChatGPT tell jokes, but more often than not, they are not funny. We investigated this and found that most jokes are built around English puns, idioms, or words with double meanings. The joke simply gets lost in translation. This has sparked her interest in how translation cannot be done word for word. I’ve seen her use this knowledge in her homework.

The riddle is quite clever, but when ChatGPT translates it into Norwegian, it becomes:

“Jeg har knapper, men ingen låser. Jeg har plass, men ingen rom. Du kan gå inn, men ikke gå ut. Hva er jeg?”

Translated back, that is roughly: “I have buttons but no locks. I have place, but no room. You can go inside, but not leave. What am I?” It just doesn’t work: “knapper” means buttons, not keys, so the pun disappears.

Knowledge

Asking ChatGPT for facts about whatever topic you’re interested in is a great way to learn. Combined with a skeptical mindset, I’d say it’s quite safe to ask general questions like “What is the largest animal?” or “I want to know some facts about cats”. 3

When she recited the cat facts to me, she actually didn’t believe 2 out of 5 of them. I knew they were all true, but I encouraged her skepticism, and together we investigated the claims and confirmed them. I think this is an excellent way of learning.

Privacy

Kids know they shouldn’t disclose their full name and address online, but there are other risks. I think the illusion that ChatGPT is a person can lead us to expose too much information. The risk of ChatGPT using this for something nefarious is low. Personalized ads and data mining are the most likely threat - evil in its own way, but not dangerous.

But if someone can log in to your account and read your chat history, they can get to know your inner thoughts. We have talked a lot about this, and about how it’s OK to lie to ChatGPT, because it will be lying to you all the time.

If you go to Settings -> Personalization -> Manage saved memories, you can see what ChatGPT uses to give you the answers you want.

  • The user plays soccer for REDACTED and the team has white shirts.
  • The user wants to hear riddles, but has admitted to cheating on some of them.
  • The user has a favourite character on YouTube, REDACTED, and the favourite episode is S2E5
  • The user is a child, 10 years old
  • The user speaks Norwegian
  • The user only wants to hear information based on facts and science
  • The user has 2 cats

Some of these are memories I have set manually on her account as a safeguard 4. We inspect the list monthly, and I’ve told her to avoid disclosing — or to actively lie about — anything related to her location or real names.

Safeguards

Again, I don’t believe the first line of defense against technology should be technology. It should be involvement and shared exploration. Technological safeguards like age restrictions, firewalls, and content moderation are good supplements in the background: a safety net for when the ropes and carabiners fail.

Human interaction is paramount, even in the age of AI.

Footnotes

  1. When I was 12 years old, I was not allowed to play Doom. My religious mother found the gore and devilish monsters too realistic on the brand-new 486 computer. She made me delete it right away. And I did. I deleted the desktop shortcut and just launched it from c:\games\doom\doom.exe

  2. I do suspect her interest was related more to staying up late that evening than to the technical side, but I will gladly be manipulated on those terms. Our Stable Diffusion masterpiece was made with DiffusionBee, using text-to-image and inpainting.

  3. She can be quite rude and demanding towards ChatGPT. As apparently you should be, since we waste so much energy just by adding friendly phrases such as “thank you” and “could you please help me with…”.

  4. It just hit me that if everyone added a prompt setting saying they do not want fluff and filler sentences, we would be saving the environment!