The Risks of Generative AI for Kids

Over the past few years, generative AI has evolved rapidly, transforming how machines interact with and understand humans. The arrival of OpenAI’s ChatGPT has put a spotlight on generative AI (gen AI). It’s important to understand that there is a difference between traditional AI and gen AI.

The Difference Between Traditional AI and Gen AI

Traditional AI (artificial intelligence) follows a set of rules to help with specific tasks, such as answering questions or giving recommendations, but it doesn’t create new content. Rule-based chatbots are an example of traditional AI.

Generative AI (gen AI) is artificial intelligence that creates original content, like writing stories, generating artwork, or composing music, based on the data it has learned from. ChatGPT is an example of generative AI.
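To make the contrast concrete, below is a minimal, illustrative Python sketch (a toy, not any real product’s code). The first chatbot can only retrieve canned answers from a fixed rule table; the second generates new sentences by sampling word patterns learned from a tiny training text, which is the core idea behind generative models in vastly simplified form.

```python
import random

# --- Traditional AI: a rule-based chatbot ---
# It can only return answers a human wrote in advance.
RULES = {
    "hello": "Hi there! How can I help?",
    "hours": "We are open 9am-5pm, Monday to Friday.",
}

def rule_based_reply(message: str) -> str:
    for keyword, canned_answer in RULES.items():
        if keyword in message.lower():
            return canned_answer
    return "Sorry, I don't understand that question."

# --- Generative AI (toy version): a word-level Markov chain ---
# It learns which word tends to follow which, then samples
# new sentences that nobody wrote.
def train(text: str) -> dict[str, list[str]]:
    words = text.split()
    model: dict[str, list[str]] = {}
    for current, following in zip(words, words[1:]):
        model.setdefault(current, []).append(following)
    return model

def generate(model: dict[str, list[str]], start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        choices = model.get(word)
        if not choices:
            break
        word = random.choice(choices)
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    print(rule_based_reply("hello bot"))   # always the same canned line
    corpus = "the cat sat on the mat and the cat ran to the hat"
    print(generate(train(corpus), "the"))  # a newly sampled sentence
```

Real systems like ChatGPT replace this word-frequency table with large neural networks trained on far more data, but the basic distinction holds: rule-following tools retrieve, while generative tools produce new content.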


Teens and AI

Popular gen AI tools that teens are already using include ChatGPT and Google’s Gemini, which produce text responses; DALL-E, which creates images; and Snapchat’s My AI chatbot. Many companies are also introducing AI into their services. Khan Academy, for example, has added a personal tutor to its platform, honed from a general-purpose chatbot into one particularly strong at learning conversations.

Seven in ten teenagers in the United States have used generative AI tools, according to a recent report by Common Sense Media.

The Risks of AI

  1. Most gen AI tools require users to be 13 or older, but they typically have no reliable way of confirming a user’s age, and they generally lack parental controls.
  2. Kids are using AI for school assignments or to learn new information without their parents’ or teachers’ knowledge, which can be a cause for concern.
  3. AI can spread persuasive disinformation and harmful and illegal content at a larger scale and lower cost. Generative AI can instantly create text-based disinformation that is indistinguishable from human-generated content and more persuasive in swaying people’s opinions.
  4. As they’ve done with numerous other technologies, bad actors may exploit generative AI models to perpetrate, proliferate, and escalate sexual harm against children. Examples include:
    • Using generative AI, perpetrators may adapt original child sexual abuse material (CSAM) images and videos into new abuse material, or manipulate benign material of children into sexualized content.
    • Generative AI text models allow bad actors to rapidly scale child grooming conversations, as well as the sextortion of child victims, often carried out by organized crime.
    • A recent FBI alert noted an uptick in reports of sextortion, including of minors, using AI-generated images. Bad actors take existing photos (for example, from social media accounts) and use them to generate sexually explicit images (“deepfakes”) with which to harass and blackmail victims.
    • Reports of other types of scams are also emerging. Synthetic voice models are being used to con victims by impersonating real people, such as relatives, and requesting money.
  5. Children are particularly vulnerable to the risks of mis- and disinformation, and a more uncertain, corroded, and harmful information ecosystem is of great concern. Research indicates these risks may influence children’s perceptions and attributions of intelligence, their cognitive development, and their social behavior, especially at different developmental stages.
  6. Bullying and harmful behavior: Kids can misuse gen AI to create fake or harmful content, like deepfakes or mean messages, and to tease or bully others.
  7. Generative AI confidently makes up false information (“hallucinations”). What impact does this have on children’s understanding and education, especially if they become increasingly reliant on chat-enabled tools? Generative AI has also produced dangerous content involving violence, sexually explicit material, illicit drug use, child sexual abuse, bullying, and hate speech.
  8. In one widely reported incident, Amazon’s Alexa advised a child to touch a coin to a partially inserted electrical plug. Snapchat’s My AI gave inappropriate advice to reporters posing as children.
  9. More broadly, as children share personal information and data in conversations and interactions with generative AI systems, what are the implications for their privacy?

Examples:

1. Inappropriate Content:

   – A child uses a GenAI-powered chatbot for homework help, but the bot inadvertently generates or shares violent or explicit content due to insufficient content moderation.

2. Misinformation:

   – A child searches for medical advice using an AI tool and receives inaccurate health information that could lead to improper treatment or unnecessary worry.

3. Manipulation and Exploitation:  

   – A predator uses GenAI to generate fake messages that mimic another child, tricking a young user into sharing personal information or meeting in person.

4. Privacy Concerns:

   – A child interacts with a GenAI app that requires personal information, but the data is not securely stored or is shared without proper consent, leading to privacy breaches.

Tips for Talking with Kids About Gen AI

  • Explore the tools your child is using, and show them how to ask good questions or learn something new. For example, if they are working on writing an email, guide them on how to take important notes and rephrase the content in their own words.
  • Explain to your child that these tools aren’t always accurate; their answers are based on data from the internet, which can contain errors. That’s why they should always check a trusted source (books, articles, reputable websites). You can also check together with them.
  • Make sure your child knows they should come to you if they encounter anything disturbing, such as harmful advice or inappropriate images. 
  • Explain to your child that they shouldn’t share private information while using these AI tools (like their phone number, credit card number, full name, or home address), since that information can be misused by bad actors.
  • Warn your child about people who pretend to be their friends online but ask them to do strange things, such as sending inappropriate photos or sharing sensitive information. If this happens, they should come to you immediately; it might be someone trying to impersonate a friend.