Children, AI, and Safety: A Guide for Australian Parents

Artificial Intelligence (AI) tools have rapidly entered children’s lives—at home, in classrooms, and online. In Australia, these tools are no longer a novelty. A 2025 national report by Student Edge and The Insight Centre sheds light on how Australian youth are using generative AI. The study surveyed 560 people aged 14 to 27, including high school and university students. The findings show that 90% had used generative AI tools, with 94% of 14–17-year-olds reporting usage—up significantly from 65% in 2023 (Denejkina, 2025).

Many students were open about their use of AI, with 66% saying they had informed teachers or lecturers. While only 11% admitted to using AI to plagiarise, 82% of students said they would not do so in the future. Students often attributed misuse to academic stress, time pressure, or lack of understanding—rather than intent to cheat.

Top concerns reported by students included:

  • Cheating and plagiarism (67%)
  • Misinformation and disinformation (60%)
  • Loss of originality (54%)
  • Over-reliance on AI (52%)

Importantly, almost one in three young people said that AI tools had caused them to reconsider their study or career path, largely because of fears of job displacement or automation. However, 59% agreed it was important to learn AI skills for education, and 62% said AI literacy was essential for future employment (Denejkina, 2025).

These results build upon findings from the 2023 YouthInsight and Student Edge report, which surveyed 576 Australians aged 14–26 about their early experiences with generative AI (Denejkina, 2023). That study found:

  • 65% of respondents had used a generative AI tool, with the highest usage among 14–17-year-olds (70%).
  • Only 14% were daily users; most engaged with AI on a weekly or occasional basis.
  • Students primarily used AI for learning support—such as brainstorming, understanding topics, or editing—not for cheating.
  • Just 9% admitted to using AI to plagiarise, and 83% said they would never do so.
  • Students were divided over what counts as cheating; many felt it depended on how AI was used (e.g., help with structure versus submitting AI-generated work directly).
  • Confidence in using AI varied, especially among female students, who reported lower self-assessed skills.
  • Many students expressed concern about AI misinformation, loss of originality, and over-dependence, with 64% supporting regulation.

In 2025, a UK study titled Me, Myself and AI found that 64% of children aged 9–17 had used an AI chatbot, and all surveyed children had at least heard of one. ChatGPT was the most used (43%), followed by Google Gemini (32%) and Snapchat’s My AI (31%) (Internet Matters, 2025).

The report noted:

  • AI tools are becoming embedded in daily life, with 64% of children using them weekly.
  • Usage nearly doubled in 18 months, from 23% using ChatGPT in 2023 to 43% in 2025.
  • Children use AI for information, entertainment, and emotional support, sometimes discussing sensitive topics.
  • Risks include misinformation, exposure to inappropriate content, and over-reliance on chatbots for social or emotional needs.


Key findings from the Common Sense Media report Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions include:

  • Widespread Use: 72% of teens have used AI companions, with 52% being regular users and 13% interacting daily. Boys were slightly more likely than girls to say they had never used one.
  • Social Use: One-third (33%) of teens use AI companions for emotional support, conversation, or even romantic engagement.
  • Primary Motivations: Teens are driven by entertainment (30%), curiosity (28%), and convenience. Some appreciate the anonymity, with 12% revealing they share things they wouldn’t with friends or family.
  • Limited Trust: Half of all teens distrust AI companion advice. Younger teens (13–14) are more trusting than older teens (15–17).
  • Satisfaction Levels: While 31% find AI conversations as satisfying as, or more satisfying than, human ones, most (67%) still prefer talking to real people.
  • Skill Transfer: 39% of users apply social skills practiced with AI in real life—especially girls (45%). These include starting conversations, giving advice, and expressing emotions.
  • Human Connections Still Prioritised: 80% of teens spend more time with real friends than AI companions, indicating a preference for human relationships.
  • Discomfort and Risks: One-third of users reported feeling uncomfortable with something an AI companion said, and a similar share (33%) have turned to AI instead of people for serious conversations.
  • Privacy Concerns: 24% of users have shared personal information with AI companions, often unaware of the platform’s rights to exploit and commercialise that data indefinitely.
  • Severe Safety Risks: Testing by Common Sense Media revealed that some AI companions delivered harmful, sexually explicit, or even life-threatening advice. The report concludes that current AI companions are not safe for anyone under 18.


This international evidence strengthens the case for educational guidance, parental involvement, and platform-level safeguards to ensure children use AI safely and constructively.

Google has announced plans to roll out its Gemini chatbot to children under 13 in Australia by the end of 2025, prompting further safety concerns (Lavoipierre, 2025).

While AI offers real educational benefits, it also presents substantial risks. The eSafety Commissioner warns that unmoderated AI companions can expose children to explicit content, encourage overuse, and even facilitate forms of technology-facilitated abuse (eSafety Commissioner, 2025a).


Popular AI Tools Among Australian Children

Children and teens most frequently engage with tools like ChatGPT, Google Gemini, and Snapchat’s My AI. These platforms are widely accessible and often used for both entertainment and learning. According to YouthSense (2023), one in five young Australians uses Snapchat’s My AI for educational queries. However, these apps are not always safe by default—Snapchat’s bot has previously responded to underage users with inappropriate advice, prompting global backlash (YouthSense, 2023).

Google’s Family Link platform allows younger users to access Gemini in a restricted mode, but the company has acknowledged that its filters are not foolproof, so your child may still encounter content you don’t want them to see (Lawrenson, 2025).


Benefits of AI for Learning and Creativity

When used responsibly, AI can support children's education. Chatbots can help explain concepts, generate study material, and stimulate creative thinking. UNICEF Australia's John Livingstone has noted that children “stand to gain immensely from AI, if it’s offered safely” (Lavoipierre, 2025).

AI can also provide opportunities for children with learning difficulties or social anxiety to engage in communication or practice skills in low-pressure environments. However, these benefits are contingent on proper safeguards.



Major Safety Concerns for Children Using AI

Inappropriate or Harmful Content
AI chatbots can produce age-inappropriate responses, including content related to sex, drugs, self-harm, or violence. These outputs may occur even in child-facing apps if filters fail (eSafety Commissioner, 2025a). Google itself has acknowledged that Gemini may return inappropriate results despite its built-in safety features (Lawrenson, 2025).

Data Privacy and Information Security
AI tools often retain user data. Children may inadvertently share personal details without understanding how that information may be used. Google, for instance, says it won’t use under-13 inputs to train its AI models but still keeps them temporarily (Lawrenson, 2025). The eSafety Commissioner warns that AI platforms, like other online services, may “collect, store and reuse” data in ways children cannot fully grasp (eSafety Commissioner, 2025a).

Manipulation and Misinformation
Children are vulnerable to misinformation from AI tools, which can confidently generate inaccurate or misleading content. Professor Lisa Given notes that “you have to have fairly sophisticated skills to discern truthfulness”—skills that children are still developing (Lavoipierre, 2025). Some AI “companions” simulate emotional connection, which can mislead children into trusting them as real friends or advisors (eSafety Commissioner, 2025a).

Excessive Use and Dependency
AI apps encourage ongoing interaction. The eSafety Commissioner reports some children use AI chatbots for hours each day, sometimes late into the night, discussing highly sensitive topics (eSafety Commissioner, 2025a). This kind of dependency can interfere with sleep, social development, and emotional wellbeing.



“Nudifying” Apps and Deepfake Abuse

One of the most urgent concerns is the rise of AI-powered “nudify” apps—tools that digitally remove clothing from images to create fake nudes. These are frequently used to target children and teens, particularly girls. According to the eSafety Commissioner, reports of altered intimate images involving under‑18s have more than doubled in the past 18 months, with 80% of victims being female (eSafety Commissioner, 2025b).

Students have used these fake images for bullying, coercion, humiliation, and even commercial exchange. In some cases, classmates have traded AI-generated explicit images of peers for money. These acts are illegal: Australian law treats the production, possession, or distribution of sexualised content involving anyone under 18 as child sexual abuse material (CSAM), regardless of whether it is synthetically generated (eSafety Commissioner, 2025b). Thorn’s Youth Perspectives on Online Safety report adds a sobering statistic: one in ten minors report that peers have used AI to generate nudes of other children (Thorn, 2024).

The Internet Watch Foundation (IWF) has issued a stark warning: at the current pace of AI advancement, full-length, feature-quality synthetic child sexual abuse videos are fast becoming a reality. Analysts identified 1,286 individual AI-generated abuse videos in the first half of 2025, up from just two in the same period of 2024, with most rated Category A, depicting the most extreme forms of abuse (IWF, 2025). IWF’s interim chief executive, Derek Ray-Hill, stated that “it is inevitable we are moving towards a time when criminals can create full, feature-length synthetic child sexual abuse films of real children. It’s currently just too easy to make this material” (IWF, 2025).

These deepfake videos are not low-quality clips; they are “indistinguishable from genuine footage”, showing horrifying levels of realism (IT-Online, 2025). The IWF emphasises that this technology democratises abuse by lowering technical barriers, and warns the crisis will worsen unless urgent regulation and “safety by design” principles are enforced.

The IWF has also confirmed that AI has profoundly changed the landscape of online child sexual exploitation. Its 2024 report found that:

  • AI-generated CSAM is becoming nearly indistinguishable from real child abuse imagery (IWF, 2024).
  • Open-source and commercial AI platforms are being misused to generate CSAM with few restrictions or detection mechanisms.
  • Offenders are merging children’s social media photos with AI outputs to fabricate abusive content (IWF, 2024).
  • These tools have enabled the creation of synthetic images involving abuse scenarios that would be physically impossible to film otherwise.
  • Many perpetrators wrongly believe that such content is a legal “grey area,” despite legal frameworks in Australia and the UK stating otherwise (IWF, 2024).

The IWF warns that AI has “democratised” the creation of abuse material by lowering the technical barrier to producing CSAM, broadening access to individuals who might not previously have had the means or willingness to offend. In the first half of 2025 alone, the IWF detected a 400% increase in webpages hosting AI-generated CSAM compared with the same period in 2024 (IWF, 2025). The growth in both scale and realism presents an escalating threat to children worldwide, including in Australia.

These insights align with earlier research from Internet Matters (2024), which interviewed young victims of deepfake abuse. The report stated:

  • Teenagers are highly concerned about nude deepfakes: Over half of teens (55%) think a deepfake nude image would be worse than a real one, while around 12% disagree. Nude deepfake abuse is perceived as worse than real image-based abuse because of the victim’s lack of autonomy, the anonymity of perpetrators, the manipulation involved, and the fear that family, teachers, and peers will believe the image is real. Victims reported intense anxiety, social withdrawal, reputational damage, and disruption to their education and wellbeing.
  • Many children have experienced nude deepfakes: 13% of children have encountered nude deepfakes in some form, whether sending or receiving them, seeing them online, using a nudifying app, or knowing someone who has used one. That equates to approximately 529,632 UK teens, or four in every class of 30.
  • Boys and vulnerable children are more likely to be affected: 18% of teenage boys report experiencing a nude deepfake, compared with 9% of teenage girls, and about 10% of boys aged 13–17 have encountered one online, compared with 2% of girls the same age. Vulnerable children are also disproportionately affected: around 25% have experienced a nude deepfake, compared with 11% of their non-vulnerable peers. Internet Matters’ research suggests that online misogyny and pornography are shaping harmful image-sharing norms among peers, including the production and sharing of nude deepfakes.
  • Families agree that government and industry must act on nude deepfakes: In the UK, 84% of teens and 80% of parents believe that nudifying tools should be banned for everyone, including adults.

This crisis is not a future risk—it is a present reality. AI-generated CSAM is spreading rapidly, from dark web forums to mainstream platforms. Australian children are not immune. Schools, parents, tech companies, and regulators must respond with urgency.


Guide for Australian Parents

How to protect children while supporting safe, positive use of AI

Artificial Intelligence (AI) can offer enormous educational and creative opportunities, but it also brings serious risks if left unchecked. Here are some detailed steps Australian parents can take to keep their children safe and empowered in the digital age:


    🔹 Start Conversations Early and Often

    Ask, don’t assume: Find out what AI tools your child is using (e.g., ChatGPT, Google Gemini, Snapchat’s My AI) and what they use them for—whether it's schoolwork, advice, entertainment, or emotional support.

    Use curiosity, not fear: Frame questions positively:

    • “What’s the coolest thing you’ve asked an AI?”
    • “Has it ever said something weird or uncomfortable?”


    Build trust: Emphasise that they will not face any punishment for disclosing an unpleasant experience. Children are more likely to disclose if they know you’ll listen calmly and helpfully (eSafety Commissioner, 2025a).


    Repeat the conversation regularly: Technology evolves quickly; a one-time chat isn’t enough.


    🔹 Use Parental Controls and Monitoring Tools

    Set digital boundaries: Tools like Google’s Family Link and Qustodio let you:

    • Filter inappropriate AI content
    • Set time limits
    • Track usage
    • Review app activity


    Choose child-safe modes: If your child uses AI chatbots, enable “restricted” or “kids” settings where available (e.g., Gemini’s restricted mode).

    Keep devices in common spaces: Encourage AI use in visible areas to promote transparency and discourage secretive behaviour.


    🔹 Teach Critical Thinking About AI

    Help kids become AI-literate: Explain that AI doesn’t “know” things — it predicts text based on data, which can include false, biased, or inappropriate material.

    Talk about misinformation: Show how AI can “hallucinate” facts, invent sources, or deliver harmful advice — even when it sounds confident.

    Try these questions together:

    • “Why do you think the AI gave that answer?”
    • “How could we fact-check this?”
    • “Would you ask a real person this question instead?”


    Role model scepticism: If you use AI, show how you double-check its responses or consult other sources.


    🔹 Limit Data Sharing and Protect Privacy

    Teach privacy as a safety issue: Remind children never to share:

    • Full names
    • School names
    • Photos or videos
    • Birthdates or addresses


    Explain why: AI systems often store user data for profiling and targeted ads.

    Use anonymous usernames: Encourage generic screen names and avatars, especially on platforms like Snapchat or Discord.


    🔹 Recognise and Report Harms Promptly

    Know what to look for: Warning signs may include:

    • Sudden anxiety or withdrawal after AI use
    • Talking to AI instead of friends or adults
    • Obsessive or secretive use of chatbots
    • Receiving or sending inappropriate content


    Use trusted reporting tools: If your child experiences or witnesses:

    • AI-generated abuse (e.g., fake nudes, grooming, threats)
    • Exposure to harmful content
    • Privacy breaches


    Contact the eSafety Commissioner. Their rapid takedown service removes most abusive material within 24–48 hours and can investigate illegal or exploitative content (eSafety Commissioner, 2025b).


    🔹 Support Healthy Digital Habits

    Create tech-free zones and times: For example:

    • No AI/chatbot use after 8pm
    • No devices at dinner or in bedrooms


    Encourage balance: Reinforce offline activities—sports, friends, hobbies, nature—as vital parts of wellbeing.

    Talk about screen fatigue and over-reliance: Explain how emotional dependency on chatbots can harm social development, sleep, and mental health.


    🔹 Stay Informed and Involved

    Follow updates: AI technology and safety tools change quickly. Subscribe to updates from trusted sources such as the eSafety Commissioner.


    Connect with your child’s school: Ask the school if and how they teach or use AI in class, and how they integrate digital safety. Share concerns and stay engaged with parent networks.


    🔹 Know That You're Not Alone

    Parenting in the age of AI is complex. If you feel overwhelmed, reach out to your child’s school, other parents, or trusted sources of support such as the eSafety Commissioner’s resources for parents.


    References: