Artificial Intelligence (AI) tools have rapidly entered children’s lives—at home, in classrooms, and online. In Australia, these tools are no longer a novelty. A 2025 national report by Student Edge and The Insight Centre sheds light on how Australian youth are using generative AI. The study surveyed 560 people aged 14 to 27, including high school and university students. The findings show that 90% had used generative AI tools, with 94% of 14–17-year-olds reporting usage—up significantly from 65% in 2023 (Denejkina, 2025).
Many students were open about their use of AI, with 66% saying they had informed teachers or lecturers. Only 11% admitted to using AI to plagiarise, and 82% said they would not do so in the future. Students often attributed misuse to academic stress, time pressure, or a lack of understanding, rather than an intent to cheat.
Students also reported concerns about the technology.
Importantly, almost one in three young people said that AI tools had caused them to reconsider their study or career path, largely because of fears of job displacement or automation. However, 59% agreed it was important to learn AI skills for education, and 62% said AI literacy was essential for future employment (Denejkina, 2025).
These results build on findings from the 2023 YouthInsight and Student Edge report, which surveyed 576 Australians aged 14–26 about their early experiences with generative AI (Denejkina, 2023).
In 2024, a UK study titled Me, Myself and AI found that 64% of children aged 9–17 had used an AI chatbot, and all surveyed children had at least heard of one. ChatGPT was the most used (43%), followed by Google Gemini (32%) and Snapchat’s My AI (31%) (Internet Matters, 2024).
The US-based Common Sense Media report, Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions, points to similar patterns, with AI companion use now widespread among teenagers.
This international evidence strengthens the case for educational guidance, parental involvement, and platform-level safeguards to ensure children use AI safely and constructively.
Google has announced plans to roll out its Gemini chatbot to children under 13 in Australia by the end of 2025, prompting further safety concerns (Lavoipierre, 2025).
While AI offers real educational benefits, it also presents substantial risks. The eSafety Commissioner warns that unmoderated AI companions can expose children to explicit content, encourage overuse, and even facilitate forms of technology-facilitated abuse (eSafety Commissioner, 2025a).
Popular AI Tools Among Australian Children
Children and teens most frequently engage with tools like ChatGPT, Google Gemini, and Snapchat’s My AI. These platforms are widely accessible and often used for both entertainment and learning. According to YouthSense (2023), one in five young Australians uses Snapchat’s My AI for educational queries. However, these apps are not always safe by default—Snapchat’s bot has previously responded to underage users with inappropriate advice, prompting global backlash (YouthSense, 2023).
Google’s Family Link platform allows younger users to access Gemini in a restricted mode, but the company has acknowledged that its filters are not foolproof and children may still encounter unwanted content (Lawrenson, 2025).
Benefits of AI for Learning and Creativity
When used responsibly, AI can support children's education. Chatbots can help explain concepts, generate study material, and stimulate creative thinking. UNICEF Australia's John Livingstone has noted that children “stand to gain immensely from AI, if it’s offered safely” (Lavoipierre, 2025).
AI can also give children with learning difficulties or social anxiety a low-pressure environment in which to communicate and practise skills. However, these benefits are contingent on proper safeguards.
Major Safety Concerns for Children Using AI
Inappropriate or Harmful Content
AI chatbots can produce age-inappropriate responses, including content related to sex, drugs, self-harm, or violence. These outputs may occur even in child-facing apps if filters fail (eSafety Commissioner, 2025a). Google itself has acknowledged that Gemini may return inappropriate results despite built-in safety features (Lawrenson, 2025).
Data Privacy and Information Security
AI tools often retain user data, and children may inadvertently share personal details without understanding how that information will be used. Google, for instance, says it won’t use under-13 inputs to train its AI models but still keeps them temporarily (Lawrenson, 2025). The eSafety Commissioner warns that AI platforms, like other online services, may “collect, store and reuse” data in ways children cannot fully grasp (eSafety Commissioner, 2025a).
Manipulation and Misinformation
Children are vulnerable to misinformation from AI tools, which can confidently generate inaccurate or misleading content. Professor Lisa Given notes that “you have to have fairly sophisticated skills to discern truthfulness”—skills that children are still developing (Lavoipierre, 2025). Some AI “companions” simulate emotional connection, which can mislead children into trusting them as real friends or advisors (eSafety Commissioner, 2025a).
Excessive Use and Dependency
AI apps encourage ongoing interaction. The eSafety Commissioner reports some children use AI chatbots for hours each day, sometimes late into the night, discussing highly sensitive topics (eSafety Commissioner, 2025a). This kind of dependency can interfere with sleep, social development, and emotional wellbeing.
“Nudifying” Apps and Deepfake Abuse
One of the most urgent concerns is the rise of AI-powered “nudify” apps—tools that digitally remove clothing from images to create fake nudes. These are frequently used to target children and teens, particularly girls. According to the eSafety Commissioner, reports of altered intimate images involving under-18s have more than doubled in the past 18 months, with 80% of victims being female (eSafety Commissioner, 2025b).
Students have used these fake images for bullying, coercion, humiliation, and even commercial exchange; in some cases, classmates have traded AI-generated explicit images of peers for money. These acts are illegal: Australian law treats the production, possession, or distribution of sexualised content depicting anyone under 18 as child sexual abuse material (CSAM), regardless of whether it is synthetically generated (eSafety Commissioner, 2025b). Thorn’s 2023 Youth Perspectives on Online Safety report adds a sobering statistic: one in ten minors report that peers have used AI to generate nudes of other young people (Thorn, 2024).
The Internet Watch Foundation (IWF) has issued a stark warning: at the current pace of AI advancement, full-length, feature-quality synthetic child sexual abuse videos are fast becoming a reality. Analysts identified 1,286 individual AI-generated abuse videos in the first half of 2025, up from just two in the same period of 2024, with most rated Category A, depicting the most extreme forms of abuse (IWF, 2025). IWF’s interim chief executive, Derek Ray-Hill, stated that “it is inevitable we are moving towards a time when criminals can create full, feature-length synthetic child sexual abuse films of real children. It’s currently just too easy to make this material” (IWF, 2025).
These deepfake videos are not low-quality clips but are “indistinguishable from genuine footage”, showcasing horrifying levels of realism (IT-Online, 2025). The IWF emphasises that this technology democratises abuse by lowering technical barriers, and warns the crisis will worsen without urgent regulation and “safety by design” principles.
The IWF’s 2024 report confirmed that AI has profoundly changed the landscape of online child sexual exploitation.
The IWF warns AI has “democratised” the creation of abuse material by lowering the technical barrier to producing CSAM, broadening access to individuals who might not previously have had the means or willingness to offend. In the first half of 2025 alone, the IWF detected a 400% increase in webpages containing AI-generated CSAM compared with the same period in 2024 (IWF, 2025). The growth in both scale and realism presents an escalating threat to children worldwide, including in Australia.
These insights align with earlier research from Internet Matters (2024), which interviewed young victims of deepfake abuse.
This crisis is not a future risk—it is a present reality. AI-generated CSAM is spreading rapidly, from dark web forums to mainstream platforms. Australian children are not immune. Schools, parents, tech companies, and regulators must respond with urgency.
Guide for Australian Parents
Artificial Intelligence can offer enormous educational and creative opportunities, but it also brings serious risks if left unchecked. The following steps can help Australian parents keep their children safe and empowered in the digital age:
Ask, don’t assume: Find out what AI tools your child is using (e.g., ChatGPT, Google Gemini, Snapchat’s My AI) and what they use them for—whether it's schoolwork, advice, entertainment, or emotional support.
Use curiosity, not fear: Frame questions positively and without judgement, so your child feels comfortable talking about how they use AI.
Build trust: Emphasise that they will not be punished for disclosing an unpleasant experience. Children are more likely to open up if they know you’ll listen calmly and helpfully (eSafety Commissioner, 2025a).
Repeat the conversation regularly: Technology evolves quickly, and a one-time chat isn’t enough.
Set digital boundaries: Tools like Google’s Family Link and Qustodio let you set screen-time limits, approve or block apps, and filter the content your child can access.
Choose child-safe modes: If your child uses AI chatbots, enable “restricted” or “kids” settings where available (e.g., Gemini’s restricted mode).
Keep devices in common spaces: Encourage AI use in visible areas to promote transparency and discourage secretive behaviour.
Help kids become AI-literate: Explain that AI doesn’t “know” things — it predicts text based on data, which can include false, biased, or inappropriate material.
Talk about misinformation: Show how AI can “hallucinate” facts, invent sources, or deliver harmful advice — even when it sounds confident.
Test it together: Ask a chatbot about a topic your child knows well and check its answer, or ask it to cite sources and see whether they actually exist.
Role model scepticism: If you use AI, show how you double-check its responses or consult other sources.
Teach privacy as a safety issue: Remind children never to share personal details such as their full name, home address, school, phone number, passwords, or photos.
Explain why: AI systems often store user data for profiling and targeted ads.
Use anonymous usernames: Encourage generic screen names and avatars, especially on platforms like Snapchat or Discord.
Know what to look for: Warning signs may include secretive device use, withdrawal from friends and offline activities, distress after being online, or chatbot use late into the night.
Use trusted reporting tools: If your child experiences or witnesses image-based abuse, cyberbullying, or other harmful online content, contact the eSafety Commissioner. Their rapid takedown service removes most abusive material within 24–48 hours and can investigate illegal or exploitative content (eSafety Commissioner, 2025b).
Create tech-free zones and times: For example, keep devices out of bedrooms overnight and away from the dinner table.
Encourage balance: Reinforce offline activities—sports, friends, hobbies, nature—as vital parts of wellbeing.
Talk about screen fatigue and over-reliance: Explain how emotional dependency on chatbots can harm social development, sleep, and mental health.
Follow updates: AI technology and safety tools change quickly, so subscribe to trusted sources such as the eSafety Commissioner’s parent resources.
Connect with your child’s school: Ask the school if and how they teach or use AI in class, and how they integrate digital safety. Share concerns and stay engaged with parent networks.
Parenting in the age of AI is complex.
If you feel overwhelmed, reach out to your child’s school, other parents, or support services such as the eSafety Commissioner and Kids Helpline (1800 55 1800).
References: