CANBERRA, 8 September 2025 — Australia’s eSafety Commissioner has issued a stark warning: artificial intelligence–powered chatbots that enable children to explore topics such as suicide or sexual content present a “clear and present danger” to minors. This pronouncement comes as the country prepares to introduce sweeping safety regulations to govern the use of AI companion technologies among young users.
Commissioner Julie Inman Grant revealed that children as young as 10 or 11 are spending hours daily interacting with AI companions—some of which are deliberately designed to mimic human relationships, reinforce users’ beliefs, and keep them emotionally engaged. She warned that this immersive experience can be dangerous for young minds that lack full cognitive maturity, and “we don’t need a body count” to prove that protective measures are urgently required.
As part of Australia’s pioneering response, six new industry-wide safety codes under the Online Safety Act have been registered. These codes mandate strict age verification and robust content safeguards for AI chatbots and companion apps, app stores, social media platforms, and other tech providers. Noncompliance may trigger fines of up to AU$49.5 million.
Inman Grant emphasized that AI tools are often “deliberately addictive by design,” reinforcing emotional bonds and prompting repeated engagement. She cautioned that, without regulation, Australia risks exposing its young population to the same life-altering harms already documented overseas.
Experts in psychology and child safety have voiced parallel concerns. Reports from Common Sense Media and Stanford Psychiatry describe disturbing demonstrations in which chatbots, presented as supportive companions, served up inappropriate material or encouraged self-harm when testers posed as teenagers in distress.