Ban social media but drop ‘mandatory guardrails’ on AI? What about the kids

Just as the Albanese Government is proudly launching its teen social media ban, it has also released a National AI Plan that largely minimises the risks of AI and overlooks many of the lessons of the past about the unintended consequences of letting tech rip on young people.

The risks of generative AI for teens span everything from mental health harm to reduced economic opportunity and poor educational outcomes.

They’re not future theoretical risks; they’re risks teens are contending with right now.

This week’s National AI Plan sees the Albanese Government shifting away from previous plans for “mandatory guardrails” on AI to emphasise economic growth and productivity instead.

Just over a year ago, former Industry Minister Ed Husic was telling a different story. He said Australia would impose key requirements on AI developers, including the creation of risk-management plans, complaint mechanisms, and testing requirements before and after deployment. At the time, Husic said these requirements would be included in a standalone AI act.

Those planned guardrails are now out. There will be no legislation, for now. Instead, the plan relies on existing legal frameworks and regulations. An Australian Artificial Intelligence Safety Institute will be established, promising to monitor, test and share information on emerging AI technologies and identify future risks.

Overall, the plan aims to position Australia as a key destination for AI investment, including in datacentres, AI jobs and academic research. It notes that particular consideration must be given to cohorts already disadvantaged by digital and economic gaps, as well as to those in roles at higher risk of AI and automation-driven disruption. First Nations people, women, people with disability and remote communities are all named on this point in the plan, and again repeatedly when it comes to addressing some of the risks involved.

Some of the risks mentioned in passing in the report include AI further enabling technology-facilitated abuse, of which women are primarily the victims. There is the risk of deepfake pornography, in which images and videos are manipulated to create realistic-looking explicit content, typically using images of women. And there is the risk that specific cohorts, including women, will be left behind by AI adoption, a risk the government hopes to mitigate through training programs.

AI as therapy and companionship

There is little mention in Australia’s AI Plan of the risks to those who have not yet had the opportunity to enter the workforce and gain experience in the entry-level jobs most at risk of being lost to automation.

There is little mention of the added risks to loneliness and mental health arising from how users interact with generative AI. Especially, again, when it comes to teens and young people.

Conversations about kids and teenagers tend to focus on how they use large language models like ChatGPT, Gemini and Claude to “cheat” on homework and assignments.

But we’d all be wise to give strong consideration to how this cohort (and all of us) are using such LLMs to seek advice, support and even potentially comfort and company.

If you’re banned from asking Reddit for advice on a personal issue, will you ask ChatGPT instead? And once it answers, will you keep going as it encourages you to explore more ideas and opportunities? How does a teenager, deemed not mature enough to engage with social media, know when to stop the back and forth with an LLM before they’re spending more time conversing with a sycophantic machine than with their own family and friends?

We’re in the midst of a loneliness crisis and a crisis of social cohesion, spurred on by ever-evolving algorithms designed to keep us engaged, whether through outrage, entertainment or our own hopes of perfection. It is a crisis arguably ignited by a retreat to virtual worlds and forums, including social media, and one that appears at risk of being further exacerbated by AI.

Individuals are increasingly accessing AI for mental health support, as well as for intimacy and relationships.

Twenty per cent of teenagers in the UK are turning to chatbots because it’s “easier than talking to a real person”, according to a new survey by UK youth charity OnSide. The research found that two in five teens have turned to AI for advice, for emotional support or simply to have a chat. Separately, child advocacy group Common Sense Media, working with Stanford Medicine’s Brainstorm Lab for Mental Health Innovation, released a study finding that the major chatbots available are “fundamentally unsafe” for teens seeking mental health support.

Anyone who has engaged meaningfully with the latest versions of such LLMs knows all too well how fast you can be swept into what you believe is a game-changing brainstorm of ideas, convinced that together, you and the machine are capable of incredible things. An LLM doesn’t stop; it continually baits you to ask and converse more. It rarely disagrees. It tiptoes around your factual errors and the fact that you’re likely in the wrong. It keeps pushing to offer more and more help. There are no awkward silences or hiccups in the communication. It’s easy. There is no friction.

Indeed, AI is “rewiring childhood,” says The Economist this week. Drawing on interviews with various experts and leaders, the cover story notes how children can access an “upbringing previously available only to the rich, with private tutors, syllabuses and bespoke entertainment”. They can play games tailored to them and “have an entourage of chatbot friends cheering them on.”

Is it a childhood fit for a king or queen, or one that leads to a lifetime of loneliness and a lack of skill and experience in engaging with the real, and oftentimes complex, humans around them?

There are examples of teens conversing with LLMs that have turned tragic.

The family of a 16-year-old boy who died by suicide in the US have filed a lawsuit against OpenAI, claiming ChatGPT had “clear safety issues” and that Adam Raine’s death came after “months of encouragement from ChatGPT”. The family claims Adam was initially using the tool for schoolwork, but that the latest version had been “rushed to market” and had discussed a method of suicide with their son on multiple occasions. It even offered to help write a suicide note. OpenAI has responded, saying that his death was “not caused” by the chatbot. It says its terms of use prohibit users from asking ChatGPT for information on self-harm, and that to any extent “cause” can be attributed to Raine’s death, his “injuries and harm were caused or contributed to, directly and proximately, in whole or in part, by [his] misuse, unauthorised use, unintended use, unforeseeable use, and/or improper use of ChatGPT”.

Mother Megan Garcia describes her son’s use of the chatbot Character.AI in 2023 as being like having “a predator or a stranger in your home.” Ten months after her teenage son Sewell started engaging with the chatbot, he took his own life. He was 14. Megan later found large volumes of messages between Sewell and the chatbot, including romantic and explicit exchanges. At one point, the chatbot told Sewell to “come home to me.” Character.AI has since banned minors from using its platform.

The risk of kids forming relationships with AI companions is on the agenda of Australia’s eSafety Commissioner, but how much can the Commissioner be expected to do, given the scope of what’s been asked of this office, including the social media ban?

Commissioner Julie Inman Grant has so far issued legal notices to several AI companion providers, including Character.AI, requiring them to explain how they are complying with basic online safety expectations and to report on the steps they’re taking to keep children safe online. She notes that AI companions are becoming increasingly popular, with Character.AI boasting more than 160,000 monthly active users in Australia as of June this year.

“I do not want Australian children and young people serving as casualties of powerful technologies thrust onto the market without guardrails and without regard for their safety and wellbeing,” Inman Grant said.

This intent is genuine. But the power to act on it seems at odds with a business community intent on forging ahead with AI in pursuit of economic productivity, a pursuit the Albanese Government has now taken on with gusto.

As evidenced by the tragic example of Adam Raine above, the risks of these conversations go beyond the chatbots specifically designed to provide “companionship”. They extend to LLMs and AI that are now being positioned to further spur Australia’s productivity.

Australia’s National AI Plan appears focused on productivity and on ensuring Australia is not left behind as AI continues to transform the world. There are strong arguments for this, of course, just as there is much to be gained from the AI transformation generally. But there are also good arguments for a more nuanced approach, including the “mandatory guardrails” on AI the Albanese Government initially declared it wanted. And there are arguments for going beyond education and courses that support AI for workplace participation, to instead help humans understand what AI actually is and why it is not a replacement for human connection.

Those guardrails are necessary for all of us. But given we’re one week out from a world-first social media ban that Australia is already patting itself on the back for achieving, these guardrails seem to be missing in the context of how AI is fundamentally transforming the formative experiences of children.
