Ban social media but drop ‘mandatory guardrails’ on AI? What about the kids

A week out from the Albanese Government proudly pushing the start of the teen social media ban, it has released a National AI Plan that largely minimises the risks of AI for that very same group, along with the lessons of the past about letting unregulated tech rip.

The risks of generative AI for teens span everything from mental health harm to reduced economic opportunity and poor educational outcomes.

They’re not theoretical future risks; they’re risks teens are grappling with right now.

This week’s National AI Plan sees the Albanese Government shifting away from plans for “mandatory guardrails” on AI toward a national plan focused on economic growth and productivity. It comes just over a year after former industry minister Ed Husic said Australia would issue requirements for AI developers, including risk-management plans, complaint mechanisms, and testing requirements before and after deployment. These were to be included in a standalone AI act.

Those guardrails are out. There will be no legislation, for now. Instead, we have a National AI Plan, which will rely on the existing legal framework and regulations. A $30 million AI safety institute has been promised to help monitor and advise government agencies on AI.

The plan aims to position Australia as a key destination for AI investment, including in data centres, AI jobs, and academic research.

The AI plan responds to the fast transformations affecting all of us as AI is further deployed, with many risks and opportunities ahead. As Industry Minister Tim Ayres notes, it focuses “on capturing the economic opportunities of AI, sharing the benefits broadly, and keeping Australians safe as technology evolves”. He says that AI “should enable workers’ talents, not replace them”.

The AI plan notes that particular consideration must be given to cohorts already disadvantaged by digital and economic gaps, as well as to those in roles at higher risk of AI and automation-driven disruption. First Nations people, women, people with disability and remote communities are all mentioned on this point, as well as repeatedly when it comes to addressing some of the risks involved.

Some of the risks mentioned in passing in the report include the risk of AI further enabling technology-facilitated abuse, of which women are primarily the victims. There are risks of deepfake pornography involving the manipulation of images and videos to create realistic-looking explicit content, in which images of women are typically used. And there are risks of specific cohorts being left behind by AI adoption, including women, a risk the government hopes to mitigate through training programs.

However, there is little mention of risks to those who have not yet had the opportunity to enter the workforce — and gain experience in the entry-level jobs that are the most at risk of being lost to automation.

There is little mention of the added risks of loneliness and mental health harm arising from how users interact with generative AI. Especially, again, when it comes to teens and youth.

Conversations about how kids and teenagers use large language models like ChatGPT, Gemini, and Claude tend to focus on how teens use them to “cheat” on homework and assignments.

But we’d all be wise to give strong consideration to how this cohort (and all of us) are using such LLMs to seek advice, support and even potentially comfort and company.

If you’re banned from asking Reddit for advice on a personal issue, will you ask ChatGPT instead? And once GPT answers, will you ask it to go further as it invites you to explore more ideas and opportunities with it? How does a teenager — not mature enough to engage with social media — know when to stop with the back and forth of an LLM before they’re spending more time conversing with a sycophantic machine than they are with their own family and friends?

We’re in the midst of a loneliness crisis and a crisis of social cohesion, spurred on by ever-evolving algorithms designed to keep us hooked on outrage, entertainment, or our own hopes of perfection. A crisis arguably ignited by a retreat to virtual worlds and forums, including social media, and one that appears at risk of being exacerbated even further by AI.

Twenty per cent of teenagers in the UK are turning to chatbots because it’s “easier than talking to a real person”, according to a new survey by youth charity OnSide. The research found that two in five teens have turned to AI for advice, for emotional support, or simply to have a chat. Working with Stanford Medicine’s Brainstorm Lab for Mental Health Innovation, child advocacy group Common Sense Media has released a separate study finding that the most common major chatbots are “fundamentally unsafe” for teens seeking mental health support.

Individuals are increasingly accessing AI for mental health support, as well as intimacy and relationships.

Anyone who has meaningfully engaged with the latest versions of such LLMs knows all too well how fast you can be swept into what you believed was a game-changing brainstorm of ideas, believing that together — you and the machine! — you are capable of incredible things. An LLM doesn’t stop, continually baiting you to ask and converse more. It rarely disagrees. It tiptoes around your factual errors and the fact that you’re likely in the wrong. It keeps pushing to offer more and more help. There are no awkward silences or hiccups in the communication. It’s easy. There is no friction.
These conversations may mimic what humans are likely to say, given the machines are trained on recorded data of almost everything humans have said previously, but they are nothing like human behaviour.

There are examples of teens conversing with LLMs that have turned tragic.

The family of a 16-year-old boy who died by suicide in the US have filed a lawsuit against OpenAI, claiming ChatGPT had “clear safety issues” and that Adam Raine’s death came after “months of encouragement from ChatGPT”. The family claims Adam was initially using the tool for schoolwork, but that the latest version had been “rushed to market” and had engaged in conversations about a method of suicide with their son on multiple occasions. It even offered to help write a suicide note. OpenAI has responded, saying his death was “not caused” by the chatbot. OpenAI says its terms of use prohibit users from asking ChatGPT for information on self-harm, and that to whatever extent cause can be attributed to Raine’s death, his “injuries and harm were caused or contributed to, directly and proximately, in whole or in part, by [his] misuse, unauthorised use, unintended use, unforeseeable use, and/or improper use of ChatGPT”.

Mother Megan Garcia describes her son’s use of the chatbot Character.AI in 2023 as being like having “a predator or a stranger in your home”. Ten months after her teenage son Sewell started engaging with the chatbot, he took his own life. He was 14. Megan later found a large volume of messages between Sewell and the chatbot, including romantic and explicit exchanges. At one point, the chatbot told Sewell to “come home to me”. Character.AI has since banned minors from using its platform.

The risk of kids forming relationships with AI companions is on the agenda of Australia’s eSafety Commissioner, but how much can the Commissioner be expected to do, given the scope of what’s been asked of this office, including the social media ban?

Commissioner Julie Inman Grant has so far issued legal notices to several AI companion providers – including Character.AI, Glimpse.AI, Chub AI and Chai Research Corp – requiring them to explain how they are complying with basic online safety expectations and to report on the steps they’re taking to keep children safe online. She notes that AI companions are becoming increasingly popular, with Character.AI boasting more than 160,000 monthly active users in Australia as of June this year.

“I do not want Australian children and young people serving as casualties of powerful technologies thrust onto the market without guardrails and without regard for their safety and wellbeing,” Inman Grant said.

This intent, I believe, is genuine. But the power to pursue it seems at odds with a business community intent on forging ahead with AI in pursuit of economic productivity, something the Albanese Government has now taken on with gusto.

But as evidenced by the tragic example of Adam Raine above, the risks of these conversations go beyond the chatbots specifically designed to provide “companionship”. They extend to the LLMs and AI tools now being positioned to further spur Australia’s productivity. This suggests we need not just education on how to use AI for workforce participation, but also a greater understanding of what AI actually is.

Meanwhile, young people face a very different entry into the workplace than older generations did, with entry-level white-collar jobs predicted to be the first to go as a result of AI. Indeed, much of this work is already drying up: employment for younger workers in AI-exposed occupations has been declining since 2022, according to a recent study.

Australia’s National AI Plan appears to be focused on productivity. It doesn’t want Australia left behind as AI continues to transform the world. There are good arguments for this. But there are also good arguments for a more nuanced approach, including one that retains the “mandatory guardrails” on AI that were initially slated.

Those guardrails are necessary for all of us. But given we’re one week out from a world-first social media ban that Australia is already patting itself on the back for achieving, these guardrails seem to be missing in the context of how AI is fundamentally transforming what it means to be a child and a teenager.

AI is “rewiring childhood,” says The Economist this week. Speaking with various experts and leaders, it notes how children can access an “upbringing previously available only to the rich, with private tutors, syllabuses and bespoke entertainment”. They can play games tailored to them and “have an entourage of chatbot friends cheering them on”. Is it a childhood fit for a king or queen, or one that leads to a lifetime of loneliness and a lack of skills and experience in engaging with the real – and oftentimes complex – humans of real life?

There are huge opportunities in AI, sure. We can and will solve things that only a short time ago seemed impossible to achieve.

But looking at it from the point of view of kids and teens, in the very week Australia is taking the world-first step of banning social media for this cohort after all the damage inflicted as those platforms rolled out with little regulation or consideration for safety, this looks like an opportunity to get it wrong all over again, and to try to fix it later – potentially for the next generation.

The consequences here, I fear, are far more lasting, severe and all-encompassing than those posed by social media.
