Real accountability is not a forced financial payout; it is acknowledgement of, and remorse for, the harm caused. The recent rulings against Meta and Google have been framed as a win for accountability, but the moment both companies announced they would appeal, it became clear that neither is truly accountable.
Last week, juries in New Mexico and Los Angeles found that Meta and Google had knowingly caused harm to minors on their platforms and ordered them to pay damages. For Meta, the total comes to just under US$380 million, a fraction of its reported US$200.1 billion revenue for 2025.
Without real accountability, the platforms themselves will not change to become safer for children. Which is the point of all this, is it not? Many parents, caregivers, and digital safety advocates are celebrating, but from my perspective as someone working closely in this space, these verdicts do not translate into meaningful change in the daily digital lives of children.
I have worked with children and families for 25 years, including 12 years in child protection. I have seen firsthand the complexities of social media use, and I challenge the idea that these platforms are fundamentally harmful. Not all children experience digital spaces the same way. Some are more vulnerable, for example because of socio-economic background, pre-existing mental health challenges, or belonging to a marginalised group. Others may never experience harm. Comparing these cases to the tobacco trials, which involved inherently harmful products, ignores both the broader factors that contribute to harm and the benefits digital platforms can offer young people.
This brings me to the efficacy of blanket bans, such as the social media ban introduced in Australia in December 2025. It applies to 10 platforms, including Meta’s Facebook and Instagram and Google’s YouTube, but ignores other platforms known to cause harm, such as Roblox, along with countless other messaging, gaming, social, and AI platforms used by children.
The ban was framed as holding Big Tech accountable, with fines of up to AUD 49.5 million for companies that fail to take “reasonable steps” to prevent under-16s from accessing their platforms. But again, is this real accountability? Are children actually safer?
Three months in, the results have been mixed. In the first 30 days, 4.7 million accounts were reportedly removed. But we do not know how many children hold multiple accounts, have found workarounds, or have created new ones. Families and children report that under-16s are still on these platforms, and those who have been removed often feel left out when their friends remain online.
Bans do not remove teenagers’ fundamental need for exploration, belonging, and self-expression, the very reasons social media is so appealing. Many children have migrated to smaller, niche apps such as Yope and Lemon8, which are unregulated and may be less safe. A further concern is that children may not tell their parents if they experience harm on banned platforms, for fear of repercussions.
Bans also do not teach children digital literacy or how to navigate online harms. When teenagers are allowed to access these platforms at 16, they may have little understanding of the risks, how to identify harmful content, or how to manage unsafe interactions. Without guidance and experience, they remain vulnerable.
Parents and caregivers are left enforcing rules, managing meltdowns, and navigating their children’s social media withdrawals. They must explain why platforms that provided connection, learning, creativity, and even business opportunities are suddenly off-limits.
Many parents lack guidance on managing the ban’s effects or on what to do if their children circumvent it. A US company being “held accountable” through a relatively small fine does little to alleviate the daily burden on families.
So, what is the solution? Platforms built to be safe by design, prioritising child wellbeing and online safety. This is not hypothetical; I have seen such platforms being developed. They show that better systems are possible, where algorithms are not addictive, adults cannot contact children, and harmful content is removed rather than amplified. At the same time, children can continue to enjoy the benefits of online spaces, such as socialising, learning, creativity, and fun.
In 2018, Australia’s Office of the eSafety Commissioner began researching and consulting on the Safety by Design Initiative, which provides principles and guidelines for technology companies to embed safety into platform design.
Rather than relying on companies’ goodwill, legislation should require these standards to be met, preventing profit from being prioritised over child wellbeing. Platforms would need to meet minimum safety requirements before being accessible to under-16s, giving parents greater confidence that their children can engage safely online. This would also help alleviate the tension and conflict many families currently face around digital use.
I fear the recent verdicts amount to symbolic accountability. Legal wins combined with restrictive bans do not create structural change in these Big Tech companies, at least not anytime soon. Meanwhile, children, parents, and those of us working on the ground are left to manage the complexities of social media and digital platforms.
If we do not act holistically and intentionally, we risk losing another generation to systems designed for profit rather than protection. Real accountability is not headlines, fines, or bans; it is structural change embedded in the design of the platforms themselves.
Until that happens, it is children and families who pay the price.

