As Australia’s social media age restrictions come into effect from 10 December 2025, many parents and children under 16 are asking what the changes will mean for them.
In an effort to protect the wellbeing of young Australians, the government will be requiring platforms like Instagram, TikTok, and YouTube to block or deactivate accounts belonging to users under 16, or face large fines.
The onus will be on the platforms to take reasonable steps to prevent Australians under the age of 16 from creating or keeping an account.
While the government has said the list of platforms included in the ban will continue to be updated prior to 10 December, the current list includes nine platforms: Facebook, Instagram, Snapchat, Threads, TikTok, X, YouTube, Kick and Reddit.
Under 16s who already have accounts on these platforms will not be allowed to keep them, with the government saying the platforms “will have to take reasonable steps to find and remove or deactivate” these accounts.
Once a young person turns 16, they’ll be allowed to rejoin these platforms.
Instead of removing accounts, some social media platforms may deactivate them so they can be reactivated with all their existing data when the user turns 16. However, the government says users should not rely on platforms to provide this option and recommends that under-16s download any data they want to save before 10 December, in case they can’t reactivate their accounts.
How will platforms stop under-16s from making an account?
When it comes to age assurance, the government has not laid out specific approaches or technologies that each company must use.
That means each service will need to determine its own strategy to prevent under-16s from faking their age by using false identity documents, AI tools or deepfakes, or even using VPNs to pretend to be outside Australia.
Guidance from the government does specify that companies must ensure they are “avoiding reliance on self-declaration alone”, meaning the platform can’t simply ask users their age. Companies also “cannot use government ID as the sole method” to determine whether a user is 16 or above.
Platforms can use other methods, such as assessing existing data: how long an account has been active, whether the account holder interacts with content targeted at children under 16, the language the account holder uses, activity patterns consistent with school schedules, connections with other users who appear to be under 16, and membership of youth-focused groups.
Visual checks, such as facial age analysis of the account holder’s photos and videos, are another option for platforms, as is audio analysis, such as age estimation of the account holder’s voice.
Location-based signals might also help a platform determine if a user is under 16. For example, if an account holder is using a VPN to pretend to live outside Australia, the platform might check IP addresses, GPS or location services, a listed Australian phone number, or photos and connections in Australia.
Evidence that a user is under 16 will likely trigger the platform’s age assurance process or a review of the account.
eSafety has said it doesn’t expect a platform to make every account holder go through an age check process if it has other accurate data indicating the user is 16 or older.
How will this data be safely managed?
Given the data that social media platforms are likely to gather from under-age users, privacy is an important consideration: that information must not be misused or stolen.
The government has stated that social media platforms are legally required to ensure any personal information they collect to check that a user is 16 or older is not used for other purposes without their consent, including marketing.
This is protected by the Social Media Minimum Age legislation, which builds on the existing privacy protections contained in the Privacy Act.
To test whether age assurance can be done successfully, the government recently conducted an Age Assurance Technology Trial, which found that a variety of methods provide effective age checks.
Last month, the Office of the Australian Information Commissioner issued guidance on how social media platforms should handle personal information for age assurance.
Some of the recommendations included in the guidance are for companies to destroy any data that’s no longer needed for age assurance after it’s been collected and to undertake a privacy impact assessment when choosing a method for age assurance.
Who is at fault if under-16s manage to get on the platforms?
At a press conference on Wednesday, communications minister Anika Wells and eSafety commissioner Julie Inman Grant said the social media companies included on the list had been given ample notice that they would be included in the ban.
While Australians under 16 won’t get into trouble if they’re still on an age-restricted platform after 10 December, the platforms themselves could face fines of up to $49.5 million.
The government will be watching social media platforms to ensure the ban is upheld, and the fines are intended as an incentive for platforms to comply with the law.
“If [the platforms] have not given thought to this up until today, that is nobody’s business but theirs. They’ve had twelve months’ notice,” Wells said.
Platforms not included on the current ban list include online gaming and standalone messaging apps such as Discord, GitHub, Google Classroom, LEGO Play, Messenger, Roblox, Steam and Steam Chat, WhatsApp and YouTube Kids.
Last week, Australian Federal Police commissioner Krissy Barrett warned that Roblox was among services being used by criminals to groom and abuse young women.
While Roblox is not currently included in the under 16s ban, Inman Grant confirmed on Wednesday that the platform was “on the line” of being added to the “dynamic list”.
“Delaying children’s access to social media accounts gives them valuable time to learn and grow, free of the powerful, unseen forces of harmful and deceptive design features such as opaque algorithms and endless scroll,” Inman Grant said.

