Australia’s internet regulator has accused the world’s largest social media companies of failing to properly enforce the country’s prohibition on under-16s accessing their platforms, despite legislation that came into force in December. The eSafety Commissioner, Julie Inman Grant, has expressed “significant concerns” about compliance by Facebook, Instagram, Snapchat, TikTok and YouTube, citing poor practices including allowing banned users to repeatedly attempt age verification and insufficient measures to prevent new accounts from being created. In its first compliance assessment since the prohibition came into force, the regulator found numerous deficiencies and has now shifted from observation to active enforcement, cautioning that platforms must show they have put in place “appropriate systems and processes” to stop under-16s from using their services.
Regulatory Breaches Uncovered in First Major Review
Australia’s eSafety Commissioner has outlined a troubling pattern of non-compliance amongst the world’s biggest social media platforms in her first review since the ban took effect on 10 December. The report shows that Meta, Snap, TikTok and YouTube have collectively failed to implement adequate safeguards to stop minors from using their services. Julie Inman Grant raised significant concerns about systemic weaknesses in age verification systems, noting that some platforms have allowed children who originally declared themselves to be under 16 to later assert they were older, effectively circumventing the law’s intent.
The findings indicate a notable intensification in the regulatory response, with the eSafety Commissioner moving beyond monitoring to active enforcement. The regulator has emphasised that simply showing some children still maintain accounts is inadequate; platforms must instead provide concrete evidence that they have put in place comprehensive systems and procedures designed to prevent under-16s from opening accounts in the first place. This shift signals the government’s commitment to hold tech giants accountable, with potential penalties looming for companies that fail to meet their statutory obligations.
- Allowing previously banned users to re-verify their age and restore account access
- Allowing repeated attempts at the same age assurance method with no repercussions
- Insufficient mechanisms to prevent under-16s from establishing new accounts
- Limited complaint mechanisms for parents and members of the public
- Absence of publicly available information about enforcement efforts and account removals
The Scope of the Issue
The considerable scale of social media activity amongst young Australians highlights the regulatory challenge facing both the authorities and the platforms themselves. With numerous accounts already restricted or removed since the implementation of the ban, the figures provide evidence of extensive early non-compliance. The eSafety Commissioner’s conclusions suggest that the operational and technical barriers to implementing age restrictions have proven far more complex than anticipated, with platforms struggling to distinguish genuine age declarations from false claims. This complexity has left enforcement authorities wrestling with the core issue of whether current age verification technologies are sufficient for the purpose.
Beyond the technical obstacles lies a wider issue about the willingness of platforms to place compliance ahead of user growth. Social media companies have consistently opposed stringent age verification measures, citing privacy concerns and the real challenge of confirming age online. However, the regulatory report suggests that some platforms might not be demonstrating adequate commitment to implementing the systems required by law. The move to active enforcement represents a critical juncture: either platforms will significantly enhance their compliance systems, or they risk facing substantial penalties that could transform their operations in Australia and potentially influence regulatory approaches internationally.
What the Figures Indicate
In the initial month following the ban’s implementation, Australian officials stated that 4.7 million accounts had been restricted or deleted. Whilst this statistic initially appeared to demonstrate enforcement effectiveness, later review reveals a more nuanced picture. The considerable quantity of account deletions indicates that many under-16s had managed to establish accounts in the first place, revealing that preventative measures were lacking. Moreover, the data casts doubt on whether deleted profiles represent genuine enforcement or merely users deleting their accounts of their own accord in light of the new restrictions.
The limited transparency surrounding these figures has frustrated independent observers attempting to evaluate the ban’s true effectiveness. Platforms have revealed little data about their enforcement methodologies, effectiveness metrics, or the characteristics of suspended accounts. This opacity makes it hard for regulators and the general public to evaluate whether the ban is functioning as designed or whether younger users are simply finding other methods to use social media. The Commissioner’s demand for detailed evidence of systematic compliance measures reflects mounting dissatisfaction with platforms’ reluctance to provide full information.
Industry Response and Opposition
The major tech platforms have responded to the regulator’s enforcement action with a combination of assurances of compliance and scepticism about the practical feasibility of the ban. Meta, which runs Facebook and Instagram, stressed its dedication to adhering to Australian law whilst at the same time contending that precise age verification remains a major challenge across the industry. The company has advocated for a different approach, suggesting that strong age verification systems and parental consent requirements implemented at the application store level would be more efficient than platform-level enforcement. This stance demonstrates wider concerns across the industry that the existing regulatory system places an unrealistic burden on separate platforms.
Snap, the creator of Snapchat, has adopted a more assertive public position, announcing that it had locked 450,000 accounts since the ban took effect and claiming to continue locking more daily. However, industry observers dispute whether such figures demonstrate genuine compliance or merely reactive account management. The fundamental tension between platforms’ business models—which historically relied on maximising user engagement and growth—and the statutory obligation to actively exclude an entire age demographic persists unaddressed. Companies have long resisted stringent age verification, citing privacy issues and technical constraints, establishing an impasse between regulators and platforms over who bears responsibility for implementation.
- Meta maintains age verification ought to take place at app store level rather than on individual platforms
- Snap says it has locked 450,000 accounts since the ban’s implementation in December
- Industry groups highlight privacy issues and technical obstacles as impediments to effective age verification
- Platforms contend they are doing their best whilst challenging the ban’s overall effectiveness
Wider Questions About the Prohibition’s Effectiveness
As Australia’s under-16 online platform ban moves into its enforcement phase, key concerns remain about whether the law will accomplish its stated objectives or merely drive young users towards unregulated platforms. The regulator’s initial compliance assessment reveals that despite months of implementation, substantial gaps remain—children keep discovering ways to bypass age verification systems, and platforms have struggled to prevent new underage accounts from being created. Critics contend that the ban’s success depends not merely on regulatory vigilance but on whether young people will genuinely abandon major social networks or simply migrate to alternative services, encrypted messaging applications, or VPNs that mask their location.
The ban’s international ramifications increase the complexity of assessing its success. Countries such as the United Kingdom, Canada, and several European nations are watching Australia’s experiment closely, considering similar legislation for their own populations. If the ban does not successfully reduce children’s digital engagement or does not protect them from damaging material, it could weaken the case for similar measures elsewhere. Conversely, if enforcement becomes sufficiently rigorous to genuinely restrict underage access, it may encourage other administrations to pursue similar approaches. The result will likely influence international regulatory direction for the foreseeable future, ensuring that Australia’s implementation efforts are scrutinised far beyond its borders.
Who Gains and Who Loses
Mental health campaigners and organisations focused on child safety have endorsed the ban as an essential measure to counter algorithmic manipulation and exposure to harmful content. Parents and educators argue that taking young Australians off platforms designed to maximise engagement could lower anxiety levels, enhance sleep quality, and reduce exposure to cyberbullying. Tech companies’ own research has recognised the risks to mental health linked to social media use amongst adolescents, adding weight to these concerns. However, the ban also eliminates legitimate uses of social media for young people—keeping friendships alive, accessing educational content, and engaging with online communities around shared interests. The regulatory framework assumes harm outweighs benefit, a calculation that some young people and their families dispute.
The ban’s practical impact extends beyond individual users to influence content creators, small businesses, and community organisations dependent on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now encounter legal barriers to participation. Small Australian businesses that depend on social media marketing no longer reach younger demographic audiences. Community groups, charities, and educational organisations have trouble connecting with young people through channels they previously employed effectively. Meanwhile, the ban unintentionally advantages large technology companies with resources to develop age verification infrastructure, arguably consolidating their market dominance rather than reducing it. These unexpected outcomes suggest the ban’s effects go well past the simple goal of child protection.
What Lies Ahead for Enforcement
Australia’s eSafety Commissioner has indicated a significant shift from passive oversight to proactive enforcement, marking a pivotal moment in the execution of the youth access prohibition. The regulator will now compile information to ascertain whether platforms have failed to take “reasonable steps” to restrict child participation, a statutory benchmark that extends beyond simply recording that minors continue using these platforms. This approach demands demonstrable proof that platforms have introduced appropriate systems and procedures meant to keep out minors. The Commissioner’s office has stated it will launch probes systematically, building cases that could trigger significant fines for non-compliance. This shift from monitoring to enforcement reflects growing frustration with the services’ existing measures and suggests that voluntary cooperation on its own will not be enough.
The implementation stage raises significant questions about the sufficiency of sanctions and the practical mechanisms for maintaining corporate accountability. Australia’s statutory provisions offer enforcement instruments, but their success relies on the eSafety Commissioner’s readiness to undertake regulatory enforcement and the platforms’ capability to adjust meaningfully. Overseas authorities, particularly regulators in the UK and EU, will closely monitor Australia’s regulatory approach and its consequences. A successful enforcement campaign could create a model for other nations contemplating similar bans, whilst shortcomings might undermine the case for such a regulatory system. The coming months will be critical in determining whether Australia’s pioneering regulatory approach translates into real safeguards for young people or stays primarily ceremonial in its impact.
