
Australia to Enforce World-First Social Media Ban for Under-16s Starting Dec 10


In a global first, Australia will prohibit children under 16 from using major social media platforms from next Wednesday, December 10, in a controversial move aimed at shielding young people from online harm.


The landmark legislation, passed late last year with bipartisan support, will block minors from creating new accounts on platforms such as TikTok, Instagram, Facebook, X (formerly Twitter), Snapchat, YouTube, Threads, Reddit, Twitch, and Kick.

Existing underage accounts must be deactivated.

Prime Minister Anthony Albanese’s government says the ban is necessary to combat addictive algorithms and exposure to dangerous content. A 2025 government-commissioned study revealed that 96% of Australian children aged 10–15 are active on social media, with seven in ten encountering violent, misogynistic, or body-image-related harmful material, and more than half reporting cyberbullying. One in seven said they had experienced grooming-like behaviour from adults.

Which platforms are affected?

The law targets services whose primary function is social interaction and content sharing between users. Messaging apps like WhatsApp, educational tools such as Google Classroom, and the child-specific YouTube Kids app are exempt. Minors will still be able to watch videos or access content that does not require an account.

Online gaming platforms, including Roblox and Discord, as well as dating sites and AI chatbots, are notably outside the ban’s scope — a decision that has drawn sharp criticism.

How will it be enforced?

Responsibility falls entirely on the tech companies, not families. Platforms must deploy “reasonable” age-verification measures — ranging from government-issued ID and biometric facial scans to behavioural “age inference” tools. Self-declaration or parental consent alone will not be accepted.

Companies face penalties of up to A$49.5 million (US$32 million) for systemic failures. Meta has already begun suspending accounts of users it believes are under 16, while Snapchat is rolling out ID and selfie verification.

Will the ban actually work?

Experts and critics are deeply sceptical. Facial age-estimation technology is at its least accurate for teenagers, potentially locking out legitimate adult users while determined minors slip through. Many Australian teens have told media they plan to use fake birth dates, shared parent–child accounts, or VPNs to mask their location, tactics that surged in the UK after similar age-verification rules were introduced for adult sites.

Former Facebook Australia boss Stephen Scheeler pointed out that Meta earns the equivalent of the maximum fine (A$49.5 million) in under two hours, raising doubts about its deterrent effect.

Opponents argue that education, stronger privacy laws, and targeting harmful content directly would better protect children without a blanket ban that excludes gaming platforms and emerging AI services recently linked to child-safety scandals.

Communications Minister Anika Wells acknowledged the rollout will be messy: “This is big reform. It’s going to be untidy at the start, but we believe the risk of doing nothing is far greater.”

Governments in the United States, the United Kingdom, and the European Union are closely monitoring the Australian experiment as they consider their own restrictions on youth social media use.

