The UK government is considering whether to ban under-16s from accessing social media, alongside a series of other measures intended to protect young people’s well-being.
These include proposals to restrict addictive design features such as infinite scrolling, limit children’s access to virtual private networks (VPNs), and preserve digital evidence following a child’s death.
The consultation will also address how tech companies can restrict devices so that suspected child sexual abuse material (CSAM) cannot be sent or received in the first place.
While the measures are being considered as part of a formal government consultation on children’s digital well-being, UK prime minister Keir Starmer has already announced that the government will change the Online Safety Act (OSA) to ensure artificial intelligence (AI) chatbot developers are required to protect users, after Elon Musk’s Grok platform was used to generate millions of non-consensual intimate images.
In the same announcement, Starmer also promised to introduce new legal powers so that rapid action on online safety can be taken off the back of the consultation process.
“The government is committed to following the evidence, and these powers will mean we can act fast on [the consultation] findings within months, rather than waiting years for new primary legislation every time technology evolves,” said a government press release.
While the UK’s OSA – which was initially proposed in April 2019 but only went into full effect in March 2025 – is one of the world’s strictest online safety regimes, campaigners have long warned that the laws are insufficient, leaving tech firms free to operate with impunity.
In January 2026, X’s Grok AI tool came under fire for mass-producing nudified content of women and girls without their consent. According to researchers, Grok AI generated about three million sexualised images in less than two weeks, including 23,000 that appeared to depict children.
However, online harms regulator Ofcom revealed it was not investigating the app because chatbots that interact with only a single user fall outside the scope of the Online Safety Act – a gap technology minister Liz Kendall says will be closed under the new powers.
While closing the loophole around AI chatbots has largely been welcomed, other proposals set to be considered in the consultation are less popular.
For example, 42 UK child protection charities – including the National Society for the Prevention of Cruelty to Children, the Molly Rose Foundation and 5Rights Foundation – cautioned in a joint statement that, though well-intentioned, blanket social media bans will fail to deliver the improvement in children’s safety and well-being that is urgently needed.
“They are a blunt response that fails to address the successive shortcomings of tech companies and governments to act decisively and sooner,” they said.
The Australian example
In December 2025, Australia became the first nation in the world to institute a social media ban for under-16s. Spain and France have promised similar bans in the next year, while Portugal has approved a bill promising explicit parental consent for children aged 13 to 16 to access social media.
The Australian experiment is the strictest and has focused on select platforms, including Facebook, Instagram, TikTok and Snapchat, which face fines of up to A$49.5m (£25m) if they fail to comply.
The companies are required to find ways to close existing accounts for under-16s and prevent new ones from being created. As of late January, the Australian government said social media companies have already revoked access to roughly 4.7 million accounts identified as belonging to children, and added that all 10 tech companies have been compliant.
Snapchat CEO Evan Spiegel warned that targeting only select platforms leaves thousands of other apps unregulated, noting that the government’s own trial found age estimation technology to be highly imperfect and often off by two to three years, particularly when applied to younger users.
He is advocating for age verification by app stores rather than by individual apps to minimise gaps in coverage and ensure uniform implementation of policy. Spiegel said this would reduce privacy risks by limiting how often personal information must be shared.
However, the Australian ban has already faced opposition, with more than 140 national and international academics signing an open letter to the prime minister opposing it. Two 15-year-olds have also sued the Australian government over the ban, arguing it will deny young Australians a right to freedom of political communication.
Ian Russell, chair of the Molly Rose Foundation, a UK-based suicide prevention charity for young people, cautioned against outright bans. “They risk unintended consequences that could leave children at greater risk of harm by treating the symptoms, not the problem,” he said.
“They let social media platforms off the hook by weakening the requirement for them to offer safe and high-quality experiences as a precondition for operating in the UK.”
Kerry Smith, CEO of the Internet Watch Foundation, added that the ban would be insufficient in tackling the production and distribution of child sexual abuse material.
“We need to see companies adopt wholesale the principle of safety-by-design, which embeds safeguards into the development of new technologies such as AI tools,” she said.
Chris Sherwood, CEO of the National Society for the Prevention of Cruelty to Children, said: “Much of what is being proposed mirrors what we have been pressing for: proper age-limit enforcement, an end to addictive design, and stronger action from platforms, devices and AI tools to stop harmful content at the source.”
He added that “delivered swiftly, these measures would offer far better protection than a blanket ban”.
Sherwood also highlighted that social media is not a luxury for many children: it’s a lifeline, a source of identity, community and vital support. He fears a ban would take those spaces away overnight, potentially driving teenagers into darker, unregulated corners of the internet.
“We also strongly support putting children’s voices at the centre of this debate,” said Sherwood. “They understand both the benefits and risks of being online, and – after their insights have been overlooked in discussions so far – their experiences must now help guide the decisions made in the months ahead.”
Russell, who has campaigned for better online protections for children since his 14-year-old daughter took her own life in 2017, said banning social media would be wrong, and that bereaved families are “horrified” at the way politicians have capitalised on the issue.
The UK government is also set to introduce Jools’ Law, which will require social media companies to automatically preserve children’s data when they die.
Age verification on VPNs
There is concern that, as with the OSA, the most significant challenge in implementing a social media ban would be enforcement.
While the OSA’s age verification measures went live in late July 2025, requiring platforms to verify users’ ages to access certain content or sites, the widely accessible and simple-to-use nature of VPNs – which enable users to essentially mask their IP addresses and locations – means they can be used to easily circumvent blocks on particular websites or content.
Jamie Hurthworth, a dispute resolution lawyer and Online Safety Act expert, told Computer Weekly: “Until the government addresses VPN regulation head-on, certain offences in the Online Safety Act can be sidestepped with relative ease.”
He also noted that VPNs can obscure where online content originates from.
“Accountability must extend across the entire digital ecosystem – from platforms that design and deploy these systems, to regulators tasked with enforcement, and to intermediaries that enable anonymity – or we risk creating a framework that looks robust on paper but proves porous in practice, leaving children exposed and enforcement powerless,” said Hurthworth.
Although the consultation is set to consider age restrictions on VPNs, limiting children’s ability to sidestep the current age verification processes in place for many sites, the proposal has raised a number of concerns.
Maya Thomas, a legal and policy officer at UK-based civil liberties group Big Brother Watch, said restricting VPN use for children “represents a draconian crackdown on the civil liberties of children and adults alike”.
Thomas added that it would lead to VPN providers introducing age checks for all users, not just children. With millions relying on VPNs for security and personal privacy, she said, an age verification check “utterly defeats the point” of using privacy tools in the first place.
“For victims of domestic abuse or control, VPNs are one of the best ways to safely access online help, and countless educational institutions and workplaces use VPNs for remote working and file-sharing,” she said, adding that the ability to receive and share information, without state snooping, is a vital part of living in a free democracy.
Gytis Malinauskas, head of legal at VPN provider Surfshark, noted that the company already prohibits users under 18 from using its services, and that the requirement for a paid subscription with a valid payment method serves as an additional safeguard against underage use.
Structural power of big tech
Aside from these concerns, digital rights groups have warned that the age verification process will give already-unregulated big tech companies access to large swathes of sensitive data.
James Baker at the Open Rights Group (ORG) said: “The government is playing whack-a-mole with online safety, focusing on individual harms and product features instead of confronting the structural power of dominant tech companies.
“Going through an age verification process often involves sending irreversible biometric identifiers into global commercial data ecosystems,” he said. “There are already examples of platforms using the additional data gained from these processes to target people with harmful online advertising.
“People are worried and angry about having to hand over biometric scans of their face to overseas companies just to access online platforms,” added Baker. “This is a particular concern when children’s faces are being scanned, too.”
In October 2025, Discord revealed that up to 70,000 users may have had their government-issued IDs exposed in a third-party data breach.
The ORG has also highlighted that there is currently no register of approved providers, no requirement for providers to meet any privacy or security standards, and no requirement for platforms to choose trusted or certified providers. It compared this with other high-risk industries, such as card payments, where robust and compulsory standards have been developed.
The OSA currently does nothing to stop age verification providers from distributing, profiling or monetising the personal data of UK residents going through verification. However, Ofcom may refer a provider to the data regulator if it is believed not to have complied with data protection law.
Jim Killock, the ORG’s executive director, noted that many big tech companies have chosen providers that are based outside the UK, leaving users exposed to unacceptable risks such as phishing, impersonation and reuse of data.
He also noted that the OSA applies to a wide range of services, from social media sites such as Bluesky and Reddit to Grindr, a location-based social networking and online dating application for gay, bisexual, queer and transgender people.
For example, Roblox, Reddit and Discord users have to submit facial scans to the age verification provider Persona, a company that Peter Thiel, co-founder of surveillance and data analytics company Palantir, has heavily invested in.
Palantir is controversial because of its track record of working with Immigration and Customs Enforcement, the Israeli military and the US Department of Defence. In May 2025, former employees signed an open letter accusing the company of authoritarianism for its large contracts with US president Donald Trump, enabling deportations through its programs, which track migrants in real time.
Andrew Breeze, director for online safety technology policy at Ofcom, told MPs at a Joint Committee on Human Rights hearing on 4 February that age assurance represented a trade-off for regulators between child protection and ensuring a high degree of online privacy.
He also stressed that on the internet, “there is no impregnable defence that you can create against a determined person, adult or child”.
Politically, there have also been concerns that the OSA is censoring political content in the name of protecting children, with reports of Palestine-related content being placed behind age verification walls on X and Reddit.
