Thu. May 7th, 2026

Children’s Wellbeing and Schools Bill prompts ethical concerns

[Image: teens using smartphones. Credit: Xavier Lorenzo/Adobe]


The Children’s Wellbeing and Schools Bill is a wide-ranging bill from the Department for Education, intended to improve child safety by setting out the standards expected in the duty of care for children.

However, recent amendments to the bill, proposed by the Department for Science, Innovation and Technology, have introduced two provisions that raise ethical concerns regarding content moderation and the sharing of personal data.

The timing of these amendments is especially concerning, as they were included after the third reading in the House of Lords. The bill is currently at the consideration of amendments stage, prior to submission for royal assent (after which the bill would be enacted into UK law).

“We’ve been clear that we will take action to make sure children have a healthy relationship with mobile phones and social media. That’s why we’ve launched a consultation looking at everything from age limits and safer design features to a social media ban – seeking views from experts, parents and young people to ensure we take the best approach, based on the latest evidence,” said a spokesperson for the Department for Science, Innovation and Technology (DSIT).

“The amendments made to the Children’s Wellbeing and Schools Bill will allow us to act quickly on the outcomes of this consultation and help give young people the childhood they deserve.”

Parliament, including the House of Lords, has been in recess for Easter. As such, MPs and peers have had little time to scrutinise amendments that potentially have far-reaching consequences beyond what was intended. However, the government has said that MPs and peers will have the chance to debate and vote on any measures brought forward.

“Amendments are being introduced into bills at very late stages in the House of Lords, and then you have this last-minute scramble and disagreement between the Lords and the Commons as to how to create legislation to deal with the issue that has arisen,” says James Baker, platform power and free expression programme manager for the Open Rights Group.

“They are such politically controversial hot topics that the government can’t ignore what’s going on in the Lords. It’s a terrible way to create legislation and bypasses huge amounts of Parliamentary scrutiny. That’s why we’ve ended up with a motion to ban all 16-year-olds from social media in a bill that was about improving standards in schools.”

Age restrictions for VPNs

The first of these amendments would compel virtual private network (VPN) providers to verify the age of all their users in the UK.

The bill states: “Within 12 months of the day on which this act is passed, the secretary of state must, for the purpose of furthering the protection and wellbeing of children, make regulations which prohibit the provision to UK children of a relevant VPN service (the ‘child VPN prohibition’).”

Although the bill is aimed at children (anyone under the age of 18), everyone would have to undergo these checks for VPN providers to identify the ages of their users in a “highly effective” manner. Although age verification could be undertaken by the VPN providers themselves, it is more often conducted by a third party.

Despite the recent proliferation of age verification service providers, the industry remains largely unregulated beyond the core data protection legislation (the Data Protection Act 2018 and UK GDPR).

This is despite the sensitive information (biometric data, identity documents, etc) that is shared with these providers. Yoti, the market leader in the age verification industry, was fined 950,000 euros (nearly £830,000) by the Spanish data protection authority earlier this year for excessive retention and unlawful processing of personal data.

The VPN age restriction amendment could also be used to enable further regulations requiring people to use a particular method for proving their age online, such as a government-issued digital ID. Digital ID is a scheme the current government has been pursuing, despite wide-ranging resistance.

Censoring the internet

The second amendment that has caused concern would require providers of internet services to restrict children’s access to certain internet services. The bill states that: “By regulations made by statutory instrument require all regulated user-to-user services to use highly-effective age assurance measures to prevent children under the age of 16 from becoming or being users.”

Like the VPN age restriction amendment, this raises ethical challenges around verifying everyone’s age, this time for internet service providers (ISPs). To protect children, service providers would have to correctly identify the age of their users, which would mean relying on age verification technologies to meet the regulatory requirements.

The amendment also gives ministers the power “to require internet service providers to restrict access by children to certain internet services”. Although content blocking is intended to protect children from encountering unsuitable material online, it carries a significant risk of being misused for more insidious purposes.

Furthermore, the amendment could allow the government to restrict internet content without passing new legislation or demonstrating that harm to children was likely. It would effectively enable this government, or any future government, to decide what it considers harmful and block certain content arbitrarily.

“They’ve got this idea that all platforms are bad and that we know what’s best, so we’re going to tell the platforms what to do,” says Baker. “Whereas a lot of the platforms have trust and safety teams with intelligent people who’ve been working on these problems for a long time and are now having design solutions imposed on them.

“A better approach than government micromanaging features might be to regulate the business models that create this conflict between platforms’ need to maximise attention for ad revenue, and user experience or safety.”

It is also worth noting that this will apply to all internet services, not just to websites, so any service provider with an online component will be affected. This includes streaming services such as YouTube or Netflix, video games with online multiplayer capabilities, and messaging services.

Significant concerns are raised

There is a valid concern that this amendment for protecting against “harmful” content could be used to restrict content that a government may be ideologically opposed to. A potential example of this would be a government choosing to block access to information and support regarding LGBTQIA+ identities.

This could therefore potentially pave the way for a digital version of Section 28 (a former amendment in the Local Government Act that prohibited local authorities from promoting homosexuality or teaching that it was acceptable, which was repealed in 2003).

There is also the consideration that ISPs provide internet connectivity to households and not just a specific person. So, if the amendment to the bill were to come into effect as written, it would be necessary for ISPs to determine who was using the internet at any given time. This issue would be compounded if multiple people were using the same ISP connection to access different content.

The Online Safety Act 2023, which came fully into force last year, mandates that social media and internet search services protect children via age assurance against pornography, self-harm and bullying content.

“The Online Safety Act received some criticism because AI chatbots aren’t included in it, so by making the categories really broad, it gives them flexibility,” explains Baker. “That way, if something comes up and there is public outcry about it, they have the power to pass secondary legislation and essentially place it behind an age ban.”

It is also worth noting that blocking harmful content ignores the underlying causes of online harm, as it does not address the engagement-driven design and data-fuelled advertising business models that can amplify toxic content to provoke reactions. Nor does the amendment support digital literacy initiatives to build resilience among young people or promote safer online platform alternatives.

The UK government already has age restrictions in place for accessing pornographic websites, courtesy of the Online Safety Act 2023, and online video platforms require users to verify their age before watching mature content. Additionally, the UK already has an age rating policy, courtesy of the British Board of Film Classification (BBFC), which applies age ratings to “films, videos and websites”.

The human rights group Liberty is already calling for a new amendment to be added to the Children’s Wellbeing and Schools Bill to address some of the concerns regarding the proportionality of incorporating an age verification checkpoint. 

The amendments to the Children’s Wellbeing and Schools Bill create significant ethical implications for the sharing and processing of personal data. By expanding its remit to incorporate children’s online experiences, the government would potentially be paving the way for unregulated age verification, content moderation and internet monitoring.

By uttu
