As the UK’s Online Safety Act takes effect, major social media platforms—particularly X (formerly Twitter) and Reddit—have begun broadly restricting content, including material of clear public interest such as footage from the wars in Gaza and Ukraine. An investigation by BBC Verify reveals that these companies, in their efforts to comply with the new law, may be excessively cautious, inadvertently limiting access to essential news, parliamentary debates, historical artwork, and discussions about ongoing global conflicts.
What the Online Safety Act Requires
The Online Safety Act, whose child-protection duties came into force last Friday, imposes stricter content-regulation responsibilities on digital platforms. Its main goal is to shield underage users from harmful online material, including:
- Pornographic content
- Material promoting or encouraging self-harm
- Posts related to eating disorders
- Content inciting or glorifying violence
Under this law, websites and social media services can be fined up to £18 million or 10% of their global revenue, whichever is greater, if they fail to adequately restrict harmful material. In extreme cases, access to the platform itself may be blocked in the United Kingdom.
Age Verification Now Used to Gatekeep Sensitive Content
To enforce these requirements, companies like X and Reddit have implemented age-verification systems that restrict certain posts and communities unless users confirm their age. This age gate is designed to prevent children from stumbling upon dangerous or inappropriate content.
However, BBC Verify found that many non-explicit and newsworthy posts are being blocked under the same mechanism. These restrictions are not limited to harmful content, but are also affecting posts of significant educational and civic value.
Examples of Restricted Public Interest Content
Among the restricted content are:
- A video from Gaza showing a man navigating rubble to find the remains of his family. The post contains no visible bodies or graphic violence, yet access was denied to users without age verification.
- A clip from Ukraine showing the destruction of a Shahed drone mid-flight. The drone was unmanned, and no individuals were harmed. Still, users without verified ages were blocked from viewing it.
These posts were met with automated warnings stating that the content was restricted due to local laws until the system could estimate the user’s age. In some cases, warnings were removed only after media inquiries.
Reddit Applies Restrictions to Entire Communities
Reddit, a platform known for its diverse discussion communities, has added age-based access restrictions to various subreddits. Many of these communities serve as vital forums for news sharing and civic discussion, but are now hidden behind login barriers or blocked via search engines for users who have not verified their age.
Communities impacted include:
- r/UkraineConflict, with 48,000 members, which regularly shares verified footage and analysis of the war in Ukraine.
- Multiple Israel-Gaza war discussion boards, covering breaking news and humanitarian updates.
- Subreddits focused on public health, where discussions on sensitive topics such as mental health, reproductive care, or gender identity occur.
In each case, users must log in and confirm their age to gain access—regardless of whether the content inside is actually inappropriate or graphic.
UK Parliamentary Debate Also Caught in Restrictions
The effects of the law are not limited to war footage or health-related content. Even political debates in the UK Parliament have been flagged and restricted.
A speech by Katie Lam, a Conservative MP elected in 2024, was restricted on X. The speech addressed a serious criminal matter involving the abuse of minors by grooming gangs—a topic of considerable public concern. Although the speech is fully accessible on the UK Parliament’s official streaming site, ParliamentLive.tv, it is gated on X behind age verification.
This example highlights how even official political proceedings can be unintentionally caught up in moderation systems designed to protect minors.
Art and Culture Also Impacted
The overreach of the restrictions also extends into art and culture. One notable case involved an image of Francisco de Goya’s painting “Saturn Devouring His Son.” The artwork, a 19th-century masterpiece depicting a scene from classical mythology, has been publicly displayed in museums and is widely taught in art history. Nevertheless, it was restricted due to its dark and potentially disturbing theme, even though it is a significant work of cultural commentary.
Experts Raise Alarm About Overblocking and Censorship Risks
Legal and technology experts have expressed concern that companies are applying the law too broadly out of fear of noncompliance. By doing so, they risk undermining public discourse, freedom of expression, and the right to access information.
There is growing apprehension that social media platforms, in the absence of clear moderation guidelines, are erring on the side of over-caution. The concern is not only about blocking genuinely harmful content, but about sweeping up informative, educational, and civic materials that should remain accessible to adult audiences.
Platforms May Be Intentionally Overblocking to Avoid Penalties
It is possible that companies are overblocking as a legal defense, effectively placing themselves in a safer position under the law. According to BBC Verify, neither X nor Reddit responded to specific questions about their content moderation policies or how they plan to refine them.
Some digital rights experts believe that platforms are still in the early stages of implementing the law and may adjust their enforcement over time as moderation tools and content classification improve.
However, enforcement bodies like Ofcom, the UK’s media regulator, have emphasized that platforms must not only protect children but also avoid infringing on free speech. The law calls for a risk-based approach, and platforms that misapply it to restrict political debate or journalism may also face scrutiny.
Elon Musk’s Public Opposition to the Law
Elon Musk, the billionaire owner of X, has been vocally opposed to the Online Safety Act. He has frequently criticized the law on social media, framing it as a threat to democratic values and open communication. Musk claims the law’s true intent is to suppress dissent and discourage companies from operating freely in the UK market.
He has amplified posts from far-right figures who also oppose the legislation and has positioned X as a platform standing against what he perceives as government overreach.
Logged-Out Users Are Treated Like Minors
One of the law’s unintended consequences is that a large number of UK adults are being treated as children online. Based on data provided by the platforms:
- Around 37% of users on X access the platform while logged out.
- On Reddit, the share is even higher: 59% of users browse without accounts.
Because these users have not verified their age, the platforms classify them under the same rules that apply to minors, thus denying them access to a wide range of content—even when the information is harmless or informative.
Meta’s Enforcement Model Is Less Transparent
Unlike X and Reddit, Meta’s platforms (Facebook and Instagram) follow a different model. Meta assigns “teen” account types that come with built-in parental controls and stricter content access limitations. However, because Meta’s moderation and classification system is more opaque, it is harder for outside observers to measure which content is being restricted under the new UK law.
This lack of visibility makes it difficult to evaluate whether Meta is overblocking content, under-enforcing the law, or striking a balance between safety and access.
Law Has Reduced Some Harmful Content, But Moderation Lacks Nuance
Despite these concerns, early evidence suggests that the Online Safety Act has successfully reduced exposure to genuinely harmful material. Since Musk’s acquisition of X, the platform has been criticized for lax enforcement, with an uptick in pornographic content, violent posts, and hate speech. However, BBC Verify found that much of this material is now significantly restricted under the new law when browsing from an unverified account.
Still, experts caution that moderating content at this scale and complexity requires skilled and well-staffed teams. The challenge is that many major platforms, including X and Meta, have recently downsized or eliminated their trust and safety teams, leaving them under-equipped to make nuanced decisions.
This presents a significant risk: automated systems may not differentiate between a harmful video and a graphic yet necessary piece of journalism. Without trained moderators and detailed guidance, platforms are left to apply vague rules using blunt tools—leading to the current overblocking crisis.
A Complex Balancing Act Between Safety and Freedom
The Online Safety Act marks a turning point in the UK’s approach to internet regulation. Its intentions—to protect children from online harm—are noble and necessary. But the unintended consequence has been the suppression of content that informs, educates, and empowers the public.
While the law is still new, and platforms are adapting, the evidence so far indicates that the balance between safety and free expression is not yet right. Regulators, lawmakers, and tech companies alike must now engage in dialogue to ensure that the law’s enforcement protects both young users and the democratic values of open debate, civic transparency, and access to information.