UK online safety regime ineffective on misinformation, MPs say | Computer Weekly

News Room | Published 13 July 2025

The UK’s Online Safety Act (OSA) is failing to address “algorithmically accelerated misinformation” on social media platforms, leaving the public vulnerable to a repeat of the 2024 Southport riots, MPs have warned.

Following an inquiry into online misinformation and harmful algorithms, the Commons Science, Innovation and Technology Committee (SITC) has identified “major holes” in the UK’s online safety regime when it comes to dealing with the viral spread of false or harmful content.

Highlighting the July 2024 Southport riots as an example of how “online activity can contribute to real-world violence”, the SITC warned in a report published on 11 July 2025 that while many parts of the OSA were not fully in force at the time of the unrest, “we found little evidence that they would have made a difference if they were”.

It said this was due to a mixture of factors, including weak misinformation-related measures in the act itself, as well as the business models and opaque recommendation algorithms of social media firms.

“It’s clear that the Online Safety Act just isn’t up to scratch,” said SITC chair Chi Onwurah. “The government needs to go further to tackle the pervasive spread of misinformation that causes harm but doesn’t cross the line into illegality. Social media companies are not just neutral platforms but actively curate what you see online, and they must be held accountable. To create a stronger online safety regime, we urge the government to adopt five principles as the foundation of future regulation.”

These principles include public safety, free and safe expression, responsibility (for both end users and the platforms themselves), control of personal data, and transparency.

The SITC also made specific recommendations, such as creating “clear and enforceable standards” for the digital advertising ecosystem that incentivises the amplification of false information, and introducing new duties for platforms to assess and deal with misinformation-related risks. “In order to tackle amplified disinformation … the government and Ofcom should collaborate with platforms to identify and track disinformation actors, and the techniques and behaviours they use to spread adversarial and deceptive narratives online,” said MPs.

Business models and opaque algorithms

According to the SITC, social media companies have “often enabled or even encouraged” the viral spread of misinformation – and may have profited from it – as a result of their advertising and engagement-based business models.

“The advertisement-based business models of most social media companies mean that they promote engaging content, often regardless of its safety or authenticity,” MPs wrote. “This spills out across the entire internet, via the opaque, under-regulated digital advertising market, incentivising the creation of content that will perform well on social media.”

They added that while major tech companies told the committee there are no incentives to allow harmful content on their platforms, as it can damage the brand and repel advertisers, “policymaking in this space has lacked a full evidence base” because the inner workings of social media recommendation algorithms are not disclosed by the firms.

“We asked several tech companies to provide high-level representations of their recommendation algorithms to the committee, but they did not,” they said, adding that this “shortfall in transparency” makes it difficult to establish clear causal links between specific recommendations and harms.

“The technology used by social media companies should be transparent, explainable and accessible to public authorities,” they said.

The SITC added that the government should create measures to compel social media platforms to embed tools in their systems that can identify and algorithmically deprioritise fact-checked misleading content, or content that cites unreliable sources, where it has the potential to cause significant harm.

“It is vital that these measures do not censor legal free expression, but apply justified and proportionate restrictions to the spread of information to protect national security, public safety or health, or prevent disorder or crime,” said MPs.
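
The report does not prescribe how such deprioritisation would be implemented. Purely as a rough illustration of the distinction MPs draw between downranking and removal, the Python sketch below uses hypothetical field names (such as `fact_check_verdict` and `engagement_score`) to show how a feed-ranking step could reduce the reach of fact-checked or poorly sourced items without deleting them.

```python
from dataclasses import dataclass

# Hypothetical post record; the field names are illustrative, not from the report.
@dataclass
class Post:
    post_id: str
    engagement_score: float       # platform's predicted engagement, 0..1
    fact_check_verdict: str       # e.g. "none", "disputed", "false"
    cites_unreliable_source: bool

# Illustrative penalty factors: flagged content is downranked, not removed.
PENALTIES = {"none": 1.0, "disputed": 0.5, "false": 0.1}

def ranking_score(post: Post) -> float:
    """Downweight fact-checked or poorly sourced content in the feed ranking."""
    score = post.engagement_score * PENALTIES.get(post.fact_check_verdict, 1.0)
    if post.cites_unreliable_source:
        score *= 0.7  # additional illustrative penalty
    return score

# Example: a highly engaging but fact-checked post drops below a modest, unflagged one.
feed = [
    Post("a", 0.9, "false", False),
    Post("b", 0.6, "none", False),
    Post("c", 0.8, "disputed", True),
]
for post in sorted(feed, key=ranking_score, reverse=True):
    print(post.post_id, round(ranking_score(post), 3))
```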

On tackling the underlying business models that incentivise misinformation, MPs said there is a regulatory gap around digital advertising, as the focus is currently on harmful advertising content rather than “the monetisation of harmful content through advertising”.

“The government should create a new arm’s-length body – not funded by industry – to regulate and scrutinise the process of digital advertising, covering the complex and opaque automated supply chain that allows for the monetisation of harmful and misleading content,” they added. “Or, at the least, the government should extend Ofcom’s powers to explicitly cover this form of harm, and regulate based on the principle of preventing the spread of harmful or misleading content through any digital means, rather than limiting itself to specific technologies or sectors.”

While generative artificial intelligence (GenAI) only played a marginal role in the spread of misinformation before the Southport riots, the SITC expressed concern about the role it could play in a “future, similar crisis”.

The committee said GenAI’s “low cost, wide availability and rapid advances” mean that “large volumes of convincing deceptive content can increasingly be created at scale”.

It said the government should therefore pass legislation that covers GenAI platforms, in line with other online services that pose a high risk of producing or spreading illegal or harmful content.

“This legislation should require generative AI platforms to: provide risk assessments to Ofcom on the risks associated with different prompts and outputs, including how far they can create or spread illegal, harmful or misleading content; explain to Ofcom how the model curates content, responds to sensitive topics and what guardrails are in place to prevent content that is illegal or harmful to children; implement user safeguards such as feedback, complaints and output flagging; and prevent children from accessing inappropriate or harmful outputs.”

They added that all AI-generated content should be automatically labelled as such “with metadata and visible watermarks that cannot be removed”.
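
The report does not name a labelling standard. Purely to illustrate the two layers MPs describe (embedded metadata plus a visible mark), the sketch below uses Pillow to stamp a watermark and write PNG text chunks; note that ordinary metadata like this can be stripped, whereas the committee calls for labels “that cannot be removed”, which would require more robust provenance techniques than shown here.

```python
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_ai_image(in_path: str, out_path: str) -> None:
    """Attach a visible watermark and provenance metadata to an AI-generated image."""
    img = Image.open(in_path).convert("RGB")

    # Visible watermark in the bottom-left corner of the image.
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 20), "AI-generated", fill=(255, 255, 255))

    # Embedded metadata recording that the image was machine-generated.
    # These keys and values are illustrative only, not a defined standard.
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")
    metadata.add_text("generator", "example-model")

    img.save(out_path, "PNG", pnginfo=metadata)

# Example usage (assumes an AI-generated image exists at output.png):
# label_ai_image("output.png", "output_labelled.png")
```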
