The Online Safety Act, legislation introduced to rein in dangerous content shared online, has been criticised for its inability to target misinformation.
In a report, Parliament's Science, Innovation and Technology Committee said the act “fails to keep UK citizens safe from a core and pervasive online harm”.
Specifically, the MPs questioned the potential efficacy of the Online Safety Act during last year’s Southport riots, when dangerous misinformation was spreading fast on social media.
As reported by UKTN, Ofcom, the regulatory body responsible for enforcing the act, was unable to punish the likes of X and Meta over misinformation associated with the riots because the act had not been fully implemented yet.
However, the committee questioned whether Ofcom would have been able to act at the time even if the legislation had come into full force.
“When we questioned representatives of Meta, TikTok, and X, they were unable to say if or how the act would have changed their response to the unrest,” the report said.
“Ofcom confirmed that the act is not designed to tackle the spread of ‘legal but harmful’ content such as misinformation but said that, if it had been in place, platforms would have had to answer ‘a number of questions’ about risk assessments and crisis response mechanisms.”
According to the report, Baroness Jones, the minister responsible for online safety, argued to the committee that the act would have made a “real and material difference”, as it would have allowed Ofcom to insist that illegal posts be taken down.
These arguments did little to convince the committee.
“Even if it had been fully implemented, it would have made little difference to the spread of misleading content that drove violence and hate in summer 2024,” the report said.
Commenting on the government’s ability to regulate social media content, Jake Moore, global cybersecurity advisor at ESET, said: “Social media platforms are incentivised to amplify engaging content which is often misleading or harmful and there’s still little transparency around how their algorithms work, limiting the government’s and regulators’ ability to intervene effectively.
“Without transparency or audits on their algorithms, it’s very difficult to identify or reduce online harms.”