The UK’s internet regulator, Ofcom, is at risk of losing public trust if it fails to use its powers to tackle online harms, the technology secretary, Liz Kendall, has said.
Kendall last week told Ofcom’s chief executive, Melanie Dawes, she was deeply disappointed at the pace of the regulator’s enforcement of parts of the Online Safety Act, which is intended to protect the public from harms caused by a wide range of online platforms, from social media to pornography websites.
Ofcom has insisted the delays were beyond its control and that “change is happening”. But Kendall told the Guardian: “They know that if they don’t implement [and] use the powers that they’ve got in the act, they will lose the trust of the public.”
Last week, the father of Molly Russell, who took her own life at 14 after viewing harmful online content, said he had lost trust in the watchdog’s leadership.
Asked on Thursday whether she trusted the regulator's leadership, Kendall declined to give her backing.
The tech secretary was speaking amid concerns that parts of the online safety regime are not expected to come into effect until the middle of 2027 – nearly four years after the Online Safety Act became law – and that, in the meantime, the pace of technological advance risks outstripping government guardrails.
Kendall said she was now “really worried about AI chatbots” and “the impact they’re having on children and young people”.
The dangers have been highlighted by US lawsuits involving teenagers who took their own lives after becoming highly engaged with ChatGPT and Character.AI chatbots, which they treated as confidants and advisers.
“If chatbots aren’t included or properly covered by the legislation, and we’re really working through that now, then they will have to be,” Kendall said. “People have got to feel their kids are safe.”
Ofcom’s chair, Michael Grade, is due to step down in April, prompting a search for a new leader. Dawes, a career civil servant, has been in post as chief executive for nearly six years. Ofcom declined to comment.
On Thursday, the watchdog fined a “nudify” app £50,000 for failing to protect children from accessing pornography. Nudify apps typically use AI to “undress” uploaded photos.
Kendall said Ofcom was “rightly pressing forward”. It was the second fine issued by the regulator under the act since it became law more than two years ago.
Kendall was speaking in Cardiff at the launch of a new AI “growth zone”, which the government hopes will attract £10bn in investment and create 5,000 jobs on various sites stretching from the Ford Bridgend engine plant to Newport.
The government said Microsoft was one of the companies “joining forces with the government”, but Microsoft said it was not making any new investment commitments.
Ministers are also hoping to use £100m to back British startups, particularly in designing the chips that power AI, where the government believes the UK has a competitive advantage. But it may be tough to compete with the US chip manufacturer Nvidia, which this week reported it is making nearly $22bn a month.
On Wednesday, a Labour MP alleged Microsoft had been “ripping off” the UK taxpayer. The US tech company made at least £1.9bn from government deals in the 2024-25 financial year.
Asked if she agreed, Kendall praised Microsoft’s AI technology, as used in her own constituency to create school lesson plans, but said: “We’ve got to do more to make sure that we have got the right people in the room who know about those companies and can negotiate the best possible deal. I would also like to see more homegrown companies, particularly in AI.”
A spokesperson for Microsoft said the NHS buys its services through a national pricing framework negotiated by the UK government, “ensuring both transparency and value for money” and that its partnerships provide “measurable benefits”.
“The UK government chooses to spend its technology budget across a variety of suppliers and Microsoft is privileged to be one of them,” they said.
