The launch of OpenAI's new GPT-5 model "reinforces the growing gap between AI capabilities and our ability to govern them", warns the Ada Lovelace Institute.
The latest iteration of OpenAI’s flagship product ChatGPT launched this week, with the company claiming it had now reached “PhD-level” intelligence.
As a major advance in AI capabilities, the launch has sparked concern at the UK-based research organisation, which has reiterated its "urgent" calls for regulation.
According to the group, the power and effectiveness of AI technology have increased rapidly with GPT-5, while large questions about safety, security, legality and the impact on human jobs go unanswered.
While the government has delayed firm AI legislation in the hope of encouraging growth and avoiding the harsh backlash seen in the European Union following its AI Act, the organisation's research suggests public opinion favours regulation.
The group found that 72% of the UK public say laws and regulation would increase their comfort with AI, and 87% say it is important that governments or regulators have the power to stop the release of harmful AI systems.
“Almost three years since the Bletchley AI Summit, the only actors making the decisions regarding whether such systems are safe enough to release are the companies themselves,” the institute said.
“Currently, neither government nor regulators have meaningful powers to compel companies to provide transparency, to report incidents, to undertake safety testing or to remove models from the market if they prove unsafe.”