Second ever international AI safety report published | Computer Weekly

News Room
Published 10 February 2026; last updated 2:55 PM, 10 February 2026

The overall trajectory of general-purpose artificial intelligence (AI) systems remains “deeply uncertain”, even as the technology’s proliferation is generating new empirical evidence about its impacts, the second International AI safety report has found.

Published on 3 February 2026, the report covers a wide range of threats posed by AI systems – from their impact on jobs, human autonomy and the environment to the potential for malfunctions or malicious use – and will be used to inform diplomatic discussions at the upcoming India AI Impact Summit.

The latest report builds on its predecessor, released in January 2025, which was commissioned following the inaugural AI Safety Summit hosted by the UK government at Bletchley Park in November 2023. Like that report, it highlights a high degree of uncertainty around how AI systems will develop, and around which mitigations would be effective against a range of challenges.

“How and why general-purpose AI models acquire new capabilities and behave in certain ways is often difficult to predict, even for developers. An ‘evaluation gap’ means that benchmark results alone cannot reliably predict real-world utility or risk,” it says, adding that systemic data on the prevalence and severity of AI-related harms remains limited for the vast majority of risks.

“Whether current safeguards will be sufficiently effective for more capable systems is unclear,” it adds. “Together, these gaps define the limits of what any current assessment can confidently claim.”

It further notes that while general-purpose AI capabilities have improved in the past year through “inference-time scaling” (a technique that allows models to use more computing power to generate intermediate steps before giving a final answer), the overall picture remains “jagged”, with leading systems excelling at some difficult tasks while failing at simpler ones.

On AI’s further development to 2030, the authors say plausible scenarios vary dramatically.

“Progress could plateau near current capability levels, slow, remain steady, or accelerate dramatically in ways that are difficult to anticipate,” it says, adding that while “unprecedented” investment commitments suggest major AI developers expect continued capability gains, unforeseen technical limits – including energy constraints, high-quality data scarcity and bottlenecks in chip production – could slow progress.

“The social impact of a given level of AI capabilities also depends on how and where systems are deployed, how they are used, and how different actors respond,” it says. “This uncertainty reflects the difficulty of forecasting a technology whose impacts depend on unpredictable technical breakthroughs, shifting economic conditions and varied institutional responses.”

Systemic impacts

Regarding the systemic impact on labour markets, the report notes that there is disagreement on the magnitude of future impacts, with some expecting job losses to be offset by new job creation, and others arguing that widespread adoption would significantly reduce both employment and wages.

It adds that while it is too soon for a definitive assessment of the impacts, early evidence suggests junior positions in fields like writing and translation are most at risk.

Relatedly, it says that AI systems also present risks to human autonomy, in the sense that reliance on AI tools can weaken critical thinking skills and memory, while also encouraging automation bias.

“This relates to a broader trend of ‘cognitive offloading’ – the act of delegating cognitive tasks to external systems or people, reducing one’s own cognitive engagement and therefore ability to act with autonomy,” it says. “Cognitive offloading can free up cognitive resources and improve efficiency, but research also indicates potential long-term effects on the development and maintenance of cognitive skills.”

As an example, the report cites one study which found that clinicians’ ability to detect tumours without AI assistance had dropped by 6% just three months after the introduction of AI support.

On the implications for income and wealth inequality, it says general-purpose systems could widen the disparities both within and between countries.

“AI adoption may shift earnings from labour to capital owners, such as shareholders of firms that develop or use AI,” it says. “Globally, high-income countries with skilled workforces and strong digital infrastructure are likely to capture AI’s benefits faster than low-income economies.

“One study estimates that AI’s impact on economic growth in advanced economies could be more than twice that in low-income countries. AI could also reduce incentives to offshore labour-intensive services by making domestic automation more cost-effective, potentially limiting traditional development paths.”

The prediction that AI is likely to exacerbate inequality by reducing the share of all income that goes to workers relative to capital owners is in line with a January 2024 assessment of AI’s impacts on inequality by the International Monetary Fund (IMF), which found the technology will “likely worsen overall inequality” if policymakers do not proactively work to prevent it from stoking social tensions.

JPMorgan boss Jamie Dimon expressed similar concerns at the 2026 World Economic Forum, warning that the rapid roll-out of AI throughout society will cause “civil unrest” unless governments and companies work together to mitigate its effect on job markets.

Malfunction and loss of control issues

On AI’s scope for malicious use – which covers threats such as cyber attacks, its potential for “influence and manipulation”, and the impacts of AI-generated content – the report says this “remains difficult to assess” due to a lack of systemic data on the prevalence and severity of such harms, despite their proliferation.

For malfunction risks, which include challenges around the reliability of AI and the loss of human control over it, the report adds that agentic systems capable of acting autonomously are making it harder for humans to intervene before failures occur, and could allow “dangerous capabilities” to go undetected before deployment.

However, it says that while AI systems are not yet capable of creating loss-of-control scenarios, there is currently not enough evidence to determine when or how they might pass this threshold.

Evidence chasms

According to the report, it is clear that more research is needed to understand the prevalence of different risks and how much they vary across different regions of the world, especially in regions such as Asia, Africa and Latin America that are rapidly digitising. 

“There is a lack of evidence on: how to measure the severity, prevalence, and timeframe of emerging risks; the extent to which these risks can be mitigated in real-world contexts; and how to effectively encourage or enforce mitigation adoption across diverse actors,” it says.

“Certain risk mitigations are growing in popularity, but more research is needed to understand how robust risk mitigations and safeguards are in practice for different communities and AI actors (including for small and medium-sized enterprises).

“Further, risk management efforts currently vary highly across leading AI companies,” it continues. “It has been argued that developers’ incentives are not well-aligned with thorough risk assessment and management.”

The report notes that while tech firms have made a number of voluntary commitments – including the Frontier AI Safety Commitments voluntarily made by AI firms, and the Seoul Declaration for safe, innovative and inclusive AI signed by governments at the AI summit in Seoul – there is a further evidence gap around “the degree to which different voluntary commitments are being met, what obstacles companies face in adhering fully to commitments, and how they are integrating … safety frameworks into broader AI risk management practices”.

The report adds that key challenges include determining how to prioritise the diverse risks posed by general-purpose AI, clarifying which actors are best positioned to mitigate them, and understanding the incentives and constraints that shape each of their actions.

“Evidence indicates that policymakers currently have limited access to information about how AI developers and deployers are testing, evaluating and monitoring emerging risks, and about the effectiveness of different mitigation practices,” it says.

While the 2025 safety report goes into more detail on risks around AI-related discrimination and its propensity to reproduce negative social biases, the 2026 report only touches on this briefly, noting that “some researchers have argued that most technical approaches to pluralistic alignment fail to address, and potentially distract from, deeper challenges, such as systematic biases, social power dynamics, and the concentration of wealth and influence”.

Although the 2025 report notes “a holistic and participatory approach that includes a variety of perspectives and stakeholders is essential to mitigate bias”, the 2026 report only says that open source approaches are critical to “enabling global majority participation in AI development”.

“Without such access, communities in low-resource regions risk exclusion from AI’s benefits,” it says, adding that allowing downstream developers to fine-tune models for diverse applications that, for example, adapt them for under-resourced minority languages or optimise performance for specific purposes “can allow more people and communities to use and benefit from AI than would otherwise be possible”.
