Terrorist potential of generative AI ‘purely theoretical’ | Computer Weekly

News Room | Published 17 July 2025

Generative artificial intelligence (GenAI) systems could assist terrorists in disseminating propaganda and preparing for attacks, according to the UK’s terror advisor, but the level of the threat remains “purely theoretical” without further evidence of its use in practice.

In his latest annual report, Jonathan Hall, the government’s independent reviewer of terrorism legislation, warned that while GenAI systems have the potential to be exploited by terrorists, how effective the technology will be in this context, and what to do about it, is currently an “open question”.

Commenting on the potential for GenAI to be deployed in service of a terror group’s propaganda activities, for example, Hall explained how the technology could significantly speed up propaganda production and amplify its dissemination, enabling terrorists to create easily shareable images, narratives and messaging with far fewer resources or constraints.

However, he also noted that terrorists “flooding” the information environment with AI-generated content is not a given, and that uptake could vary between groups, since AI-generated material risks undermining their messaging.

“Depending on the importance of authenticity, the very possibility that text or image has been AI-generated may undermine the message. Reams of spam-like propaganda may prove a turn-off,” he said, adding that some terror groups like al-Qaeda, which “place a premium on authentic messages from senior leaders”, may avoid it and be reluctant to delegate propaganda functions to a bot.

“Conversely, it may be boom time for extreme right-wing forums, anti-Semites and conspiracy theorists who revel in creative nastiness.”

Turning to the technology’s potential use in attack planning, Hall said that while it could be of assistance, it remains an open question how helpful current generative AI systems will prove to terror groups in practice.

“In principle, GenAI is available to research key events and locations for targeting purposes, suggest methods of circumventing security and provide tradecraft on using or adapting weapons or terrorist cell-structure,” he said.

“Access to a suitable chatbot could dispense with the need to download online instructional material and make complex instructions more accessible … [while] GenAI could provide technical advice on avoiding surveillance or making knife-strikes more lethal, rather than relying on a specialist human contact.”

However, he added that “gains may be incremental rather than dramatic” and likely more relevant to lone attackers than organised groups.

Hall further added that while GenAI could be used to “extend attack methodology” – for example, via the identification and synthesis of harmful biological or chemical agents – this would also require the attacker to have prior expertise, skills and access to labs or equipment.

“GenAI’s effectiveness here has been doubted,” he said.

A similar point was made in the first International AI Safety Report, which was created by a global cohort of nearly 100 artificial intelligence experts in the wake of the inaugural AI Safety Summit hosted by the UK government at Bletchley Park in 2023.

It said that while new AI models can create step-by-step guides for creating pathogens and toxins that surpass PhD-level expertise, potentially lowering the barriers to developing biological or chemical weapons, it remains a “technically complex” process, meaning the “practical utility for novices remains uncertain”.

A further risk identified by Hall is the use of AI in the process of online radicalisation via chatbots, where he said the one-to-one interactions between the human and machine could create “a closed loop of terrorist radicalisation … most relevantly for lonely and unhappy individuals already disposed towards nihilism or looking for extreme answers and lacking real-world or online counterbalance”.

However, he noted that even if a model has no guardrails and has been trained on data “sympathetic to terrorist narratives”, the outputs will depend largely on what the user asks it.

Potential solutions?

In terms of legal solutions, Hall highlighted the difficulty of preventing GenAI from being used to assist terrorism, noting that “upstream liability” for those involved in the development of these systems is limited, as models can be used so broadly for many different, unpredictable purposes.

Instead, he suggested introducing “tools-based liability”, which would target AI tools specifically designed to aid terrorist activities.

Hall said while the government should consider legislating against the creation or possession of computer programs designed to stir up racial or religious hatred, he acknowledged that it would be difficult to prove that programs were specifically designed for this purpose.

He added that while developers could be prosecuted under UK terror laws if they did indeed create a terrorism-specific AI model or chatbot, “it seems unlikely that GenAI tools will be created specifically for generating novel forms of terrorist propaganda – it is far more likely that the capabilities of powerful general models will be harnessed”.

“I can foresee immense difficulties in proving that a chatbot [or GenAI model] was designed to produce narrow terrorism content. The better course would be an offence of making … a computer program specifically designed to stir up hatred on the grounds of race, religion or sexuality.”

In his reflections, Hall acknowledged that it remains to be seen exactly how AI will be used by terrorists and that the situation remains “purely theoretical”.

“Some will say, plausibly, that there is nothing new to see. GenAI is just another form of technology and, as such, it will be exploited by terrorists, like vans,” he said. “Without evidence that the current legislative framework is inadequate, there is no basis for adapting or extending it to deal with purely theoretical use cases. Indeed, the absence of GenAI-enabled attacks could suggest the whole issue is overblown.”

Hall added that even if some form of regulation is needed to avoid future harms, it could be argued that criminal liability is the least suitable option, especially given the political imperative to harness AI as a force for economic growth and other public benefits.

“Alternatives to criminal liability include transparency reporting, voluntary industry standards, third-party auditing, suspicious activity reporting, licensing, bespoke solutions like AI-watermarking, restrictions on advertising, forms of civil liability, and regulatory obligations,” he said.

While Hall expressed uncertainty around the extent to which terror groups would adopt generative AI, he concluded that the most likely effect of the technology was a general “social degradation” promoted by the spread of online disinformation.

“Although remote from bombs, shootings or blunt-force attacks, poisonous misrepresentations about government motives or against target demographics could lay the foundations for polarisation, hostility and eventual real-world terrorist violence,” he said. “But there is no role for terrorism legislation here because any link between GenAI-related content and eventual terrorism would be too indirect.”

While not covered in the report, Hall did acknowledge there could be further “indirect impacts” of GenAI on terrorism, as it could lead to widespread unemployment and create an unstable social environment “more conducive to terrorism”.
