Universities must lend their weight to combating AI disinformation

News Room | Published 14 June 2025

By the end of this year, about 4 billion citizens across more than 40 countries will have voted in elections.

Accordingly, the early months of 2024 saw a global outpouring of speculation about the democratic collapse that might be caused by artificial intelligence (AI)-enabled online disinformation. Most of the commentary focused on the potential for highly realistic deepfake video to deceive the public. Some predicted the “first deepfake elections”.

This was part of the “hype cycle” that history tells us all new technologies go through. Inflated early expectations of social and political impact – rose-tinted or, as here, doom-laden – are displaced over time by the realities of evidence and adaptation.

The important thing is to quickly get beyond the hype – and the fatalism and sense of powerlessness it can promote – and focus on the technology’s real and lasting effects. These are often substantial but subtle, complex and more gradually felt than forecast by early optimists and pessimists.

The challenge for researchers across all disciplines, then, is to learn rapidly from events and help citizens and regulators pinpoint when, where and how AI makes a difference – positive or negative – to civic life.

In the event, there was no apparent deepfake crisis in the UK election, but this produced a narrative just as unhelpful as the doom-mongering. “Nothing to see here” quickly became the new vogue – just as revelations were emerging of some serious cases of AI-driven disinformation.

During the campaign’s final weekend, investigative journalists at Australia’s ABC News uncovered a coordinated foreign disinformation campaign targeting UK citizens on Facebook with divisive, often racist material (some of it illegal, unlabelled paid advertisements). Fake, AI-generated images were common – showing, for example, groups of asylum seekers massing at the UK coast.

Facebook’s parent company, Meta, took it all down as Rishi Sunak issued a formal statement of concern. A government investigation was reportedly set up, but, by then, polling day had arrived.

Meanwhile, Germany’s main public service news organisation, ARD-aktuell, reported that similarly racist, anti-immigrant accounts on X were targeting the UK elections. The environmental campaign group Global Witness confirmed that automated X accounts were spreading divisive disinformation on climate change and migration, in posts viewed 150 million times. And two days after the UK vote, the Bureau of Investigative Journalism revealed that a Kremlin-backed network of fake news sites had targeted the UK, French and US campaigns.


Significantly, though, the much-feared deepfake videos – which, for now at least, remain difficult to produce – were largely absent from these influencing operations, illustrating that AI-generated prose, still images and audio could actually prove more consequential.

The network included sites that intelligence consultancy Recorded Future revealed in May as having used AI to “plagiarise, translate and edit content from mainstream media outlets, using prompt engineering to tailor content to specific audiences and introduce political bias”.

Meanwhile, at the start of the year, a canvassing call that used a synthetic version of Joe Biden’s voice disrupted the New Hampshire primary. A convincing fabricated audio clip of Sadiq Khan affected spring’s London mayoral campaign. Equally convincing fake audio depicting health secretary Wes Streeting emerged during the UK general election.

Much of that campaign’s AI-generated visual fakery, such as the material ABC uncovered, consisted of still images. But, as we have also seen over recent weeks in the US campaign, most of these are not even photo-realistic. Evidently, they can still elicit strong emotions, but the fact that they are instantly recognisable by their digital-paint aesthetic owes much to leading generative AI platforms’ efforts – initiated under pressure from fact checkers, citizens and emerging regulators – to restrict how they respond to user prompts.

These moves gathered momentum after major global tech companies signed an AI Elections Accord in February. And though still highly imperfect and unevenly applied (for example on X’s Grok platform), they show how public pressure for regulatory guard rails can shape design choices that safeguard democracy.

In other words, the social contexts of new technologies change as organisations and people adapt to them. Agile, well-informed regulation is achievable and starting to emerge, and vigilance among public bodies, media organisations and policy wonks about electoral threats is increasing.

The UK Cabinet Office issued guidance on generative AI to electoral candidates and local officials. The government established a Joint Election Security Preparations Unit in early 2024. And during the campaign itself, a simple but effective Channel 4 Dispatches documentary highlighted deepfakes, further raising awareness. We’re not as susceptible as we once were.

Moreover, AI is starting to be used to promote accountability and fight fakery. While AI-driven online microtargeting has not yet taken off in election campaigns, the Labour Party experimented with Campaign Lab’s chatbot scripts to help canvassers communicate effectively with voters, using research by the anti-polarisation thinktank More in Common. And an Electoral Commission guidance bot helped candidates stay within the increasingly complex law regulating privacy and spending.

Similar tools are now used to help human fact checkers – at the UK’s Full Fact, for example. Meanwhile, evidence from the US suggests prose AI generators can help journalists provide sophisticated rapid responses to live televised debates.

Universities across the world must lend their weight to such efforts. They must sidestep the hype cycle to help regulators and communicators respond quickly and effectively to the threat of online disinformation in time for the next big year of elections.

Andrew Chadwick is professor of political communication and director of the Online Civic Culture Centre at Loughborough University. Nick Jennings is vice-chancellor of Loughborough and was the UK’s chief scientific adviser for national security from 2010 to 2015.
