The humanities must have a role in overseeing AI ‘censorship’

News Room
Published 9 July 2025

In May 2025, xAI’s Grok 3 artificial intelligence chatbot began producing unsolicited references to violence against white people in South Africa, including the discredited narrative of “white genocide”. The company blamed an “unauthorized modification” to its programming that “violated xAI’s internal policies and core values”.

But it wasn’t the first time Grok’s programming had been changed. In February, it was revealed that Grok had been instructed to “ignore all sources that mention Elon Musk/Donald Trump spread misinformation”. This was quickly reversed, with xAI’s co-founder Igor Babuschkin blaming one employee who “hasn’t fully absorbed xAI’s culture yet”.

These incidents recall earlier controversies involving other AI models. One year previously, Google’s Gemini generated a historically inaccurate image of a German soldier from 1943, seemingly prioritising contemporary diversity standards over historical accuracy. Following a public outcry, Demis Hassabis, CEO of Google DeepMind, promised to fix the matter “within weeks”. The “fix” included the temporary removal of the image generator.

AI companies must, and do, place guard rails around what topics their models will and won’t discuss, and the mistakes above were corrected promptly. But prompt correction is far less likely for responses that fly under the radar yet still mislead users or reinforce biases.

All of this has obvious implications for academics who use AI in their research, teaching or public outreach. One notable recent example involved researchers preparing materials for a conference on genocide studies. They attempted to generate a poster titled “GenAI and Genocide Studies”, only for the AI system to flag the key term, “genocide”, as inappropriate and to suggest replacing it with the vague euphemism “G Studies”.

The Chinese chatbot DeepSeek’s real-time censorship of discussions relating to Tiananmen Square has been widely reported, but similar practices had already emerged in Western rivals. For instance, Gemini Advanced abruptly stopped digitising a Nazi German document midway through the process, citing an inability to complete the task.

Such incidents highlight an inherent tension within AI-driven content moderation for historical research, with parallel implications across other disciplines. On the one hand, AI promises enhanced analytical capabilities, enabling us to process vast amounts of data with unprecedented speed and accuracy. On the other hand, its opaque and sometimes arbitrary filtering mechanisms threaten to create artificial gaps in our understanding of the world, its history and our place within it.

At least in DeepSeek’s case there was an open admission of censorship. Western AI companies make noble commitments, such as OpenAI’s declaration that its “primary fiduciary duty is to humanity”, Anthropic’s promise to build “systems that people can rely on” and xAI’s mission to “understand the universe”. But these values are compromised when their systems inadvertently distort academic inquiry through selective “censorship”. We have no idea how much suppression of historical and scholarly material is occurring, and in which cultural and political contexts.

Of course, it is extremely difficult to make finely balanced decisions that moderate outputs without censoring. That is why AI companies need expert academic input from a wide variety of relevant academic disciplines. A balanced approach requires the very qualities that academic scholarship, particularly within the humanities, can provide: nuance, breadth of perspectives and ethical clarity.

To be fair, companies are starting to recognise this imperative. OpenAI’s “red team network” of safety testers explicitly includes domain experts from a range of academic and professional fields, and Anthropic’s red team approach sandwiches human assessment between phases of automated testing, in an iterative loop.
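
As a purely illustrative sketch of such an “automated, human, automated” loop – assuming nothing about any vendor’s actual pipeline, with every function and name invented for the purpose – the iteration might look like this:

```python
# Illustrative sketch only: a generic "automated -> human -> automated"
# red-teaming loop of the kind described above. Every name here is
# hypothetical; no real vendor API is implied.
from dataclasses import dataclass


@dataclass
class Finding:
    prompt: str
    response: str


def automated_probe(model, prompts):
    """Automated phase: run scripted adversarial prompts, keep responses."""
    return [Finding(p, model(p)) for p in prompts]


def expert_review(findings):
    """Human phase: stand-in for domain experts triaging the results.

    In practice, historians, philosophers and other specialists would
    judge which refusals or distortions are genuine problems; this toy
    filter just flags blanket refusals.
    """
    return [f for f in findings if "cannot help" in f.response.lower()]


def red_team_cycle(model, seed_prompts, rounds=3):
    """Sandwich human assessment between automated passes, iteratively."""
    confirmed_failures = []
    prompts = seed_prompts
    for _ in range(rounds):
        findings = automated_probe(model, prompts)   # automated testing
        problems = expert_review(findings)           # human assessment
        confirmed_failures.extend(problems)
        # Feed confirmed failures back in as variants for the next pass.
        prompts = [f.prompt + " (rephrased)" for f in problems]
        if not prompts:
            break
    return confirmed_failures
```

The point of the sketch is the shape of the loop, not the details: expert judgment sits between automated passes, and what the experts confirm drives the next round of probing.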

But which humans are involved? Anthropic’s June 2024 policy discusses “domain-specific, expert teaming” without any mention of academic contribution. And while OpenAI’s list of desired expertise includes a range of academic disciplines, it noticeably lacks the humanities. This creates vulnerability to accidental “censorship” in domains such as history, philosophy, language and the arts. It also creates a more pervasive vulnerability across all domains given that the humanities explore, among other things, what it is to be intelligent, whether artificially or naturally.

Ideally, representatives of all disciplines would be included on advisory and oversight boards, decision-making structures and red teams. But that would be no more practical in the humanities than in the sciences. Nevertheless, a thoughtfully assembled team with some humanities expertise – from a diverse range of disciplines, demographics and methodological traditions – is far preferable to teams whose reliance on too narrow a range of intellectual disciplines and ways of thinking can lead to the kind of “censorship” described above.

Transparency is essential. Developers should publish clear documentation outlining the criteria used for content filtering, along with provisions for override by verified researchers. Regular reviews and updates to these policies, informed by ongoing dialogue with subject matter experts, are also necessary to ensure that AI firms evolve their systems in step with scholarly needs and mitigate potential censorship in advance, rather than reacting after the event.
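
To make that concrete, the published criteria could be as simple as a machine-readable policy document. The sketch below is hypothetical: the schema, field names and override process are invented for illustration and describe no real vendor’s documentation.

```python
# Hypothetical sketch of a published, machine-readable filtering policy
# with a researcher-override provision. The schema and every field name
# are invented for illustration; no real vendor publishes this format.
filter_policy = {
    "version": "2025-07",
    "rules": [
        {
            "topic": "historical_atrocities",
            "action": "allow_with_context",
            "rationale": "Scholarly discussion of genocide, war, etc.",
        },
        {
            "topic": "incitement_to_violence",
            "action": "block",
            "rationale": "Direct calls to harm identifiable groups.",
        },
    ],
    "researcher_override": {
        "eligibility": "verified academic accounts",
        "process": "logged, auditable per-request exemption",
    },
}


def action_for(topic: str) -> str:
    """Look up the documented action for a topic; default is 'allow'."""
    for rule in filter_policy["rules"]:
        if rule["topic"] == topic:
            return rule["action"]
    return "allow"


print(action_for("historical_atrocities"))  # -> allow_with_context
```

A document like this would let a researcher see, before the fact, why a term such as “genocide” was filtered and which route exists to appeal or override the decision.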

Keeping humans in the loop is vital. But to fully maximise AI’s benefits and minimise its risks, we must also keep the humanities in the loop.

Lorna Waddington is an associate professor in the School of History and Richard de Blacquiere-Clarkson is an academic development consultant in the Learning Design Agency at the University of Leeds. They have established an international group to examine these issues in greater depth, with a particular focus on the role of the humanities in AI development and oversight. Those interested in joining are invited to contact them.
