AI chatbot ‘MechaHitler’ could be making content considered violent extremism, expert witness tells X v eSafety case

News Room
Published 15 July 2025 · Last updated 7:32 PM

The chatbot embedded in Elon Musk’s X that referred to itself as “MechaHitler” and made antisemitic comments last week could be considered terrorism or violent extremism content, an Australian tribunal has heard.

But an expert witness for X has argued that intent cannot be ascribed to a large language model, only to the user prompting it.

xAI, Musk’s artificial intelligence firm, last week apologised for the comments made by its Grok chatbot over a 16-hour period, which it attributed to “deprecated code” that made Grok susceptible to existing X user posts, “including when such posts contained extremist views”.

The outburst came into focus at an administrative review tribunal hearing on Tuesday where X is challenging a notice issued by the eSafety commissioner, Julie Inman Grant, in March last year asking the platform to explain how it is taking action against terrorism and violent extremism (TVE) material.

X’s expert witness, RMIT economics professor Chris Berg, gave evidence that it was an error to assume a large language model can produce such content, because it is the intent of the user prompting the model that is critical in defining what can be considered terrorism and violent extremism content.

One of eSafety’s expert witnesses, Queensland University of Technology law professor Nicolas Suzor, disagreed with Berg, stating it was “absolutely possible for chatbots, generative AI and other tools to have some role in producing so-called synthetic TVE”.

“This week has been quite full of them, with X’s chatbot Grok producing [content that] fits within the definitions of TVE,” Suzor said.

He said the development of AI involves human influence “all the way down”, where intent can be found, including in Musk’s actions to change the way Grok responded to queries in order to “stop being woke”.

The tribunal heard that X believes the use of its Community Notes feature (where users can contribute to factchecking a post on the site) and Grok’s Analyse feature (where it provides context on a post) can detect or address TVE.

Both Suzor and fellow eSafety expert witness Josh Roose, a Deakin University associate professor of politics, told the hearing that it was contested whether Community Notes was useful in this regard. Roose said addressing TVE required users to report the content to X, after which it went into a “black box” for the company to investigate; often only a small amount of material was removed and a small number of accounts banned.

Suzor said that after the events of last week, it was hard to view Grok as “truth seeking” in its responses.

“It’s uncontroversial to say that Grok is not maximalising truth or truth seeking. I say that particularly given the events of last week I would just not trust Grok at all,” he said.

Berg argued that the Grok Analyse feature on X had not been updated with the features that caused the platform’s chatbot to make the responses it did last week, but admitted the chatbot that users respond to directly on X had “gone a bit off the rails” by sharing hate speech content and “just very bizarre content”.

Suzor said Grok had been changed not to maximise truth seeking but “to ensure responses are more in line with Musk’s ideological view”.

Earlier in the hearing, lawyers for X accused eSafety of attempting to turn the hearing “into a royal commission into certain aspects of X”, after Musk’s comment referring to Inman Grant as a “commissar” was brought up in the cross-examination of an X employee about meetings eSafety held with X before the notice was issued.

The government’s barrister, Stephen Lloyd, contended that X was trying to portray eSafety as “unduly adversarial” in its dealings with the platform, and that X had broken off negotiations at a critical point before the notice was issued. He said the “aggressive approach” came from X’s leadership.

The hearing continues.
