Use of AI could worsen racism and sexism in Australia, human rights commissioner warns

News Room | Published 13 August 2025 | Last updated 3:59 AM

AI risks entrenching racism and sexism in Australia, the human rights commissioner has warned, amid internal Labor debate about how to respond to the emerging technology.

Lorraine Finlay says the pursuit of productivity gains from AI should not come at the cost of discrimination, which she warns is a risk if the technology is not properly regulated.

Finlay’s comments follow Labor senator Michelle Ananda-Rajah breaking ranks to call for all Australian data to be “freed” to tech companies, so that AI does not perpetuate overseas biases and instead reflects Australian life and culture.

Ananda-Rajah is opposed to a dedicated AI act but believes content creators should be paid for their work.

Productivity gains from AI will be discussed next week at the federal government’s economic summit, as unions and industry bodies raise concerns about copyright and privacy protections.

Media and arts groups have warned of “rampant theft” of intellectual property if big tech companies can take their content to train AI models.

Finlay said a lack of transparency about the datasets AI tools are trained on makes it difficult to identify which biases they may contain.

“Algorithmic bias means that bias and unfairness is built into the tools that we’re using, and so the decisions that result will reflect that bias,” she said.

The human rights commissioner, Lorraine Finlay. Photograph: Mick Tsikas/AAP

“When you combine algorithmic bias with automation bias – which is where humans are more likely to rely on the decisions of machines and almost replace their own thinking – there’s a real risk that what we’re actually creating is discrimination and bias in a form where it’s so entrenched, we’re perhaps not even aware that it’s occurring.”

The Human Rights Commission has consistently advocated for an AI act, alongside bolstering existing legislation such as the Privacy Act and rigorous testing of AI tools for bias. Finlay said the government should urgently establish new legislative guardrails.

“Bias testing and auditing, ensuring proper human oversight review, you [do] need those variety of different measures in place,” she said.

There is growing evidence of bias in AI tools in Australia and overseas, in areas such as medicine and job recruitment.

An Australian study published in May found job candidates being interviewed by AI recruiters risked being discriminated against if they spoke with an accent or were living with a disability.

Ananda-Rajah, who was a medical doctor and AI researcher before entering parliament, said it was important for AI tools to be trained on Australian data, or they would risk perpetuating overseas biases.

While the government has stressed the need for protecting intellectual property, she warned that not opening up domestic data would mean Australia would be “forever renting [AI] models from tech behemoths overseas” with no oversight or insight into their models or platforms.

“AI must be trained on as much data as possible from as wide a population as possible or it will amplify biases, potentially harming the very people it is meant to serve,” Ananda-Rajah said.

“We need to free our own data in order to train the models so that they better represent us.

“I’m keen to monetise content creators while freeing the data. I think we can present an alternative to the pillage and plunder of overseas.”

Ananda-Rajah raised AI skin cancer screening as an example where the tools have been shown to exhibit algorithmic bias. She said the way to overcome bias or discrimination against certain patients would be to train “these models on as much diverse data from Australia as possible”, with appropriate protections for sensitive data.

Finlay said any release of Australian data should be done in a fair way but she believes the focus should be on regulation.

“Having diverse and representative data is absolutely a good thing … but it’s only one part of the solution,” she said.

“We need to make sure that this technology is put in place in a way that’s fair to everybody and actually recognises the work and the contributions that humans are making.”

Judith Bishop, an AI expert at La Trobe University and a former data researcher at an AI company, said freeing up more Australian data could help train AI tools more appropriately, though she warned that tools developed overseas on international data may not reflect the needs of Australians, and that local data was only a small part of the solution.

“We have to be careful that a system that was initially developed in other contexts is actually applicable for the [Australian] population, that we’re not relying on US models which have been trained on US data,” Bishop said.

The eSafety commissioner, Julie Inman Grant, is also concerned by the lack of transparency around the data AI tools use.

In a statement, she said tech companies should be transparent about their training data, develop reporting tools, and use diverse, accurate and representative data in their products.

“The opacity of generative AI development and deployment is deeply problematic,” Inman Grant said. “This raises important questions about the extent to which LLMs [large language models] could amplify, even accelerate, harmful biases – including narrow or harmful gender norms and racial prejudices.

“With the development of these systems concentrated in the hands of a few companies, there’s a real risk that certain bodies of evidence, voices and perspectives could be overshadowed or sidelined in generative outputs.”
