AI risks entrenching racism and sexism in Australia, the human rights commissioner has warned, amid internal Labor debate about how to respond to the emerging technology.
Lorraine Finlay says productivity gains from AI should not be pursued at the cost of entrenched discrimination, a risk if the technology is not properly regulated.
Finlay’s comments follow Labor senator Michelle Ananda-Rajah breaking ranks to call for all Australian data to be “freed” to tech companies, so that AI does not perpetuate overseas biases and instead reflects Australian life and culture.
Ananda-Rajah is opposed to a dedicated AI act but believes content creators should be paid for their work.
Productivity gains from AI will be discussed next week at the federal government’s economic summit, as unions and industry bodies raise concerns about copyright and privacy protections.
Media and arts groups have warned of “rampant theft” of intellectual property if big tech companies can take their content to train AI models.
Finlay said a lack of transparency about the datasets AI tools are trained on makes it difficult to identify which biases they may contain.
“Algorithmic bias means that bias and unfairness is built into the tools that we’re using, and so the decisions that result will reflect that bias,” she said.
“When you combine algorithmic bias with automation bias – which is where humans are more likely to rely on the decisions of machines and almost replace their own thinking – there’s a real risk that what we’re actually creating is discrimination and bias in a form where it’s so entrenched, we’re perhaps not even aware that it’s occurring.”
The Human Rights Commission has consistently advocated for a dedicated AI act, for the strengthening of existing legislation including the Privacy Act, and for rigorous bias testing of AI tools. Finlay said the government should urgently establish new legislative guardrails.
“Bias testing and auditing, ensuring proper human oversight review, you [do] need those variety of different measures in place,” she said.
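As a rough illustration of what such bias testing involves (a sketch, not a method attributed to the commission), an audit can compare a model’s selection rates across demographic groups. The data below and the widely cited “four-fifths” threshold are assumptions for illustration only:

```python
# Minimal sketch of a disparate-impact audit: compare the rate at which a
# model selects candidates from each demographic group. The outcomes and
# the 0.8 "four-fifths" threshold are illustrative assumptions.

def selection_rates(decisions):
    """decisions: mapping of group name -> list of 0/1 model outcomes."""
    return {group: sum(outcomes) / len(outcomes)
            for group, outcomes in decisions.items()}

def disparate_impact(decisions, reference_group):
    """Ratio of each group's selection rate to the reference group's."""
    rates = selection_rates(decisions)
    return {group: rate / rates[reference_group]
            for group, rate in rates.items()}

# Hypothetical shortlisting outcomes from an AI recruiter (1 = shortlisted).
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75% shortlisted
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],   # 37.5% shortlisted
}

for group, ratio in disparate_impact(outcomes, "group_a").items():
    flag = "POTENTIAL BIAS" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

An audit of this kind only surfaces a disparity; deciding whether it reflects unlawful discrimination, and what to do about it, is the human oversight Finlay refers to.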
There is growing evidence of bias in AI tools in Australia and overseas, in areas such as medicine and job recruitment.
An Australian study published in May found job candidates being interviewed by AI recruiters risked being discriminated against if they spoke with an accent or were living with a disability.
Ananda-Rajah, who was a medical doctor and AI researcher before entering parliament, said it was important for AI tools to be trained on Australian data, or they would risk perpetuating overseas biases.
While the government has stressed the need for protecting intellectual property, she warned that not opening up domestic data would mean Australia would be “forever renting [AI] models from tech behemoths overseas” with no oversight or insight into their models or platforms.
“AI must be trained on as much data as possible from as wide a population as possible or it will amplify biases, potentially harming the very people it is meant to serve,” Ananda-Rajah said.
“We need to free our own data in order to train the models so that they better represent us.
“I’m keen to monetise content creators while freeing the data. I think we can present an alternative to the pillage and plunder of overseas.”
Ananda-Rajah raised AI skin cancer screening as an example where testing has shown the tools to carry algorithmic bias. She said the way to overcome any bias or discrimination against certain patients would be to train “these models on as much diverse data from Australia as possible”, with appropriate protections for sensitive data.
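The kind of gap she is describing can be made visible by evaluating a diagnostic model’s performance separately for each patient subgroup rather than in aggregate. The sketch below is hypothetical, with made-up numbers, and is not drawn from any real screening tool:

```python
# Illustrative stratified evaluation: compute a diagnostic model's
# sensitivity (true-positive rate) per patient subgroup. All data here
# is hypothetical.
from collections import defaultdict

def per_group_sensitivity(records):
    """records: list of (group, true_label, predicted_label), labels 0/1."""
    positives = defaultdict(int)  # actual cancers per group
    caught = defaultdict(int)     # cancers the model detected per group
    for group, truth, pred in records:
        if truth == 1:
            positives[group] += 1
            caught[group] += pred
    return {g: caught[g] / positives[g] for g in positives}

# Hypothetical screening results by skin-tone group.
results = [
    ("lighter_skin", 1, 1), ("lighter_skin", 1, 1), ("lighter_skin", 1, 0),
    ("darker_skin", 1, 1), ("darker_skin", 1, 0), ("darker_skin", 1, 0),
]

for group, tpr in per_group_sensitivity(results).items():
    print(f"{group}: sensitivity {tpr:.0%}")
```

A large sensitivity gap between groups is the signature of the bias Ananda-Rajah describes; training on broader, more representative data is the remedy she proposes.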
Finlay said any release of Australian data should be done in a fair way, but she believes the focus should be on regulation.
“Having diverse and representative data is absolutely a good thing … but it’s only one part of the solution,” she said.
“We need to make sure that this technology is put in place in a way that’s fair to everybody and actually recognises the work and the contributions that humans are making.”
Judith Bishop, an AI expert at La Trobe University and a former data researcher at an AI company, said freeing up more Australian data could help train AI tools more appropriately, and warned that tools developed overseas on international data may not reflect the needs of Australians. But she said local data was only a small part of the solution.
“We have to be careful that a system that was initially developed in other contexts is actually applicable for the [Australian] population, that we’re not relying on US models which have been trained on US data,” Bishop said.
The eSafety commissioner, Julie Inman Grant, is also concerned by the lack of transparency around the data AI tools use.
In a statement, she said tech companies should be transparent about their training data, develop reporting tools, and use diverse, accurate and representative data in their products.
“The opacity of generative AI development and deployment is deeply problematic,” Inman Grant said. “This raises important questions about the extent to which LLMs [large language models] could amplify, even accelerate, harmful biases – including narrow or harmful gender norms and racial prejudices.
“With the development of these systems concentrated in the hands of a few companies, there’s a real risk that certain bodies of evidence, voices and perspectives could be overshadowed or sidelined in generative outputs.”