World of Software – News
AI-generated ‘poverty porn’ fake images being used by aid agencies

News Room | Published 20 October 2025, last updated 4:26 AM

AI-generated images of extreme poverty, children and sexual violence survivors are flooding stock photo sites and increasingly being used by leading health NGOs, according to global health professionals who have voiced concern over a new era of “poverty porn”.

“All over the place, people are using it,” said Noah Arnold, who works at Fairpicture, a Swiss-based organisation focused on promoting ethical imagery in global development. “Some are actively using AI imagery, and others, we know that they’re experimenting at least.”

Arsenii Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp studying the production of global health images, said: “The images replicate the visual grammar of poverty – children with empty plates, cracked earth, stereotypical visuals.”

Alenichev has collected more than 100 AI-generated images of extreme poverty used by individuals or NGOs as part of social media campaigns against hunger or sexual violence. Images he shared with the Guardian show exaggerated, stereotype-perpetuating scenes: children huddled together in muddy water; an African girl in a wedding dress with a tear staining her cheek. In a comment piece published on Thursday in the Lancet Global Health, he argues these images amount to “poverty porn 2.0”.

While it is hard to quantify the prevalence of the AI-generated images, Alenichev and others say their use is on the rise, driven by concerns over consent and cost. Arnold said that US funding cuts to NGO budgets had made matters worse.

“It is quite clear that various organisations are starting to consider synthetic images instead of real photography, because it’s cheap and you don’t need to bother with consent and everything,” said Alenichev.

AI-generated images of extreme poverty now appear in their dozens on popular stock photo sites, including Adobe Stock Photos and Freepik, in response to queries such as “poverty”. Many bear captions such as “Photorealistic kid in refugee camp”; “Asian children swim in a river full of waste”; and “Caucasian white volunteer provides medical consultation to young black children in African village”. Adobe sells licences to the last two photos in that list for about £60.

“They are so racialised. They should never even let those be published because it’s like the worst stereotypes about Africa, or India, or you name it,” said Alenichev.

Joaquín Abela, CEO of Freepik, said the responsibility for using such extreme images lay with media consumers, and not with platforms such as his. The AI stock photos, he said, are generated by the platform’s global community of users, who can receive a licensing fee when Freepik’s customers choose to buy their images.

Freepik had attempted to curb biases it had found in other parts of its photo library, he said, by “injecting diversity” and trying to ensure gender balance in the photos of lawyers and CEOs hosted on the site.

But, he said, there was only so much his platform could do. “It’s like trying to dry the ocean. We make an effort, but in reality, if customers worldwide want images a certain way, there is absolutely nothing that anyone can do.”

A screen grab showing AI-generated images of ‘poverty’ on a stock photo site. Pictures such as these have raised concerns over biased imagery and stereotypes. Illustration: Freepik

In the past, leading charities have used AI-generated images as part of their communications strategies on global health. In 2023, the Dutch arm of UK charity Plan International released a video campaign against child marriage containing AI-generated images of a girl with a black eye, an older man and a pregnant teenager.

Last year, the UN posted a video on YouTube with AI-generated “re-enactments” of sexual violence in conflict, which included AI-generated testimony from a Burundian woman describing being raped by three men and left to die in 1993 during the country’s civil war. The video was removed after the Guardian contacted the UN for comment.

A UN Peacekeeping spokesperson said: “The video in question, which was produced over a year ago using a fast-evolving tool, has been taken down, as we believed it shows improper use of AI, and may pose risks regarding information integrity, blending real footage and near-real artificially generated content.

“The United Nations remains steadfast in its commitment to support victims of conflict-related sexual violence, including through innovation and creative advocacy.”

Arnold said the rising use of these AI images comes after years of debate in the sector around ethical imagery and dignified storytelling about poverty and violence. “Supposedly, it’s easier to take ready-made AI visuals that come without consent, because it’s not real people.”

Kate Kardol, an NGO communications consultant, said the images frightened her, and recalled earlier debates about the use of “poverty porn” in the sector.

“It saddens me that the fight for more ethical representation of people experiencing poverty now extends to the unreal,” she said.

Generative AI tools have long been found to replicate – and at times exaggerate – broader societal biases. The proliferation of biased images in global health communications may make the problem worse, said Alenichev, because the images could filter out into the wider internet and be used to train the next generation of AI models, a process which has been shown to amplify prejudice.

A spokesperson for Plan International said the NGO had this year “adopted guidance advising against using AI to depict individual children”, and said the 2023 campaign had used AI-generated imagery to safeguard “the privacy and dignity of real girls”.

Adobe declined to comment.
