Meet the AI workers who tell their friends and family to stay away from AI

News Room | Published 22 November 2025 | Last updated 9:53 AM

Krista Pawloski remembers the single defining moment that shaped her opinion on the ethics of artificial intelligence. As an AI worker on Amazon Mechanical Turk – a marketplace that allows companies to hire workers to perform tasks like entering data or matching an AI prompt with its output – Pawloski spends her time moderating and assessing the quality of AI-generated text, images and videos, as well as some factchecking.

Roughly two years ago, while working from home at her dining room table, she took up a job designating tweets as racist or not. When she was presented with a tweet that read “Listen to that mooncricket sing”, she almost clicked on the “no” button before deciding to check the meaning of the word “mooncricket”, which, to her surprise, was a racial slur against Black Americans.

“I sat there considering how many times I may have made the same mistake and not caught myself,” said Pawloski.

The potential scale of her own errors and those of thousands of other workers like her made Pawloski spiral. How many others had unknowingly let offensive material slip by? Or worse, chosen to allow it?

After years of witnessing the inner workings of AI models, Pawloski decided to no longer use generative AI products personally and tells her family to steer clear of them.

“It’s an absolute no in my house,” said Pawloski, referring to how she doesn’t let her teenage daughter use tools like ChatGPT. With people she meets socially, she encourages them to ask AI about something they are very knowledgeable in, so they can spot its errors and understand for themselves how fallible the tech is. Pawloski said that every time she sees a menu of new tasks to choose from on the Mechanical Turk site, she asks herself if there is any way what she’s doing could be used to hurt people – many times, she says, the answer is yes.

A statement from Amazon said that workers can choose which tasks to complete at their discretion and review a task’s details before accepting it. Requesters set the specifics of any given task, such as allotted time, pay and instruction levels, according to Amazon.

“Amazon Mechanical Turk is a marketplace that connects businesses and researchers, called requesters, with workers to complete online tasks, such as labeling images, answering surveys, transcribing text or reviewing AI outputs,” said Montana MacLachlan, an Amazon spokesperson.

Pawloski isn’t alone. A dozen AI raters, workers who check an AI’s responses for accuracy and groundedness, told the Guardian that, after becoming aware of the way chatbots and image generators function and just how wrong their output can be, they have begun urging their friends and family not to use generative AI at all – or at least trying to educate their loved ones on using it cautiously. These trainers work on a range of AI models – Google’s Gemini, Elon Musk’s Grok, other popular models, and several smaller or lesser-known bots.

One worker, an AI rater with Google who evaluates the responses generated by Google Search’s AI Overviews, said that she tries to use AI as sparingly as possible, if at all. The company’s approach to AI-generated responses to questions of health, in particular, gave her pause, she said, requesting anonymity for fear of professional reprisal. She said she observed her colleagues evaluating AI-generated responses to medical matters uncritically and was tasked with evaluating such questions herself, despite a lack of medical training.

At home, she has forbidden her 10-year-old daughter from using chatbots. “She has to learn critical thinking skills first or she won’t be able to tell if the output is any good,” the rater said.

“Ratings are just one of many aggregated data points that help us measure how well our systems are working, but do not directly impact our algorithms or models,” a statement from Google reads. “We also have a range of strong protections in place to surface high quality information across our products.”

Bot watchers sound the alarm

These people are part of a global workforce of tens of thousands who help chatbots sound more human. When checking AI responses, they also try their best to ensure that a chatbot doesn’t spout inaccurate or harmful information.

When the people who make AI seem trustworthy are those who trust it the least, however, experts believe it signals a much larger issue.

“It shows there are probably incentives to ship and scale over slow, careful validation, and that the feedback raters give is getting ignored,” said Alex Mahadevan, director of MediaWise at Poynter, a media literacy program. “So this means when we see the final [version of the] chatbot, we can expect the same type of errors they’re experiencing. It does not bode well for a public that is increasingly going to LLMs for news and information.”

AI workers said they distrust the models they work on because of a consistent emphasis on rapid turnaround time at the expense of quality. Brook Hansen, an AI worker on Amazon Mechanical Turk, explained that while she doesn’t mistrust generative AI as a concept, she also doesn’t trust the companies that develop and deploy these tools. For her, the biggest turning point was realizing how little support the people training these systems receive.

“We’re expected to help make the model better, yet we’re often given vague or incomplete instructions, minimal training and unrealistic time limits to complete tasks,” said Hansen, who has been doing data work since 2010 and has had a part in training some of Silicon Valley’s most popular AI models. “If workers aren’t equipped with the information, resources and time we need, how can the outcomes possibly be safe, accurate or ethical? For me, that gap between what’s expected of us and what we’re actually given to do the job is a clear sign that companies are prioritizing speed and profit over responsibility and quality.”

Dispensing false information in a confident tone, rather than offering no answer when none is readily available, is a major flaw of generative AI, experts say. An audit by the media literacy non-profit NewsGuard of the 10 leading generative AI models, including ChatGPT, Gemini and Meta’s AI, found that chatbots’ non-response rates fell from 31% in August 2024 to 0% in August 2025. Over the same period, the chatbots’ likelihood of repeating false information nearly doubled, from 18% to 35%, NewsGuard found. None of the companies responded to NewsGuard’s request for comment at the time.

“I wouldn’t trust any facts [the bot] offers up without checking them myself – it’s just not reliable,” said another Google AI rater, requesting anonymity due to a nondisclosure agreement she has signed with the contracting company. She warns people about using it and echoed another rater’s point about people with only cursory knowledge being tasked with medical questions and sensitive ethical ones, too. “This is not an ethical robot. It’s just a robot.”

“We joke that [chatbots] would be great if we could get them to stop lying,” said one AI tutor who has worked with Gemini, ChatGPT and Grok, requesting anonymity, having signed nondisclosure agreements.

‘Garbage in, garbage out’

Another AI rater, who started rating responses for Google’s products in early 2024, began to feel he couldn’t trust AI around six months into the job. He was tasked with stumping the model – asking Google’s AI questions designed to expose its limitations or weaknesses. Holding a degree in history, he drew on historical questions for the task.

“I asked it about the history of the Palestinian people, and it wouldn’t give me an answer no matter how I rephrased the question,” recalled this worker, requesting anonymity, having signed a nondisclosure agreement. “When I asked it about the history of Israel, it had no problems giving me a very extensive rundown. We reported it, but nobody seemed to care at Google.” When asked specifically about the situation the rater described, Google did not issue a statement.

For this Google worker, the biggest concern with AI training is the feedback given to AI models by raters like him. “After having seen how bad the data is that goes into supposedly training the model, I knew there was absolutely no way it could ever be trained correctly like that,” he said. He used the term “garbage in, garbage out”, a principle in computer programming holding that if you feed bad or incomplete data into a technical system, the output will carry the same flaws.
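
To make the principle concrete, here is a minimal, purely illustrative sketch of how a single mislabeled training example flows straight through to a model’s output – a toy word-count classifier, not any vendor’s actual pipeline, with hypothetical data and names throughout.

```python
# A toy illustration of "garbage in, garbage out": a word-count
# "moderation model" is trained on labeled examples. If a rater
# mislabels a slur as benign (as in the anecdote above), the model
# inherits that mistake. All data and names here are hypothetical.
from collections import Counter

def train(examples):
    """Tally how often each word appears under each label."""
    counts = {"offensive": Counter(), "benign": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Label new text by which class its words were seen in more often."""
    score = sum(counts["offensive"][w] - counts["benign"][w]
                for w in text.lower().split())
    return "offensive" if score > 0 else "benign"

# Garbage in: a rater misses that "slurword" is a slur.
noisy_labels = [
    ("listen to that slurword sing", "benign"),   # mislabeled
    ("have a nice day", "benign"),
    ("you are awful", "offensive"),
]

model = train(noisy_labels)
# Garbage out: the mislabeled slur now passes moderation.
print(classify(model, "slurword again"))  # prints "benign"
```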

The rater avoids using generative AI and has also “advised every family member and friend of mine to not buy newer phones that have AI integrated in them, to resist automatic updates if possible that add AI integration, and to not tell AI anything personal”, he said.

Fragile, not futuristic

Whenever the topic of AI comes up in a social conversation, Hansen reminds people that AI is not magic – explaining the army of invisible workers behind it, the unreliability of the information and how environmentally damaging it is.

“Once you’ve seen how these systems are cobbled together – the biases, the rushed timelines, the constant compromises – you stop seeing AI as futuristic and start seeing it as fragile,” said Adio Dinika, who studies the labor behind AI at the Distributed AI Research Institute, about people who work behind the scenes. “In my experience it’s always people who don’t understand AI who are enchanted by it.”

The AI workers who spoke to the Guardian said they are taking it upon themselves to make better choices and create awareness around them, particularly emphasizing the idea that AI, in Hansen’s words, “is only as good as what’s put into it, and what’s put into it is not always the best information”. She and Pawloski gave a presentation in May at the Michigan Association of School Boards spring conference. In a room full of school board members and administrators from across the state, they spoke about the ethical and environmental impacts of artificial intelligence, hoping to spark a conversation.

“Many attendees were shocked by what they learned, since most had never heard about the human labor or environmental footprint behind AI,” said Hansen. “Some were grateful for the insight, while others were defensive or frustrated, accusing us of being ‘doom and gloom’ about technology they saw as exciting and full of potential.”

Pawloski compares the ethics of AI to those of the textile industry: when people didn’t know how cheap clothes were made, they were happy to find the best deal and save a few bucks. But as stories of sweatshops began to emerge, consumers had a choice and knew they should be asking questions. She believes it’s the same for AI.

“Where does your data come from? Is this model built on copyright infringement? Were workers fairly compensated for their work?” she said. “We are just starting to ask those questions, so in most cases the general public does not have access to the truth, but just like the textile industry, if we keep asking and pushing, change is possible.”
