By using this site, you agree to the Privacy Policy and Terms of Use.
Accept
World of SoftwareWorld of SoftwareWorld of Software
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Search
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
Reading: Search This Phrase and You’ll Find Sensitive Corporate Docs Online
Share
Sign In
Notification Show More
Font ResizerAa
World of SoftwareWorld of Software
Font ResizerAa
  • Software
  • Mobile
  • Computing
  • Gadget
  • Gaming
  • Videos
Search
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Have an existing account? Sign In
Follow US
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
World of Software > News > Search This Phrase and You’ll Find Sensitive Corporate Docs Online
News

Search This Phrase and You’ll Find Sensitive Corporate Docs Online

News Room
Last updated: 2025/10/29 at 1:29 AM
News Room Published 29 October 2025
Share
SHARE

According to Cybernews, more than 40% of survey respondents said they share sensitive information, such as client data, financial information, and internal company documents, with AI tools, all without their employer’s knowledge. Recently, the team at PromptArmor, a cybersecurity consulting firm, reviewed 22 popular AI apps, including Claude, Perplexity, and Vercel V0, and found highly sensitive corporate data uploaded to AI chatbots on the public web, including AWS tokens, court filings, an Oracle Salary report that was marked as confidential, and an internal investment memo for a VC firm.

In a report shared with PCMag, PromptArmor researchers noted that anyone can access this type of data by entering the following into a search engine: “site:claude.ai + internal use only.” I tried the search prompt and saw several of the results mentioned in the report, along with some other interesting artifacts, such as a sales training module for a collaboration between fashion brands and even the first four chapters of a novel by an award-winning journalist:

(Credit: Google)

The data leak problem isn’t limited to Claude. Your shared Grok conversations could appear in Google’s search results. Meta AI also makes your shared conversations public (though the company recently added a warning prompt before users share chats to the Discover feed). In August, OpenAI removed ChatGPT conversations from Google search results, stating that indexing users’ conversations was an “experiment” that had come to an end.

The most interesting aspect of the Cybernews survey results is that most people (89%) acknowledged knowing that AI tools carry significant risks. Data breaches were the most commonly cited consequence. As the report pointed out, “It means employees use unapproved AI tools at work and share sensitive information, even though they know it may result in a data breach.” 

If data breach risks aren’t a good enough reason to think twice before using unauthorized AI tools at work, there are other compliance, legal, and regulatory issues to consider. Let’s dig deeper into the security and privacy risks associated with using AI for work, and what companies can do to train employees to use AI safely.


What’s Wrong With Using AI at Work?

When employees use AI tools that haven’t been properly tested by the company’s IT department, they expose the company to a range of liabilities. Unauthorized AI usage is cited as the reason for a $670,000 increase in data breach financial fallout, as stated in IBM’s 2025 report on data breach costs. I’ve compiled a non-exhaustive list of additional factors that employers (and corporate security teams) should consider when developing guidelines and policies for AI usage at work:

LLMs Are Vulnerable—And Hackers Know It

Security researchers regularly test popular chatbots, so it’s not surprising that they’ve already found vulnerabilities. In August, OpenAI patched a vulnerability that may have allowed hackers to force ChatGPT to leak victims’ email information. At the Black Hat cybersecurity conference that same month, researchers revealed how hackers could feed malicious prompts to Google Gemini using Google Calendar invitations. 

Google patched that vulnerability before the conference began; however, a similar vulnerability exists in other chatbots, including Copilot and Einstein. Microsoft fixed the flaw in August, while Salesforce patched it in early September. All of these responses came months after researchers reported the vulnerability in June. It’s also worth noting that the security holes were discovered and reported by researchers, not hackers. They’ve certainly found ways to get valuable data from AI chatbots, but they’re not sharing that information publicly.

AI Hallucinations Are Costing Companies Real Money

AI hallucinations are very real, and they cost companies a lot of money and reputational damage. In one case, an attorney was fined and reprimanded by a judge for citing more than 20 fake cases in a legal brief. The lawyer claimed he fed his pre-written brief through an AI tool to “enhance it,” then submitted the brief before reading it.

A September report from NBC noted that while outsourced workers have been replaced by AI at many businesses, human freelancers are finding work cleaning up AI-generated mistakes. In another recent example, the global financial consulting firm Deloitte refunded a six-figure sum to the Australian government after submitting an AI-generated report that cited several fake academic papers and quotes, along with other factual errors.

Workslop: When AI Output Looks Polished But Wastes Time

A team at Stanford’s Social Media Lab found that 40% of US-based full-time office employees reported receiving “workslop.” The term refers to AI-generated content that “masquerades as good work but lacks the substance to meaningfully advance a given task.” In short, workslop is a problem because it isn’t helpful. When employees waste billable hours fixing code, rewriting, or fact-checking AI-generated documents, notes, and sales figures, it nullifies the initial AI-assisted productivity boost. Lost productivity often results in lost profits.

Sometimes, workslop is a relatively benign email containing a telltale em dash; other times, it’s AI used in situations where it shouldn’t be, opening companies up to compliance and regulatory issues, as well as financial and legal repercussions. 


Newsletter Icon

Newsletter Icon

Get Our Best Stories!

Stay Safe With the Latest Security News and Updates


SecurityWatch Newsletter Image

Sign up for our SecurityWatch newsletter for our most important privacy and security stories delivered right to your inbox.

Sign up for our SecurityWatch newsletter for our most important privacy and security stories delivered right to your inbox.

By clicking Sign Me Up, you confirm you are 16+ and agree to our Terms of Use and Privacy Policy.

Thanks for signing up!

Your subscription has been confirmed. Keep an eye on your inbox!

Claude’s Browser Extension Takes Screenshots While You Work

If you’re using AI-powered browser extensions or AI browsers, you’re also leaving the data door open for other security problems, like prompt-injection attacks or surreptitious, AI-enhanced data collection. An example of this is Claude AI’s browser extension for Chrome, which takes screenshots of your active browser tab while you use it, automatically dispelling any notions of workplace privacy while the extension is running. The tool won’t access sites in risky categories, including financial sites, which is good. That makes it a bit harder to inadvertently feed your banking details into the chatbot, but it’s incredibly bad for keeping corporate and clients’ private data safe. 

It’s worth noting that Claude’s browser extension is an experimental feature that is currently limited to select Max plan subscribers. My colleague, Ruben Circelli, also wrote an article about why you should never trust chatbots with important tasks, such as answering emails, crafting resumes, handling sensitive client data or purchase orders, creating factual documents, or even keeping corporate secrets safe.

Some AI Tools Collect Shockingly Detailed User Data

Earlier this year, I tested several AI chatbots to determine which one collects the most data from your devices. Of the ones I tested, ChatGPT, DeepSeek, and Qwen collected the most data. DeepSeek even collects your keystroke patterns.

Recommended by Our Editors

Copilot collected the least data because it’s integrated into Microsoft 365 and uses your data to generate contextual, work-appropriate answers. That’s how Copilot Studio meets various data protection standards, including FedRAMP, HIPAA, and SOC. ChatGPT uses the same OpenAI GPT model, but it also contributes to and draws from a public dataset. Unfortunately, a recent study revealed that less than 3% of employees prefer using Copilot, while an overwhelming majority prefer ChatGPT.


Want to Prevent AI Data Leaks? Start With a Smarter Policy

While it’s up to employees not to enter private company data into AI chatbots, it’s also important for companies to give employees training and guidelines for appropriate AI use at work. I do not use AI during my workday (unless I need to create phishing email examples), but I was surprised to see that the topic was missing from my employer’s annual compliance training this year. I’ve come up with a few ideas to help small business owners create AI policies to protect company data:

Build a Policy That Can Adapt

I recommend building a lot of flexibility into your company’s policy, as generative AI is evolving at a rapid pace, and each department within the company may utilize AI in a different way. For example, the IT department may need to use several different LLMs to complete a project, whereas your project management team may opt for a single tool. It’s a good idea for department heads to clearly state which tools employees can use, and note that other tools require written permission from management. Be ready to update your policies every few months.

Guard Your Sensitive Data

The policy should be firm about the types of data employees can enter into an AI tool that’s connected to the public web. Your policies will depend on whether your company uses third-party AI tools, such as the Claude AI browser extension mentioned above, or internal-use-only tools trained on your company’s data. These tools are usually developed and hosted internally, though they can be bought or licensed from other providers (who promise not to train their models on your company’s information). For example, PCMag offers Maggie, an AI tool based on Anthropic’s Claude AI and trained exclusively on our own reviews and stories. 

Train Teams on Safe AI Use

Give your team the best chance at blending productivity with privacy. Appoint AI experts in your company to a small steering committee (keep it under 10 people to facilitate open conversations) that can help employees get familiar with your custom AI tools. Ask the committee to develop best practices for the use of AI in your workplace, along with a list of approved AI chatbots and other tools. 

Make Privacy Part of Compliance

Nonprofits, such as the International Association of Privacy Professionals (IAPP), offer training in AI governance. You can also enroll employees in courses from cybersecurity firms like Fortra or the SANS Institute.


Learn How to Work With AI

If you want to learn how to use AI to achieve your productivity goals, refine your emails, or create video presentations, I recommend visiting PCMag’s All About AI hub, where you’ll find in-depth reviews of all of the major LLMs and information on how to use them.

About Our Expert

Kim Key

Kim Key

Senior Writer, Security


Experience

I review privacy tools like hardware security keys, password managers, private messaging apps, and ad-blocking software. I also report on online scams and offer advice to families and individuals about staying safe on the internet. Before joining PCMag, I wrote about tech and video games for CNN, Fanbyte, Mashable, The New York Times, and TechRadar. I also worked at CNN International, where I did field producing and reporting on sports that are popular with worldwide audiences.

In addition to the categories below, I exclusively cover ad blockers, authenticator apps, hardware security keys, and private messaging apps.

Read Full Bio

Sign Up For Daily Newsletter

Be keep up! Get the latest breaking news delivered straight to your inbox.
By signing up, you agree to our Terms of Use and acknowledge the data practices in our Privacy Policy. You may unsubscribe at any time.
Share This Article
Facebook Twitter Email Print
Share
What do you think?
Love0
Sad0
Happy0
Sleepy0
Angry0
Dead0
Wink0
Previous Article How to choose the best server to host your web project
Next Article Super Micro and Dell suspected of exporting Nvidia chips to China · TechNode
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Stay Connected

248.1k Like
69.1k Follow
134k Pin
54.3k Follow

Latest News

The Galaxy S25 FE is the first sensibly-priced Samsung phone I’ve enjoyed using in years | Stuff
Gadget
Chinese action-adventure game ‘Longevity Yin and Yang’ reveals first gameplay trailer · TechNode
Computing
Tips to speed up your home connection
Software
Today's NYT Connections: Sports Edition Hints, Answers for Oct. 29 #401
News

You Might also Like

News

Today's NYT Connections: Sports Edition Hints, Answers for Oct. 29 #401

3 Min Read
News

Slice 28% Off the Price of the Corsair Scimitar Elite Gaming Mouse That Our Expert Rated as ‘Outstanding’

4 Min Read
News

October 28, 2025 – iPhone 18 rumors, iPad apps

1 Min Read
News

An AI tool promises to detect profanity, NSFW content in livestreams

2 Min Read
//

World of Software is your one-stop website for the latest tech news and updates, follow us now to get the news that matters to you.

Quick Link

  • Privacy Policy
  • Terms of use
  • Advertise
  • Contact

Topics

  • Computing
  • Software
  • Press Release
  • Trending

Sign Up for Our Newsletter

Subscribe to our newsletter to get our newest articles instantly!

World of SoftwareWorld of Software
Follow US
Copyright © All Rights Reserved. World of Software.
Welcome Back!

Sign in to your account

Lost your password?