Empower Users and Protect Against GenAI Data Loss

News Room · Published 6 June 2025 · Last updated 4:16 PM

Jun 06, 2025The Hacker NewsArtificial Intelligence / Zero Trust

When generative AI tools became widely available in late 2022, it wasn’t just technologists who paid attention. Employees across all industries immediately recognized the potential of generative AI to boost productivity, streamline communication and accelerate work. Like so many waves of consumer-first IT innovation before it—file sharing, cloud storage and collaboration platforms—AI landed in the enterprise not through official channels, but through the hands of employees eager to work smarter.

Faced with the risk of sensitive data being fed into public AI interfaces, many organizations responded with urgency and force: They blocked access. While understandable as an initial defensive measure, blocking public AI apps is not a long-term strategy—it’s a stopgap. And in most cases, it’s not even effective.

Shadow AI: The Unseen Risk

The Zscaler ThreatLabz team has been tracking AI and machine learning (ML) traffic across enterprises, and the numbers tell a compelling story. In 2024 alone, ThreatLabz analyzed 36 times more AI and ML traffic than in the previous year, identifying over 800 different AI applications in use.

Blocking has not stopped employees from using AI. They email files to personal accounts, use their phones or home devices, and capture screenshots to input into AI systems. These workarounds move sensitive interactions into the shadows, out of view of enterprise monitoring and protections. The result? A growing blind spot known as Shadow AI.

Blocking unapproved AI apps may make usage appear to drop to zero on reporting dashboards, but in reality, your organization isn’t protected; it’s just blind to what’s actually happening.

Lessons From SaaS Adoption

We’ve been here before. When early software-as-a-service (SaaS) tools emerged, IT teams scrambled to control the unsanctioned use of cloud-based file storage applications. The answer wasn’t to ban file sharing, though; it was to offer a secure, seamless, single-sign-on alternative that matched employee expectations for convenience, usability, and speed.

However, this time around the stakes are even higher. With SaaS, data leakage often means a misplaced file. With AI, it could mean inadvertently training a public model on your intellectual property with no way to delete or retrieve that data once it’s gone. There’s no “undo” button on a large language model’s memory.

Visibility First, Then Policy

Before an organization can intelligently govern AI usage, it needs to understand what’s actually happening. Blocking traffic without visibility is like building a fence without knowing where the property lines are.

We’ve solved problems like these before. Zscaler’s position in the traffic flow gives us an unparalleled vantage point. We see what apps are being accessed, by whom and how often. This real-time visibility is essential for assessing risk, shaping policy and enabling smarter, safer AI adoption.
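The visibility step described above can be sketched in code. The snippet below is a minimal, hypothetical illustration of mining web-proxy logs for AI-app traffic; the log schema, field names, and domain list are illustrative assumptions, not any vendor's actual data model.

```python
from collections import Counter

# Hypothetical sample of web-proxy log records; the schema is illustrative.
LOG_RECORDS = [
    {"user": "alice", "host": "chat.openai.com"},
    {"user": "bob", "host": "gemini.google.com"},
    {"user": "alice", "host": "intranet.example.com"},
    {"user": "carol", "host": "chat.openai.com"},
]

# A small stand-in for a real catalog of known AI application domains.
AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def ai_usage_report(records):
    """Count AI-app transactions per destination app and per user."""
    by_app, by_user = Counter(), Counter()
    for r in records:
        if r["host"] in AI_DOMAINS:
            by_app[r["host"]] += 1
            by_user[r["user"]] += 1
    return by_app, by_user

by_app, by_user = ai_usage_report(LOG_RECORDS)
print(by_app.most_common())   # which AI apps are in use, and how often
print(by_user.most_common())  # which users are driving that traffic
```

Even a toy report like this answers the questions that matter before any policy is written: which apps, by whom, and how often.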

Next, we’ve evolved how we deal with policy. Many providers offer only the black-and-white options of “allow” or “block.” The better approach is context-aware, policy-driven governance that aligns with zero-trust principles: assume no implicit trust and demand continuous, contextual evaluation. Not every use of AI presents the same level of risk, and policies should reflect that.

For example, we can provide access to an AI application with caution for the user or allow the transaction only in browser-isolation mode, which means users aren’t able to paste potentially sensitive data into the app. Another approach that works well is redirecting users to a corporate-approved alternative app which is managed on-premise. This lets employees reap productivity benefits without risking data exposure. If your users have a secure, fast, and sanctioned way to use AI, they won’t need to go around you.
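The graduated verdicts described above can be sketched as a small decision function. The policy table, risk tiers, and verdict names below are hypothetical assumptions for illustration, not Zscaler's actual policy model.

```python
# Hypothetical policy table mapping an app's risk tier to a verdict.
POLICY = {
    "sanctioned":  "allow",     # corporate-approved AI app
    "low_risk":    "caution",   # allow, but warn the user first
    "medium_risk": "isolate",   # browser isolation: view only, no paste
    "high_risk":   "redirect",  # steer the user to the approved alternative
}

def evaluate(app_risk: str, action: str) -> str:
    """Return a verdict for a single transaction, evaluated every time
    (zero trust: no implicit, persistent allow; unknown apps are blocked)."""
    verdict = POLICY.get(app_risk, "block")
    # Example of contextual tightening: a cautioned app may take typed
    # prompts, but file uploads are still blocked.
    if verdict == "caution" and action == "upload":
        return "block"
    return verdict

print(evaluate("medium_risk", "paste"))  # isolate
print(evaluate("low_risk", "upload"))    # block
```

The point of the sketch is that the verdict depends on both the app's risk tier and what the user is trying to do, rather than a single allow/block switch.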

Last, Zscaler’s data protection tools mean we can allow employees to use certain public AI apps while preventing them from inadvertently sending out sensitive information. Our research shows over 4 million data loss prevention (DLP) violations in the Zscaler cloud: instances where sensitive enterprise data—such as financial data, personally identifiable information, source code, and medical data—was about to be sent to an AI application, and the transaction was blocked by Zscaler policy. Without that DLP enforcement, real data loss would have occurred in these AI apps.
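The DLP step can be illustrated with a minimal sketch: scan an outbound AI prompt against pattern detectors and block the transaction if any fire. The two regex detectors below are toy assumptions; production DLP engines use far richer techniques (exact-data match, document fingerprints, ML classifiers).

```python
import re

# Toy DLP detectors; patterns and names are illustrative only.
DETECTORS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def inspect_prompt(text: str) -> list:
    """Return the names of all detectors that match the outbound prompt."""
    return [name for name, pat in DETECTORS.items() if pat.search(text)]

def allow_transaction(text: str) -> bool:
    """Allow the AI transaction only if no detector fires."""
    return not inspect_prompt(text)

print(allow_transaction("Summarize our Q3 roadmap"))           # True
print(allow_transaction("Customer SSN is 123-45-6789, help"))  # False
```

Inline inspection like this is what lets an organization say "yes" to an AI app while still saying "no" to a specific risky prompt.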

Balancing Enablement With Protection

This isn’t about stopping AI adoption—it’s about shaping it responsibly. Security and productivity don’t have to be at odds. With the right tools and mindset, organizations can achieve both: empowering users and protecting data.

Learn more at zscaler.com/security

This article is a contributed piece from one of our valued partners.
