The multinational retail chain where Katie Smith worked in data analytics and finance for six years was so concerned with controlling employees’ online activities that it blocked access to all generative artificial intelligence websites and locked down YouTube.
So Smith took matters into her own hands. She used a personal ChatGPT account to help with writing work emails, fine-tuned them using Grammarly Inc.’s English language writing assistant and tapped into the AI features of Canva Inc.’s graphic design platform to help with presentations. “Retail has never been a technology innovator,” she said.
Early last month, Smith switched jobs, becoming a data and AI strategy manager at a large consulting firm she asked not to identify. The experience has been like night and day.
“What they allow is incredible,” Smith said. “We’re very pro-AI. We give employees access to copilots and a lot of other capabilities in the AI ecosystem.”
Smith’s experience illustrates the phenomenon of “shadow AI,” or the use of AI tools by employees without the blessing or knowledge of the information technology organization. By many accounts, generative AI is already widely used in most organizations, whether it’s permitted or not.
A recent survey by Prompt Security Inc. found that organizations use an average of 67 AI tools, 90% of them without information technology department approval. Capgemini SE reported that more than 60% of software developers who use generative AI do so unofficially.
The phenomenon isn’t new. People have been bringing technology in through the back door since PCs were introduced in the 1980s, a practice known as shadow IT. Local area networks, email, the internet, software-as-a-service and smartphones were all in widespread use before IT organizations instituted policies around them.
Appfire’s Kersten: “If [a large language model] doesn’t know the answer, it might lie to you or make stuff up.” Photo: LinkedIn
But AI is a bit different. Unlike productivity software and the internet, which primarily manipulate and store data, AI models can incorporate information users provide into their training data. That creates risks of inadvertent data disclosure, privacy violations and security vulnerabilities. Software developers who use AI to improve the quality of their code may unintentionally feed a company’s intellectual property into the training model, which is what happened at Samsung Electronics Co. Ltd. two years ago.
Like a human
People also interact with AI differently than traditional IT services. Generative AI’s humanlike qualities are more likely to entice users into relationships or influence their behavior. Unlike traditional software, which reliably delivers the same outputs in response to the same inputs, generative AI tools are prone to creative excursions, hallucinations and errors.
“If it doesn’t know the answer, it might lie to you or make stuff up,” said Doug Kersten, chief information security officer at Appfire Technologies LLC, a maker of group productivity software. “Shadow IT in the past was someone doing accounting on Excel and you didn’t know about it. With AI, there are legal, governance and regulatory issues.”
Awareness is building of the need for companies to put guardrails in place. Some 12,000 people have gone through the AI governance training program run by the International Association of Privacy Professionals (IAPP), said Ashley Casovan, managing director at the association’s AI Governance Center. “It’s the largest adoption of an AI governance program to date,” she said.
Opaque policies
Generative AI has washed onto the scene so quickly that many organizations have been unable to formulate or enforce clear policies on its use. A recent survey by Unily Group Ltd. found that only 14% of employees believe their organization’s policy on AI use is “very clear,” and 39% said they find it easy to use unapproved tools. One in five said they definitely or possibly have entered sensitive information into an unsanctioned AI tool.

Reality Defender’s Shahriyari: Commercial LLM providers “may not train on the data, but they still have it.” Photo: Reality Defender
IT organizations face a conundrum in regulating AI use. Though web-based services like OpenAI LLC’s ChatGPT and Anthropic PBC’s Claude are easy to monitor and block, most generative AI models are now embedded in commercial software as copilots or assistants. WitnessAI Inc., the maker of software that regulates AI use in enterprises, has compiled a database of about 3,000 such applications. By some estimates, the number may be as high as 12,000, with hundreds more added each month.
“They’re expanding and enhancing so quickly that even when you think you have it under control, you find you don’t,” said Appfire’s Kersten.
The largest developers of commercial large language models have become more transparent over time about their training and data harvesting practices, with most pledging not to incorporate user-supplied data into their training models. “I think we can trust the mainstream ones,” said Ali Shahriyari, co-founder and chief technology officer at deepfake detection firm Reality Defender Inc.
Who’s got the data?
But those firms aren’t immune to data breaches, and their policies for retaining user-supplied data are often murky, even if they don’t use it to train their models. “They may not train on the data, but they still have it,” Shahriyari said. “Who knows if they’re saving it or not?”
The bigger concern is the hundreds of small firms that have popped up to offer generative AI-based services that produce videos, aid in marketing campaigns, generate resumes and automate other specialty tasks. Their data usage policies may be unclear, or users may not bother to read them.
“If you’re not paying for a service, then you’re probably the service,” said Casey Corcoran, chief information security officer at Stratascale, an SHI International Corp. company.

SAS’s Upchurch: “We don’t like to be in the business of blocking. My thinking is to kill with kindness.”
Organizations are pursuing a variety of measures ranging from banning AI use outright to providing training programs and hoping employees make good decisions. At the very least, “organizations need to start with a clear acceptable use policy embedded in employee handbooks and signed by employees and contractors,” said Morey Haber, chief security adviser at identity and access security firm BeyondTrust Corp. “Next, IT departments need to establish technology controls for monitoring and usage of AI on corporate systems, including blocking AI domains and content and allowing approved sites.”
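The second step Haber describes can be illustrated with a small example. The Python sketch below shows the general shape of an allowlist/blocklist check a web gateway might apply to AI traffic; the domain lists, names and routing decisions are hypothetical and not drawn from any particular vendor’s product.

# Illustrative only: the approved domains and the "review" routing are made up.
from urllib.parse import urlparse

APPROVED_AI_DOMAINS = {"copilot.internal.example.com"}   # hypothetical enterprise-licensed tools
BLOCKED_AI_DOMAINS = {"chat.openai.com", "claude.ai"}     # examples of public consumer endpoints

def classify_request(url: str) -> str:
    """Return 'allow', 'block' or 'review' for a requested URL."""
    host = urlparse(url).hostname or ""
    if host in APPROVED_AI_DOMAINS:
        return "allow"    # sanctioned tool: let it through
    if host in BLOCKED_AI_DOMAINS:
        return "block"    # unsanctioned consumer AI endpoint
    return "review"       # unknown tool: log it and route it to the approval process

print(classify_request("https://claude.ai/chat"))  # -> block

In practice this logic typically lives in an existing secure web gateway or DNS filter rather than in custom code; the point is that approved and blocked AI destinations have to be enumerated somewhere and kept current.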
Many companies banned the use of ChatGPT outright in generative AI’s early days, and some, particularly government agencies and companies in regulated industries, still do. Previous shadow IT experience, though, has shown that denying employees access to productivity-enhancing technology can do more harm than good.
“Blocking everything will cause users to rebel,” said Colin McCarthy, director of digital transformation at cloud managed service provider Promevo LLC.
Essential tools
The Unily survey found that 54% of enterprise knowledge workers said they would leave their job if their management took away what they consider to be essential software tools. “You’re going to lose talent,” Smith said. “They’re either going to use shadow AI or go somewhere else.”
A more common practice at companies with highly educated and tech-savvy workforces is the one adopted by SAS Institute Inc. The analytics software company established a data ethics practice in 2021 and has an oversight committee that manages policies.
The company maintains a list of approved tools and promises to quickly adjudicate employee requests to add new ones. SAS blocks a handful of generative AI services, most notably DeepSeek, but encourages experimentation.
“We don’t like to be in the business of blocking,” said Chief Information Officer Jay Upchurch. “We try to chaperone people without slowing them down. My thinking is to kill with kindness.”
SAS has embraced a disciplined approach to adopting new tools. When Microsoft Corp.’s Copilot debuted, the IT organization recruited 300 volunteer users to test and comment on their experiences with the digital assistant. They reported average productivity gains of about 40 minutes per week.
“That created viral interest,” Upchurch said. “We had a lot of success and demand, so we went to 600 users in the first few months.”
Reality Defender similarly trusts its employees to follow guidelines, backed by a “strict no-tolerance policy” for tools that don’t meet standards, Shahriyari said. “We’ve thus far not had to take any drastic action, as our team is aligned with the use of AI tools within given parameters,” he said.
Appfire set up a steering committee in the early days of generative AI because, Kersten said, “if you don’t, you will immediately see everybody using different forms of AI in ways that you don’t expect, trying to build it into products in ways that create liability.”
The firm blocks a few sites deemed to be high risk and took out an enterprise ChatGPT license for internal use. Otherwise, Kersten said, “we rely a lot on policy and communicating that if you want to purchase anything that’s AI-related, you need to go through a process to get it approved.”
Tools can help
Startups and associations are stepping into the breach. The IAPP runs an AI governance certification program and is hosting conferences on the topic in May and September.

WitnessAI’s Spencer: Most CISOs are “interested in just knowing what users are doing.” Photo: WitnessAI
Practitioners said a mix of software tools for data loss protection, cloud access control and web filtering can help with monitoring shadow AI activity. Dedicated AI governance platforms are also emerging from companies including Credo.AI Corp., SurePath AI Inc., Guardrails AI Inc. and Liminal AI Inc.
WitnessAI tracks employee access to internal and external AI applications and creates a catalog of all activity, including prompts and responses. It classifies conversations by risk and intent and applies policies that map to corporate acceptable-use practices. The company also says its software prevents prompt injections and ensures that approved AI applications behave appropriately.
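The pattern such tools follow, inspecting each prompt for sensitive content and mapping the result to an action, can be sketched in a few lines. The Python example below is a generic illustration with made-up detection patterns and policies; it does not represent WitnessAI’s actual product or API.

# Generic illustration of prompt risk classification and policy enforcement.
import re

RISK_PATTERNS = {
    "pii": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                   # e.g., U.S. Social Security numbers
    "credentials": re.compile(r"(?i)\b(api[_-]?key|password)\b"),
    "source_code": re.compile(r"\b(def |class |import )"),
}
POLICY = {"pii": "block", "credentials": "block", "source_code": "redact"}  # hypothetical mapping

def evaluate_prompt(prompt: str) -> str:
    """Return the most restrictive action triggered by a prompt."""
    actions = {POLICY[name] for name, pattern in RISK_PATTERNS.items() if pattern.search(prompt)}
    if "block" in actions:
        return "block"     # stop the prompt before it reaches the external model
    if "redact" in actions:
        return "redact"    # strip the sensitive span, then forward and log the exchange
    return "allow"         # forward, and catalog the conversation for later review

print(evaluate_prompt("my api_key is sk-123"))  # -> block

A real governance platform would use classifiers rather than a handful of regular expressions, apply per-user and per-application policies, and log both prompts and responses, but the allow/redact/block decision structure is the common thread.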
Chief Technology Officer Gil Spencer said chief information security officers are less interested in blocking AI applications than understanding how they’re used. “When we started, we thought people were worried about AI attacks,” he said. “What we found is CISOs were most interested in just knowing what users were doing.”
Ultimately, shadow AI is likely to take the same course as the shadow technologies that came before it. Given Gartner Inc.’s projection that 80% of independent software vendors will have embedded generative AI capabilities in their enterprise applications by 2026, it’s too late to close the floodgates.
“The initial risks of shadow AI are real, but its value will ultimately outweigh them, leading to structured adoption, not rejection,” said Douglas McKee, executive director of threat research at SonicWall Inc.
Smith agreed. “The people who understood how impactful AI is educated themselves,” she said of colleagues at the retail chain. “They will use it because they want to be more productive.”