JFrog today expanded its Software Supply Chain Platform with a new feature called Shadow AI Detection, designed to give enterprises visibility and control over the often-unmanaged AI models and API calls creeping into their development pipelines. The move aims to address the rising security, compliance, and risk exposure posed by “shadow AI”: AI integrations adopted informally by teams without organizational oversight.
The newly introduced capability automatically scans and inventories all internal AI models and external API gateways used across an organization, including unsanctioned tools from providers like OpenAI, Anthropic, and other third-party services. From there, enterprises can implement centralized governance to enforce security and compliance policies, define authorized access paths, track usage, and maintain a full audit trail.
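The kind of discovery described above can be illustrated with a minimal sketch: scanning source files for references to known AI provider endpoints. The provider list, patterns, and function names below are assumptions for illustration only, not JFrog's actual implementation, which the company has not detailed publicly.

```python
import re
from pathlib import Path

# Hypothetical endpoint patterns for well-known AI providers; a real
# detector would cover far more signals (SDK imports, model files, proxies).
AI_PROVIDER_PATTERNS = {
    "OpenAI": re.compile(r"api\.openai\.com"),
    "Anthropic": re.compile(r"api\.anthropic\.com"),
    "Hugging Face": re.compile(r"huggingface\.co"),
}

def scan_source(text: str) -> set:
    """Return the set of AI providers referenced in a source string."""
    return {name for name, pat in AI_PROVIDER_PATTERNS.items() if pat.search(text)}

def scan_tree(root: Path) -> dict:
    """Walk a source tree and map each file to the providers it appears to call."""
    inventory = {}
    for path in root.rglob("*.py"):
        hits = scan_source(path.read_text(errors="ignore"))
        if hits:
            inventory[str(path)] = hits
    return inventory
```

Feeding the resulting inventory into a central policy engine is the step where governance happens: flagging providers that are not on an approved list, recording who uses what, and building the audit trail the press release describes.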
In the accompanying press release, JFrog’s VP and CTO of ML, Yuval Fernbach, framed the development as a response to growing blind spots in AI adoption, stating that Shadow AI Detection “strengthens JFrog’s leadership in securing the AI supply chain 360 degrees, helping companies utilize AI safely and responsibly.”
The timing is notable: as businesses increasingly embed AI into applications and workflows, often rapidly and without centralized policy, the risk of unmanaged, insecure, or non-compliant AI use grows. Shadow AI isn’t just about security; it can lead to regulatory, data-leak, and supply-chain vulnerabilities. JFrog argues that governance mechanisms mirroring those used for software packages and dependencies must now be extended to AI models and AI-driven interactions.
With the new capability, JFrog positions its platform as more than a traditional artifact repository; it becomes a single system of record for an organization’s software and AI supply chain. Organizations adopting the feature will be better equipped to enforce compliance with global AI-related regulations such as the forthcoming EU AI Act, the US’s evolving frontier-AI transparency rules, and emerging guidelines under NIS2 and other cyber-resilience frameworks.
JFrog is not the only vendor tackling shadow AI governance. ModelOp Center is designed as an “AI control tower”, providing lifecycle management and governance for all AI within an organization (in-house models, third-party vendor models, generative-AI solutions, and more). It supports registration of new AI use cases, risk assessment, policy enforcement, audit trails, and continuous monitoring. Unlike typical MLOps or data platforms, which focus on model training, deployment, or data pipelines, ModelOp explicitly targets governance, compliance, and enterprise-wide oversight.
Aurva is another security-focused platform that provides real-time monitoring and observability for AI/ML systems, including agentic workloads and API-based AI model calls. According to the vendor, its AIOStack product gives “deep, kernel-level visibility and control,” helping detect unauthorized data access, potential data leakage, and suspicious behavior by AI agents. Aurva markets itself as a tool for “shadow-AI visibility,” enabling organizations to discover unmanaged or unsanctioned AI usage in their environment, much like what JFrog aims to do with its Shadow AI Detection.
Shadow AI Detection will roll out as part of the existing JFrog AI Catalog, with general availability expected in 2025.
