Steve Wilson, Chief Product Officer
AI adoption has skyrocketed as organizations strive to harness machine learning (ML) and artificial intelligence (AI) to drive efficiency and innovation. However, with the rapid adoption of AI comes significant risks that many organizations are only beginning to fully comprehend.
Among the most pressing of these risks is the AI software supply chain—a complex network of open-source tools, proprietary software and cloud services that can be a breeding ground for vulnerabilities. Despite the immense potential AI offers, it is often built on foundations that lack the necessary scrutiny and security, putting organizations at considerable risk.
As more organizations adopt AI, the integrity of their AI supply chain becomes critical, and failing to address these risks could have devastating consequences—especially given that Verizon found a 68% increase between 2023 and 2024 in breaches involving a "supply chain interconnection."
Here’s a closer look at the current state of the AI supply chain and the steps companies can take to ensure their systems remain trustworthy and secure.
The Fragility Of AI’s Building Blocks
The AI software supply chain encompasses every element that contributes to the development, deployment and operation of AI systems. This includes the sourcing of raw data, model training, deployment of machine learning frameworks and continuous integration and delivery (CI/CD) pipelines. Many companies also rely on open-source models, third-party tools, libraries and datasets for AI development. While these elements provide the building blocks for AI systems, they also present significant risks.
Open Source: A Double-Edged Sword
While open-source software is a staple in the AI development community, it also represents one of the most significant weak points in the AI supply chain. Unlike more mature open-source platforms such as GitHub, platforms such as Hugging Face, which hosts thousands of AI models, are still maturing in their security practices.
Research found a concerning number of maliciously poisoned models on these platforms, demonstrating how bad actors are exploiting gaps in the AI supply chain. These security vulnerabilities stem not from “lazy” open-source development but from an immature supply chain that bad actors are actively exploiting.
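As an illustration of what vetting can look like in practice, the following Python sketch scans a pickle-serialized model for references to code-executing functions before anything is loaded, in the spirit of open-source scanners such as picklescan. The list of suspicious globals is a small, illustrative sample, not an exhaustive denylist, and the string-tracking heuristic for STACK_GLOBAL is a simplification:

```python
import pickletools

# Illustrative sample of (module, name) pairs that execute code when unpickled.
SUSPICIOUS_GLOBALS = {
    ("os", "system"),
    ("posix", "system"),
    ("subprocess", "run"),
    ("subprocess", "Popen"),
    ("builtins", "eval"),
    ("builtins", "exec"),
}

def scan_pickle(data: bytes):
    """Return suspicious (module, name) globals referenced by a pickle stream,
    without ever executing it."""
    findings = []
    strings = []  # recently pushed strings, consumed by STACK_GLOBAL
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
        elif opcode.name == "GLOBAL":
            # GLOBAL carries "module name" as one space-joined argument.
            module, _, name = arg.partition(" ")
            if (module, name) in SUSPICIOUS_GLOBALS:
                findings.append((module, name))
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            # Protocol 4+ pushes module and name as the two preceding strings.
            pair = (strings[-2], strings[-1])
            if pair in SUSPICIOUS_GLOBALS:
                findings.append(pair)
    return findings
```

A payload crafted with a malicious `__reduce__` is flagged before loading, while an ordinary weights dictionary passes clean. Real scanners cover far more opcodes and memo reuse; the point is that static inspection before deserialization is possible and cheap.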
For businesses, relying on open-source components without proper vetting is like building a house on unstable foundations. While these tools offer quick access to powerful AI capabilities, they also increase the risk of introducing security flaws that are difficult to track or address.
Poisoned Data, Poisoned Outcomes
One of the biggest dangers in the AI supply chain is poisoned training data. AI models rely on massive datasets for training, but the more extensive the dataset, the harder it becomes to verify the accuracy and safety of the input data. If attackers manage to inject malicious or biased data into the training pipeline, they can poison the model, leading to incorrect or harmful outputs.
Poisoned data can train AI systems to make unsafe decisions, propagate bias or even introduce exploitable security flaws. This kind of attack is difficult to detect, and the effects may only become apparent long after the model is deployed.
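Subtle poisoning is hard to catch, but even a crude statistical screen before training raises the bar. The sketch below, a minimal illustration rather than a complete defense, flags training values that sit far from the rest of the distribution; the threshold of three standard deviations is a common rule of thumb, not a calibrated setting:

```python
from statistics import mean, stdev

def screen_outliers(values, z_threshold=3.0):
    """Flag indices whose value lies more than z_threshold standard
    deviations from the mean -- a crude pre-training sanity check.
    Catches gross injection, not subtle or distribution-aware poisoning.
    """
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values)
            if abs(v - mu) / sigma > z_threshold]
```

In practice, this kind of check belongs in the ingestion pipeline alongside provenance tracking, so that flagged records can be traced back to their source rather than silently dropped.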
The Problem With Third-Party Dependencies And Libraries
Many businesses also depend on third-party vendors for their AI tools and services. While this provides immediate access to cutting-edge technologies, it can create a dependency that introduces additional security risks. AI-as-a-service (AIaaS) platforms may obscure key aspects of their supply chain, leaving customers in the dark about the security and reliability of the underlying technologies. Additionally, vendors may not disclose vulnerabilities in their systems, increasing the risk of breaches that can ripple through their clients’ operations.
What Companies Can Do To Address AI Supply Chain Risks
As AI becomes an integral part of more business operations, the integrity of the AI software supply chain must be a top priority. Companies that fail to secure their AI supply chains will expose themselves to significant threats, potentially compromising the very systems they rely on for innovation and competitive advantage.
Recognizing the risks within the AI software supply chain is the first step toward securing it. Here are some proactive strategies organizations can adopt to ensure their AI remains trustworthy and secure:
Implement rigorous model audits and monitoring.
AI systems should be continuously audited to ensure that both the models and the data they interact with remain secure. This involves not only testing models for vulnerabilities during initial deployment but also regularly scanning for new threats that may emerge over time.
Organizations should implement automated tools to monitor model behavior, identify deviations and ensure that the system isn’t compromised by unexpected inputs or poisoned data. This proactive approach allows companies to detect and address anomalies before they become critical security issues.
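One common way to detect behavioral deviation is to compare the live distribution of model scores against a baseline captured at deployment. The sketch below computes a population stability index (PSI) from scratch; the bin count and the alert thresholds in the docstring are illustrative conventions, not standards:

```python
import math

def psi(baseline, live, bins=10):
    """Population Stability Index between two score samples.
    Common rule of thumb (illustrative): < 0.1 stable,
    0.1 to 0.25 drifting, > 0.25 investigate.
    """
    lo = min(min(baseline), min(live))
    hi = max(max(baseline), max(live))
    width = (hi - lo) / bins or 1.0

    def frac(sample, b):
        left = lo + b * width
        right = lo + (b + 1) * width
        n = sum(1 for x in sample
                if left <= x < right or (b == bins - 1 and x == hi))
        return max(n / len(sample), 1e-6)  # floor avoids log(0)

    return sum((frac(live, b) - frac(baseline, b))
               * math.log(frac(live, b) / frac(baseline, b))
               for b in range(bins))
```

Wired into a monitoring job, a PSI spike on model outputs is an early signal that inputs have shifted or that the model is being probed or poisoned, prompting human review before the anomaly becomes an incident.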
Secure the entire lifecycle of AI development.
AI security must be embedded at every stage of development, from initial research and model training to deployment and post-deployment updates. This includes ensuring that all third-party tools, datasets and software libraries used in the AI’s development are thoroughly vetted and continuously monitored for potential vulnerabilities.
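A lightweight control along these lines is auditing the installed environment against an approved list of pinned versions. The sketch below assumes the pins live in a simple dictionary; in a real pipeline they would come from a reviewed lockfile, and the package names shown in use would be your own:

```python
from importlib import metadata

def audit(pinned):
    """Compare installed package versions against a pinned allowlist.
    Returns (package, problem) pairs: 'missing' or the unexpected version.
    """
    issues = []
    for name, version in pinned.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            issues.append((name, "missing"))
            continue
        if installed != version:
            issues.append((name, installed))
    return issues
```

Run in CI and at deployment, a non-empty result fails the build, so an unvetted or silently upgraded dependency never reaches production unnoticed.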
Adopt a zero-trust approach to data and models.
In line with zero trust architecture principles, organizations should assume that no component of their AI supply chain—whether internal or external—can be fully trusted by default. This means validating every piece of data, code and model, ensuring that all elements within the supply chain are authenticated and verified. Regularly re-evaluating and updating AI systems in response to new threats is also critical.
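One concrete zero-trust control is refusing to load any model artifact whose digest does not match a pinned value. This minimal sketch verifies a file against an expected SHA-256 digest; in practice the pinned digests would live in a signed manifest rather than in code:

```python
import hashlib

def verify_artifact(path, expected_sha256):
    """Stream the file through SHA-256 and compare against the pinned digest.
    Returns True only on an exact match; callers should refuse to load otherwise.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

The same gate applies equally to datasets and configuration: nothing enters the pipeline on the strength of its filename or its source alone.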
Securing The Future Of AI
The AI software supply chain may currently be a dumpster fire, but organizations have the tools and strategies to extinguish the flames. By being proactive in addressing open-source vulnerabilities, securing their training data and adopting rigorous supply chain monitoring, businesses can work to ensure their AI systems remain trustworthy and secure, better protecting themselves from the growing threats facing the AI ecosystem.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.