The sudden explosion of generative artificial intelligence has created immense opportunities for companies and public sector organizations of all sizes – but with such opportunity comes increased risk.
The risks inherent in gen AI, in fact, have stopped many enterprise gen AI initiatives dead in their tracks. The knee-jerk reaction for many executives is to shut gen AI down entirely – block access to public large language models at the firewall and implement “no gen AI” policies across the board.
This overreaction to the risks of gen AI is problematic for two reasons: First, it prevents organizations from building successful gen AI strategies. Second, it simply doesn't work. Employees will find a way around such limitations, perhaps by using their phones or accessing gen AI from home – a repeat of the familiar "bring your own device" problem, now dubbed "BYO LLM."
The way out of this conundrum is straightforward: Implement AI governance – not to slow down innovation, but rather to remove roadblocks to adoption of gen AI in ways that are safe, legal and compliant with corporate policies.
The complex gen AI governance landscape
Given the multifaceted risks inherent in gen AI use, from bias in business decision making to exposure of sensitive information, it’s no surprise that the software vendor community smells blood in the water.
Existing governance tooling – from old-school governance, risk and compliance or GRC offerings to more modern cloud governance tools – falls short across the board. It's no wonder that numerous vendors of all sizes are jumping into the gen AI governance market with a variety of offerings.
Getting a handle on this nascent market is especially challenging, because the vendors throwing their respective hats into the gen AI ring are delivering offerings that are often quite different from one another. Which approach each vendor takes depends on which risks it focuses on. To understand the gen AI governance space, therefore, it’s important to understand the risks inherent in gen AI.
Each risk category thus becomes a starting point for each vendor as it builds out a differentiated offering. Here are the most common starting points:
Regulation-first approach
Over the last two years, legislative bodies around the world have rushed AI-centric regulations into effect. Regulated companies and government agencies face a bewildering tangle of such regulations, depending on where they do business and the nature of their offerings.
At the center of this firestorm are discrimination and bias compliance challenges. Using gen AI in the hiring process is one of the technology's highest-profile use cases – and one that exposes organizations to significant legal and compliance risk.
Several vendors are implementing solutions that take a regulation-first approach to gen AI compliance – as well as other forms of AI, including machine learning.
SolasAI Inc. focuses on mitigating regulatory, legal and reputational risk in AI – machine learning in particular, but also gen AI. The SolasAI platform targets bias and discrimination, untangling some of the complexities inherent in such risks.
For example, SolasAI identifies proxy discrimination, where a seemingly neutral attribute stands in for a protected one – say, when the location where a person shops indicates their race. It also extends its detection past personally identifiable information to other drivers of discrimination, such as whether someone attended Harvard versus Howard University.
For SolasAI, the goal is to provide its customers with the least discriminatory alternative, since eliminating bias altogether is impossible.
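To make the proxy idea concrete, here's a minimal, hypothetical sketch – not SolasAI's actual method – that screens model features for strong statistical association with a protected attribute:

```python
import pandas as pd

def flag_proxy_features(df: pd.DataFrame, protected: str,
                        candidates: list[str],
                        threshold: float = 0.4) -> dict[str, float]:
    """Flag candidate features strongly correlated with a protected
    attribute -- potential proxies even though they aren't protected
    themselves. Threshold and method are illustrative assumptions."""
    # One-hot encode so correlation is defined for categorical columns.
    encoded = pd.get_dummies(df[candidates + [protected]], dtype=float)
    protected_cols = [c for c in encoded.columns
                      if c == protected or c.startswith(protected + "_")]
    flagged = {}
    for feature in encoded.columns:
        if feature in protected_cols:
            continue
        for p in protected_cols:
            corr = encoded[feature].corr(encoded[p])
            if abs(corr) >= threshold:
                flagged[feature] = round(float(corr), 2)
    return flagged

# A shopping ZIP code that tracks closely with race would show up here.
```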
Also taking a regulation-first approach is Holistic AI Ltd. As with SolasAI, Holistic covers machine learning as well as gen AI. The company focuses on managing reputational and operational risk as well as compliance with standards and regulations.
Holistic continually monitors shifting global AI regulations and can also discover AI use across an organization via a combination of documentation scanning and integration with corporate applications.
FairNow Inc. also tracks all relevant laws and regulations. The FairNow platform supports a governance workflow that inventories existing uses of AI, assesses relevant risks, and coordinates human governance activities like approvals.
FairNow also supports compliance with industry standards as well as configurable internal policies. The platform generates risk scores and recommendations for compliance teams to implement.
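The inventory-and-score workflow is easy to picture in code. This toy sketch – the fields and weights are illustrative assumptions, not FairNow's schema – records AI use cases and surfaces the high-risk ones for human review:

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str
    handles_pii: bool
    affects_hiring: bool
    model_type: str                      # e.g. "gen_ai" or "ml"
    approvals: list[str] = field(default_factory=list)

    def risk_score(self) -> int:
        """Crude additive score; real platforms weigh far more factors."""
        score = 1
        if self.handles_pii:
            score += 3
        if self.affects_hiring:
            score += 4                   # bias-sensitive use case
        if self.model_type == "gen_ai":
            score += 2
        return score

inventory = [
    AIUseCase("resume screener", handles_pii=True,
              affects_hiring=True, model_type="gen_ai"),
    AIUseCase("invoice categorizer", handles_pii=False,
              affects_hiring=False, model_type="ml"),
]
needs_review = [u.name for u in inventory if u.risk_score() >= 7]
```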
Also taking a regulation-first approach is Modulos AG, which focuses primarily on European regulations as well as the National Institute of Standards and Technology risk management framework in the U.S.
The Modulos platform enables businesses to implement responsible AI governance policies while streamlining compliance with changing AI-centric regulations. It is also one of the first AI governance vendors to offer agentic AI governance (see my previous article on AI agents).
Rounding out the list of regulation-first vendors is Credo AI Corp., which has amassed intelligence on potential AI risk factors by partnering with regulatory agencies, open-source projects and standards bodies such as NIST.
Employee-first approach
While regulatory compliance is mandatory, ensuring employees use gen AI properly is also a top priority for organizations seeking to leverage the technology. To this end, several vendors are implementing “guardrails” to ensure the proper use of gen AI by employees.
One vendor offering governance of gen AI consumption is Portal26 Inc. The Portal26 platform provides visibility into unauthorized use of gen AI in organizations, aka “shadow AI.”
It also helps organizations manage data security and associated risks, recognizing various kinds of sensitive data in gen AI prompts, including PII, application programming interface keys and contextually sensitive information such as references to drug use. In addition, it provides auditability and forensics for employee use of gen AI, as well as analytics on the value gen AI delivers.
Portal26’s analysis of employee prompts can take several seconds, which means it can only run out of band, so as not to slow down gen AI queries. This approach contrasts with WitnessAI Inc., which runs as a proxy, intercepting every gen AI prompt query as well as each response in a fraction of a second.
As with Portal26, WitnessAI addresses shadow AI concerns and can uncover the intention behind prompts. The WitnessAI platform can redact sensitive information from prompts, and can reroute queries from public LLMs to private, internal models – or block queries altogether as a matter of policy.
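In code, an inline guardrail of this kind boils down to a redact-then-route pipeline. The sketch below is a deliberately simplified illustration – real products use far richer classifiers than these regexes, and none of the names here come from WitnessAI – of how a proxy might scrub a prompt and decide where to send it:

```python
import re

# Toy detectors; production guardrails use trained classifiers.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace recognized sensitive spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

def route(prompt: str, policy: dict) -> tuple[str, str]:
    """Return a destination (public LLM, internal model, or block)
    plus the cleaned prompt, per policy."""
    cleaned = redact(prompt)
    if policy.get("block_on_pii") and cleaned != prompt:
        return "BLOCK", cleaned
    if policy.get("force_internal") and cleaned != prompt:
        return "INTERNAL_LLM", cleaned
    return "PUBLIC_LLM", cleaned

decision, safe_prompt = route(
    "Draft a letter for jane@example.com, SSN 123-45-6789",
    {"force_internal": True},
)  # -> ("INTERNAL_LLM", "Draft a letter for [EMAIL REDACTED], ...")
```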
The SurePath AI Inc. platform resembles WitnessAI in that it provides governance guardrails for both public and private LLMs. It can also enforce redaction of both queries and responses, and classify the intent behind each employee query.
Where SurePath stands out is its ability to enrich queries with enterprise data the employee is entitled to access under corporate policy, ensuring that responses reflect the relevant business context.
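A stripped-down version of that enrichment step might look like the following sketch, where the corpus, roles and policy check are illustrative assumptions rather than SurePath's design:

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    allowed_roles: set[str]          # roles permitted to see this text

CORPUS = [
    Document("Q3 pipeline: 42 open deals worth $8.1M...", {"sales", "exec"}),
    Document("Payroll banding tables...", {"hr"}),
]

def enrich_query(query: str, user_roles: set[str]) -> str:
    """Prepend only the enterprise context this user is entitled
    to see under policy."""
    visible = [d.text for d in CORPUS if d.allowed_roles & user_roles]
    context = "\n".join(visible) if visible else "(no accessible context)"
    return f"Context:\n{context}\n\nQuestion: {query}"

# A sales rep gets pipeline context; the payroll document stays hidden.
prompt = enrich_query("How is our Q3 pipeline trending?", {"sales"})
```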
Extending tooling for AI governance purposes
While regulation-first and employee-first are the primary approaches to gen AI governance, other vendors are extending existing product categories into the AI governance space.
For example, Private AI Inc. focuses on data loss prevention or DLP and can provide PII redaction for LLMs as well as compliance with data protection regulations. Private AI differentiates itself from other AI governance tools via its multimodal support, including image and voice recognition.
For example, it can identify spoken credit card or Social Security numbers on customer service calls, even when the speaker adds "umms" or repeats digits. Private AI can also detect logos and other image components that may qualify as sensitive information.
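As a rough illustration of why transcript normalization matters – and emphatically not Private AI's implementation – the following sketch reassembles a card number read aloud around filler words:

```python
import re

SPOKEN_DIGITS = {"zero": "0", "oh": "0", "one": "1", "two": "2",
                 "three": "3", "four": "4", "five": "5", "six": "6",
                 "seven": "7", "eight": "8", "nine": "9"}
FILLERS = {"um", "umm", "uh", "er", "sorry"}

def digits_from_transcript(transcript: str) -> str:
    """Collapse a spoken transcript into a bare digit string.
    Skipping every non-digit token errs toward over-flagging,
    which is usually the right bias for a governance tool."""
    out = []
    for token in re.findall(r"[a-z0-9]+", transcript.lower()):
        if token in FILLERS:
            continue
        if token.isdigit():
            out.append(token)
        elif token in SPOKEN_DIGITS:
            out.append(SPOKEN_DIGITS[token])
    return "".join(out)

def possible_card_number(transcript: str) -> bool:
    """Treat any run of 15-16 digits as a potential card number."""
    return re.search(r"\d{15,16}",
                     digits_from_transcript(transcript)) is not None

possible_card_number(
    "it's four one one one umm one one one one "
    "one one one one one one one one")  # -> True
```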
Gen AI governance is also adjacent to the burgeoning gen AI security market. One vendor offering solutions at the overlap of these two markets is Enkrypt AI Inc., which offers a gateway that secures access to gen AI models and applications, providing an inventory and configurable guardrails for those assets.
Enkrypt’s goal is to deliver an end-to-end gen AI governance and compliance solution, giving its customers the ability to manage compliance with specific regulations, thus enabling them to mitigate the risks associated with gen AI.
Governance and the shifting sands of gen AI
Traditional governance, risk and compliance tools work within a relatively static framework. Gen AI governance, in contrast, is remarkably dynamic.
Though regulations are always subject to change, the sheer volume and rapid evolution of AI-related regulations put them beyond the scope of most GRC solutions. Gen AI's fundamentally open nature – the fact that anybody can create whatever prompts they like – also complicates the governance challenge.
Traditional firewalls, as well as the alphabet soup of other related products – including web application firewalls or WAFs, cloud security posture management or CSPM solutions, and DLP – all fall short.
There’s no question, therefore, that gen AI governance is here to stay – even though just what someone means by “gen AI governance” can vary depending upon which risks are getting the most attention.
Jason Bloomberg is founder and managing director of Intellyx, which advises business leaders and technology vendors on their digital transformation strategies. He wrote this article for News. None of the organizations mentioned in this article is an Intellyx customer.