Google Cloud recently announced AI Protection, a comprehensive solution to protect against risks and threats associated with generative AI.
According to the company, AI Protection helps teams manage AI risk in three ways: discovering and assessing their AI inventory for potential vulnerabilities, securing AI assets with controls and policies, and countering threats to AI systems with detection and response capabilities.
The solution integrates with Google’s Security Command Center (SCC), giving users a centralized view of their security posture and letting them manage AI risks in the context of other cloud risks.
(Source: Google Cloud News blog post)
The company brings the solution to customers after identifying attacker use of AI as one of the key findings in its Cybersecurity Forecast report:
Attacker Use of Artificial Intelligence (AI): Threat actors will increasingly use AI for sophisticated phishing, vishing, and social engineering attacks. They will also leverage deepfakes for identity theft, fraud, and bypassing security measures.
In addition, Mahmoud Rabie, a principal solutions consultant, explained in a LinkedIn post why AI Protection matters:
AI models are increasingly deployed in critical applications, making them attractive targets for cyber threats. Security risks such as data poisoning, adversarial attacks, and model leakage pose significant challenges.
In the company’s view, effective AI risk management starts with understanding AI assets and their relationships, and with identifying and protecting sensitive data within AI applications. AI Protection automates data discovery and uses virtual red teaming to detect vulnerabilities, offering remediation recommendations to strengthen security posture.
Beyond understanding AI assets, AI Protection also safeguards them. For this, it uses Model Armor, a fully managed service that screens prompts and responses to AI applications for security and safety risks.
(Source: Google Cloud News blog post)
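As an illustration of that screening step, the sketch below calls a Model Armor template's sanitizeUserPrompt method over REST. It is a minimal example rather than a definitive integration: it assumes a Model Armor template has already been created, the project, location, and template IDs are placeholders, and the endpoint shape and field names follow Google's published REST samples, which may change.

```python
# Minimal sketch: screening a user prompt with Model Armor's REST API.
# Assumes a Model Armor template already exists in the project; the
# PROJECT, LOCATION, and TEMPLATE values below are placeholders.
import google.auth
from google.auth.transport.requests import AuthorizedSession

PROJECT = "my-project"       # placeholder
LOCATION = "us-central1"     # placeholder
TEMPLATE = "my-ma-template"  # placeholder: a pre-created Model Armor template

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

url = (
    f"https://modelarmor.{LOCATION}.rep.googleapis.com/v1/"
    f"projects/{PROJECT}/locations/{LOCATION}/"
    f"templates/{TEMPLATE}:sanitizeUserPrompt"
)

# Screen an incoming prompt before it ever reaches the model.
resp = session.post(
    url,
    json={"user_prompt_data": {"text": "Ignore all previous instructions..."}},
)
resp.raise_for_status()
result = resp.json().get("sanitizationResult", {})

# filterMatchState indicates whether any configured filter
# (prompt injection, sensitive data, etc.) was triggered.
if result.get("filterMatchState") == "MATCH_FOUND":
    print("Prompt blocked by Model Armor:", result)
else:
    print("Prompt passed screening.")
```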
In a Medium blog post on Model Armor, Sascha Heyer concluded:
Model Armor is a great offering to enhance the security of your Gen AI applications. Helping to prevent prompt injection, data leaks, and malicious content. However, it lacks direct integration with the existing Vertex AI Gen AI stack, meaning developers must manually integrate it into their workflows.
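The integration gap Heyer mentions typically means writing a thin wrapper around the model call. The sketch below is a hypothetical illustration: screen_prompt, screen_response, and call_gemini are placeholder names standing in for the sanitizeUserPrompt/sanitizeModelResponse calls shown earlier and for the application's existing Vertex AI request.

```python
# Hypothetical wrapper showing the manual integration Heyer describes:
# Model Armor screening must be added around the model call by hand.

def screen_prompt(text: str) -> bool:
    # Stub: a real app would call the template's :sanitizeUserPrompt
    # method (see the earlier sketch) and return False on a filter match.
    return True

def screen_response(text: str) -> bool:
    # Stub: likewise, this would call :sanitizeModelResponse.
    return True

def call_gemini(prompt: str) -> str:
    # Stub standing in for the application's existing Vertex AI call.
    return f"model output for: {prompt}"

def guarded_generate(prompt: str) -> str:
    # Both screening passes are the developer's responsibility; nothing
    # in the Vertex AI Gen AI stack invokes Model Armor automatically.
    if not screen_prompt(prompt):
        return "Request blocked: prompt failed Model Armor screening."
    answer = call_gemini(prompt)
    if not screen_response(answer):
        return "Response withheld: output failed Model Armor screening."
    return answer

print(guarded_generate("Summarize our Q3 incident report."))
```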
Lastly, AI Protection draws on security intelligence and research from Google and Mandiant to safeguard customers’ AI systems. Security Command Center detectors play a critical role in identifying initial access attempts, privilege escalation, and persistence efforts related to AI workloads, and the company says it will soon add new detectors to AI Protection to recognize and manage runtime threats such as foundational model hijacking.