Incidents related to generative AI, whether as a tool or as the target of an attack, are growing worldwide, according to the consultancy Gartner. They are also rising across all their forms: 62% of companies experienced an attack that used deepfakes in the last 12 months, 32% faced an attack on AI applications that exploited their prompts over the past year, and another 29% experienced attacks on the infrastructure used by generative AI applications.
Assistants that use chatbots are vulnerable to various adversarial prompt techniques, such as crafting prompts that manipulate large language models or multimodal models into generating malicious or biased responses.
The deepfake attacks experienced by companies also involved social engineering techniques or the exploitation of automated processes. 30% experienced at least one attack that used a deepfake video to bypass biometric identity verification systems, and another 32% suffered audio deepfake attacks likewise aimed at deceiving voice-based biometric systems.
36% of respondents acknowledged having suffered a social engineering attack involving a deepfake during a video call with an employee, and 44% during an audio call.
These figures come from a study conducted by the consultancy between March and May among 302 cybersecurity leaders working in North America, EMEA and Asia-Pacific. The study also indicates that 67% of cybersecurity leaders are aware that the risks of generative AI demand notable changes to current cybersecurity approaches.
According to Akif Khan, VP Analyst at Gartner: «As its adoption accelerates, attacks that leverage generative AI for phishing, deepfakes and social engineering have become mainstream, while other threats, such as attacks on the infrastructure of generative AI applications and prompt-based manipulations, are in an emerging phase and gaining traction».
Regarding the strategy changes needed in response to the rise in attacks related to generative AI, and the risks this technology involves, Khan stressed that «rather than making radical changes or isolated investments, organizations should reinforce their core controls and implement measures aimed at each new risk category».