Artificial intelligence has spread into so many areas that it should come as no surprise that it is also put to malicious use. One of these ‘dark sides of AI’ lies in cybersecurity. Of particular concern is the role of AI in cyberattacks, which enables increasingly advanced and cheap malicious campaigns, extending them to groups that until now lacked the technical capacity.
The latest alert about the use of AI in cyberattacks comes from Microsoft Threat Intelligence, the security division of the software giant. Just as AI is used to speed up coding, writing, design and many other tasks, cybercriminals also use it to simplify their development work and increase their reach and danger.
AI in cyberattacks: growing potential
Microsoft explains that “threat actors have incorporated automation into their strategies, as reliable and cost-effective AI-based services reduce technical barriers and integrate capabilities directly into their workflows. These capabilities reduce friction in reconnaissance, social engineering, malware development, and post-breach activity, allowing cybercriminals to act faster and refine their operations.”
Microsoft gives the example of ‘Jasper Sleet’, a threat actor that leverages AI throughout the attack lifecycle to recruit staff, maintain its position, and abuse access on a large scale. The use of AI is so widespread that Microsoft says it is seen at all levels of cyberattacks:
“As malware authors integrate AI into their operations, they are not limited to intended or policy-compliant uses of these systems. Microsoft Threat Intelligence has observed that threat actors are actively experimenting with techniques to bypass or unlock AI security controls and obtain results that would otherwise be restricted. These strategies include reframing prompts, chaining instructions across multiple interactions, and misusing system or developer prompts to force models to generate malicious content.”
An example of this is the use of role-based jailbreak techniques to bypass AI security controls. In these scenarios, attackers prompt models to assume trusted roles, or assert that the threat actor operates in such a role, establishing a shared context of legitimacy.
AI is also used to support cyberattack infrastructure, such as the automatic generation of websites and domains:
“Cybercriminals have leveraged generative adversarial network (GAN)-based techniques to automate the creation of domain names that closely resemble legitimate brands and services. By training models with large data sets of real domains, the generator learns common structural and lexical patterns, while a discriminator evaluates whether the results appear authentic. Through iterative refinement, this process produces convincing imitation domains, increasingly difficult to distinguish from legitimate infrastructure using static or pattern-based detection methods. This enables rapid creation and rotation of large-scale phishing domains, facilitating phishing, C2, and credential harvesting operations.”
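To make concrete what a “static or pattern-based detection method” looks like — the kind of defence these GAN-generated domains are built to evade — here is a minimal sketch of a typosquat check based on edit distance. The brand list, threshold, and sample domains are illustrative assumptions, not drawn from Microsoft's report.

```python
# Minimal sketch of a static, pattern-based look-alike domain check.
# Brand list, threshold, and sample domains are illustrative assumptions.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

KNOWN_BRANDS = ["microsoft", "paypal", "google"]  # illustrative list

def looks_suspicious(domain: str, max_distance: int = 2) -> bool:
    """Flag a domain whose first label is within a small edit distance
    of a known brand name, but is not the brand itself."""
    label = domain.split(".")[0].lower()
    return any(
        0 < edit_distance(label, brand) <= max_distance
        for brand in KNOWN_BRANDS
    )

print(looks_suspicious("micros0ft.com"))   # typosquat: True
print(looks_suspicious("microsoft.com"))   # exact brand: False
```

A rule this simple catches hand-made typosquats like “micros0ft”, which is exactly why GAN-generated domains that mimic the broader structural and lexical patterns of legitimate domains, rather than any single brand string, are so much harder to flag.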
Microsoft also warns of emerging trends such as AI-enabled malware that “integrates or invokes models during execution instead of using AI only during development”. The Microsoft division also offers advice for mitigating these types of attacks; they speak of mitigation because eliminating them seems impossible. As the capabilities of artificial intelligence (especially generative AI) grow, so does its potential to create and distribute malware.
