Artificial intelligence (AI) programs can automate tedious activities, speed up research, and streamline communication, but they are only as good as the prompts and intent behind them. While you’re busy figuring out tricks to make ChatGPT more efficient, hackers are using AI to steal passwords and banking information.
Recently, the Google Threat Intelligence Group (GTIG) published a blog post detailing how malicious actors are abusing various AI programs, including Google’s own Gemini, to target individuals and then make off with crucial information or trick victims into handing it over. According to GTIG, AI is being used for intellectual property theft, surveillance, and the creation of new types of malware, leading the group to compile a list of “threat actors” that attempted to use Gemini for malicious ends. Google stepped in and stopped these individuals, but still wanted to paint a picture of what they were doing, as their efforts could very well shape the future of cybersecurity.
Read more: ChatGPT has a built-in ‘hack’ that makes your prompts so much better
AI allows hackers to quickly find targets and adapt their approach
Artist’s rendering of a robotic hand enveloping a human hand – Evgeniyshkolenko/Getty Images
One of AI’s most terrifying powers is its ability to quickly scan the Internet for any information specified in a prompt. If an AI can speed up the search for parts to build a gaming PC, it can just as easily build lists of victims for hackers to use in future attacks. According to GTIG, AIs can quickly profile potential targets and tell bad actors everything from a target’s industry to their role and where they sit in an organization. This gives hackers a plan of attack faster than old-fashioned reconnaissance and can suggest possibilities they wouldn’t normally consider. One example was the hacker “UNC6418,” who used Gemini to gather sensitive information about members of the Ukrainian defense sector for a phishing attempt.
Another way AI can be abused is to make scam messages sound more convincing. After an AI draws up a list of potential targets, malicious actors can use the programs to generate content for phishing attacks. You can normally tell a phishing attempt from a legitimate email by telltale signs like grammar and spelling mistakes, but AIs create phishing emails that look far more legitimate. Even worse, according to GTIG, AI programs can mimic human communication while talking to targets, building a level of trust with their potential victims.
The hacker “UNC2970” (who had ties to the North Korean government) used AIs to pose as recruiters and target cybersecurity experts. One phishing kit that GTIG uncovered was COINBAIT, which was used to phish login credentials from cryptocurrency investors. According to the organization, COINBAIT was built on the public Lovable AI app. Imagine what could have happened if the hackers had used a more powerful API.
AI is used to code malware
Artist’s depiction of a computer virus attacking a system – Mediaphotos/Getty Images
AIs have numerous coding tools designed to make programming easier, and while it’s not normally possible to get these products to produce malicious code, hackers have discovered a loophole. According to GTIG, users can trick AI software by abusing “agentic AI capabilities”: fully autonomous AI systems that can carry out complex, multi-step tasks with minimal human interaction. Take the threat actor “UNC795,” who was caught trying to get Gemini to produce “an AI-integrated code audit capability.” It’s not clear what their end goal was, but the effort points to an interest in more autonomous, multi-step tools. And this is just one example of the many programmers trying to use Gemini for malicious ends.
Many of the examples in GTIG’s report amount to proofs of concept. They have not led to significant cyberattacks, but they have nevertheless produced what the organization calls “new capabilities in malware families.”
Take HONESTCUE as an example. This malware, discovered by GTIG, acted as a backdoor Trojan built around a “multi-layered approach to obfuscation.” The malware’s secret sauce was how it functioned: once downloaded, HONESTCUE would use Gemini to receive malicious code and fetch another piece of malware, all without leaving a trace of activity or payloads on the hard drive. Although HONESTCUE has not been linked to cyberterrorism, GTIG’s analysis indicates the program was developed by amateur coders, which raises the important question of what a skilled hacker could do with the Gemini API.
Read the original article on BGR.
