Identity theft is one of the most dangerous forms of fraud in the digital world, since impersonation and account takeover open the door to all kinds of cyberattacks. The capabilities of AI, and process automation in particular, can inadvertently facilitate this type of identity theft, as we will see in this special.
Many technology specialists have warned of the ‘dark side of AI’ in cybersecurity, with serious warnings about the impact that artificial intelligence tools are having on global security: intensifying the most dangerous threats, such as ransomware and phishing, and more generally assisting in the creation of malware.
As you already know, automations powered by artificial intelligence are transforming the business landscape. They offer businesses the ability to connect with, guide and serve customers more efficiently, resulting in streamlined processes and lower maintenance costs.
However, AI-powered automation also has a critical weak point: security. The same capabilities that improve legitimate operations can be exploited by cybercriminals seeking to steal identities. The rise of low-cost AI and its use in these automations has allowed fraudsters to expand their networks and increase their effectiveness, leading to a drastic increase in identity theft scenarios.
Identity theft through phishing attacks
AI capabilities are increasingly used in today’s business world for process automation. For example, AI can automate data collection and analysis to improve marketing initiatives. Criminals carrying out identity theft schemes can use the same type of business processes to collect and analyze data on potential targets.
With automated phishing, for example, AI can crawl the web for details about targets and then use them to construct more credible phishing messages. The content of these messages has a higher degree of relevance and authenticity, which makes them more effective.
AI automation also allows criminals to identify targets and prepare phishing messages faster, meaning they can launch more attacks. It even gives them the ability to strike a target in real time, when an event occurs that increases the target’s vulnerability.
For example, after a natural disaster, as we recently saw with the DANA storm in Valencia, criminals could use AI automations to launch targeted attacks on the affected communities with fraudulent offers of help or similar schemes.
AI can also automate the learning process needed to increase the effectiveness of phishing and other attacks. It can analyze data about past attacks, determine which are the most effective, and refine them to follow the path of least resistance.
Deepfakes to support identity theft
Identity theft is often successful when a cybercriminal impersonates someone the victim trusts. The plan may involve posing as a representative of a financial institution, a law enforcement officer, or a loved one. In each case, the victim shares personal information once a basis of trust is established.
Artificial intelligence provides criminals with powerful tools to assume a false identity and gain the victim’s trust. By powering deepfake creations, AI enables criminals to produce more realistic audio or video content to impersonate trusted people. AI can also drive chatbot interactions, such as text message exchanges, that convincingly mimic the communication patterns of a trusted person.
The explosion of consumer-level AI assistants has enabled the arrival of malicious developments as dangerous as WormGPT, a ChatGPT-style chatbot designed specifically to make it easier for cybercriminals to carry out their “tasks.” Criminals have also begun to distribute AI-generated voice deepfakes to gain access to financial accounts.
Instead of using artificial intelligence to trick people into handing over passwords, voice deepfakes target organizations such as financial institutions directly, posing as customers to gain access to accounts and speed up the process. The large amount of audio and video content that users post on social media channels is what makes these AI-backed scams possible.
Voice spoofing also allows criminals to expand the reach of their identity theft operations, as they are no longer limited to running schemes in regions where they understand the language. AI can translate their scripts into different languages and use natural language processing to understand targets’ responses.
Using automation to maximize fraud
AI also comes into play when fraudsters obtain personal data through an identity theft scheme. With AI and automation, criminals can act faster to use personal information once they obtain it.
If the stolen information includes Social Security numbers, criminals can use it to quickly apply for multiple credit cards. If it includes credit card numbers, they can rapidly drain financial accounts. Reports show that identity fraud cost a whopping $43 billion in 2023 in the United States alone; account takeovers accounted for almost $13 billion of that, and new account fraud for more than $5 billion.
How to prevent identity theft?
Personal attention and prudence, along with the adoption of security best practices, are essential against identity theft and the account hijacking that has become a real scourge. Techniques such as password managers, two-factor authentication and periodic audits of the services you use can help raise the security threshold that protects identities, as the sketch below illustrates.
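To make the two-factor idea concrete, here is a minimal sketch of the time-based one-time password (TOTP) algorithm defined in RFC 6238, the same scheme that authenticator apps implement. It is illustrative only: the secret shown is a placeholder, and a real deployment would rely on a vetted library and secure secret storage.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of `interval`-second steps since the epoch.
    counter = int(time.time()) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset given by the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    # Placeholder secret; a real one is generated at enrollment and kept private.
    print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code changes every 30 seconds and is derived from a secret that never travels with the password, a phished password alone is not enough to take over the account.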
However, personal vigilance alone is not enough to repel attacks effectively. Success requires comprehensive strategies that also involve the companies that hold confidential data. People need to be better informed about identity theft, and companies must invest in stronger systems to keep data secure; a simple example of one such control follows.
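One concrete, low-cost control on the company side is screening passwords against known breach corpora. The sketch below uses the public Have I Been Pwned range API with its k-anonymity design, so only the first five characters of the password’s SHA-1 hash ever leave the machine; the sample password is deliberately weak and hypothetical.

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times `password` appears in known breaches,
    via the Have I Been Pwned k-anonymity range API."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "breach-check-example"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    # Each response line has the form "HASH_SUFFIX:COUNT".
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # Deliberately weak, hypothetical example; never hard-code real credentials.
    print(pwned_count("password123"))  # prints a large breach count
```

Rejecting any password that returns a non-zero count at signup or reset time closes off the credential-stuffing attacks that feed many identity theft schemes.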
Greater cooperation between regulators, law enforcement and technology companies could also help prevent identity theft. Currently there is a gap that cybercriminals can exploit, launching new tools faster than the defensive measures meant to stop them. Holding companies accountable when data breaches result from poor security is another measure under debate.
All of these measures will be welcome, because AI’s capabilities keep growing and its use has drastically increased the volume of identity fraud cases: experts suggest that a new case occurs every 22 seconds, which works out to roughly 1.4 million cases a year. AI has also managed to increase the “quality” of attacks and make them more difficult to detect.