AI-generated deepfakes have ceased to be a theoretical threat and have become a tool exploited in the real world, one that undermines digital trust, exposes companies to new risks and fuels the business of cybercriminals, according to a new Trend Micro report.
The convergence of artificial intelligence and cybersecurity represents one of the most profound strategic challenges facing the technology industry. As generative tools become more powerful, affordable and accessible, cybercriminals are increasingly adopting them to support all kinds of attacks, from commercial fraud to extortion and identity theft, explains David Sancho, senior threat researcher at Trend Micro:
«AI-generated media are not just a future risk; they are a real threat to companies today. We are witnessing cases of executive impersonation, compromised hiring processes and financial safeguards being evaded with alarming ease. This research is a wake-up call: if companies are not proactively preparing for the deepfake era, they are already behind. In a world where seeing is no longer believing, digital trust must be rebuilt from scratch».
AI deepfakes are also big business
The research reveals that threat actors no longer need underground expertise to launch convincing attacks. Instead, they use commercially available video, audio and image generation tools (many of which are marketed to content creators) to produce realistic deepfakes that deceive both individuals and organizations. These tools are cheap, easy to use and increasingly capable of evading identity verification systems and security controls.
There is also an expanding cybercriminal ecosystem in which these platforms are used to run convincing scams. As a result, so-called ‘CEO fraud’ is increasingly difficult to detect, since attackers use deepfake audio or video to impersonate senior executives in real-time meetings.
Recruitment processes are also being compromised by fake candidates who use AI to pass interviews and gain unauthorized access to internal systems. In addition, financial services firms are seeing a rise in deepfake attempts to bypass KYC (Know Your Customer) verification, enabling anonymous money laundering with counterfeit credentials.
Trend Micro urges companies to take proactive measures to minimize their risk exposure and protect their employees and processes. These measures include training staff on social engineering risks, reviewing authentication workflows and exploring synthetic media detection solutions.
Full report | Trend Micro