Every few years, the security industry declares a new #1 threat. Ransomware. Supply chain attacks. Unpatched software vulnerabilities. The headlines rotate. The conference keynotes shift. The budgets follow.
And yet, year after year, email sits at the top of every serious threat report. Email functions as an identity layer of the internet: the reset mechanism for accounts, the gateway to financial and banking systems, and a major (often default) trust channel for both individuals and enterprises. When attackers compromise email, they do not just gain access; they inherit trust. That makes every downstream system, no matter how well secured, implicitly vulnerable.
Generative AI has made this significantly worse, and not in incremental ways. The industry is responding, but the defensive tools have not kept pace with the rate at which the threat is evolving.
The scale of the problem is not abstract. Between October 2013 and December 2023, Business Email Compromise alone accounted for $55.5 billion in reported losses. Roughly 9 out of 10 cyberattacks still begin with a phishing email. At the same time, the attack surface continues to expand: over 376 billion emails are sent every day, with an estimated 3.4 billion phishing messages embedded within that flow.
Why attackers keep choosing email
The answer is not that email is technically weak, though parts of it are. The answer is that email is universally trusted infrastructure with an enormous, consistently accessible attack surface and exploitation patterns that have only grown more sophisticated over time.
Consider what email gives an attacker that almost no other channel does: a universal, always-open path to nearly every person and organization, control of the reset mechanism for most online accounts, and a default assumption of trust that no other messaging medium carries.
What AI changed, and it is not what most people think
The common narrative is that AI made phishing emails better-written. That is true, but it is the least important change. Grammar checkers could already fix most of the linguistic tells that used to identify phishing. The real shift is more structural.
One data point captures the scale of it: the volume of phishing emails surged by an estimated 1,265% in the two years following ChatGPT’s release in November 2022. The tools did not just improve existing attacks; they collapsed the cost and skill floor for launching them.
What legacy defenses get wrong
The dominant model for email security for the past two decades has been layered filtering: blocklists, keyword rules, spam scores, sandbox detonation of attachments. These layers still have value. But they share a common limitation: they are reactive and signature-based.
A signature-based system can only catch what it has already seen. In a threat environment where attacks are generated fresh at scale by AI, that approach will remain structurally behind.
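The limitation is easy to see in miniature. The sketch below uses hypothetical phrase signatures (not real filter rules) to show how exact matching catches a previously seen lure while a freshly generated variant with the same intent slips through:

```python
# Minimal sketch of why signature matching fails against generated variants.
# The signature list and both messages are hypothetical illustrations.

KNOWN_PHRASES = {
    "your account has been suspended, click here",
    "verify your password immediately",
}

def signature_match(message: str) -> bool:
    """Flag a message only if it contains a previously seen phrase."""
    text = message.lower()
    return any(phrase in text for phrase in KNOWN_PHRASES)

seen_before = "URGENT: Your account has been suspended, click here to restore it."
fresh_variant = "We noticed unusual activity; please confirm your credentials today."

print(signature_match(seen_before))    # True: matches a known signature
print(signature_match(fresh_variant))  # False: same intent, novel wording
```

An AI can produce the second message in unlimited variations faster than any signature list can be updated, which is the structural gap the paragraph above describes.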
Email security has also been misframed as primarily a user education problem. Security awareness training tells people to be suspicious of unexpected requests, to verify sender identities, and to avoid clicking links. These are reasonable habits. But they place the burden of defense on the party least equipped to act as a reliable security control: the end user, facing adversaries who engineer their attacks specifically to defeat those habits. This is not a sustainable security posture.
What actually works
The defenses that hold up against AI-powered email attacks share a common trait: they operate at the infrastructure layer, before the email reaches a human decision-maker. The question shifts from “can this user recognize a phishing email?” to “can this system detect anomalous behavior at the sending level before delivery?”
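As a rough illustration of what "detecting anomalous behavior at the sending level" can mean, the sketch below scores a message at delivery time from sending-side signals. The field names, weights, and thresholds are illustrative assumptions, not any vendor's schema:

```python
# Hedged sketch: scoring a message before delivery using sending-level
# signals, so no human ever has to judge it. Weights and thresholds are
# illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class DeliveryContext:
    spf_pass: bool          # envelope sender authorized by the domain's SPF record
    dkim_aligned: bool      # DKIM signature domain aligns with the From: domain
    domain_age_days: int    # how long the sending domain has existed
    volume_spike: float     # today's volume from this sender / its 30-day average

def anomaly_score(ctx: DeliveryContext) -> float:
    """Higher score = more anomalous sending behavior."""
    score = 0.0
    if not ctx.spf_pass:
        score += 0.4
    if not ctx.dkim_aligned:
        score += 0.3
    if ctx.domain_age_days < 30:   # freshly registered domains are a classic signal
        score += 0.2
    if ctx.volume_spike > 10.0:    # sudden burst relative to the sender's baseline
        score += 0.1
    return score

def should_quarantine(ctx: DeliveryContext, threshold: float = 0.5) -> bool:
    return anomaly_score(ctx) >= threshold

suspicious = DeliveryContext(spf_pass=False, dkim_aligned=False,
                             domain_age_days=3, volume_spike=40.0)
print(should_quarantine(suspicious))  # True: held before reaching a user
```

Note that none of these signals depend on the message text, which is exactly the point: generated prose can be endlessly varied, but sending behavior is harder to fake.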
The policy dimension that technical defenses cannot solve alone
Technical defenses can only go so far when the underlying infrastructure has not caught up with the threat. The protocols, the incentives, and the governance structures all lag behind. Email security is not purely a technology problem. It is increasingly a policy problem.
What a real defense posture looks like
The organizations holding the line against AI-powered email attacks do not share a single tool. They share a posture: security built into the infrastructure from the beginning, defenses operating at the behavioral and transport layers before any human decision is involved, and authentication enforced rather than merely supported.
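In practice, "authentication enforced rather than merely supported" usually comes down to the DMARC policy a domain publishes. The difference can be a single DNS tag; the records below are an illustrative example (example.com and the report address are placeholders):

```
; Supported but not enforced: failures are reported, yet still delivered
_dmarc.example.com.  IN TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"

; Enforced: messages failing SPF/DKIM alignment are rejected outright
_dmarc.example.com.  IN TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

Many organizations deploy the first record to collect reports and never graduate to the second, which leaves their domain fully spoofable despite "having DMARC."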
The shift is as much cultural as technical. Email security has long been treated as a cost center. In an environment where a single successful attack can compromise a hospital network, disable a financial institution, or expose classified government communications, that framing no longer holds.
Email security infrastructure is national security infrastructure. How organizations staff it, fund it, and regulate it needs to reflect that.
The uncomfortable truth
Attackers keep choosing email because it keeps working. Not because defenders are incompetent; many are extraordinarily capable. But the structural conditions that make email a high-value attack channel have not meaningfully changed in thirty years.
AI did not create this problem. It accelerated it to the point where the old equilibrium no longer holds. Defenders could once keep approximate pace through reactive filtering and user training. That is no longer the case. The only way to close the gap is to move the defense upstream: to the infrastructure, to the protocols, to the policy frameworks that govern how email works.
Attackers are not winning because email is broken. They are winning because the system still treats trust as implicit. Until that changes, at the protocol, infrastructure, and policy level, email will remain the most reliable way to compromise everything built on top of it.
