On May 19, 2023, a photograph appeared on what was then still called Twitter showing smoke billowing from the Pentagon after an apparent explosion. The image quickly went viral. Within minutes, the S&P 500 dropped sharply, wiping out billions of dollars in market value. Then the truth emerged: the image was a fake, generated by AI.
The markets recovered as quickly as they had tumbled, but the event marked an important turning point: this was the first time that the stock market had been directly affected by a deepfake. It is highly unlikely to be the last. Once a fringe curiosity, the deepfake economy has grown into a $7.5 billion market, with some forecasts projecting that it will reach $38.5 billion by 2032.
Deepfakes are now everywhere, and the stock market is not the only part of the economy that is vulnerable to their impact. Those responsible for creating deepfakes are also targeting individual businesses, sometimes with the goal of extracting money and sometimes simply to cause damage. In a Deloitte poll published in 2024, one in four executives reported that their companies had been hit by deepfake incidents targeting financial and accounting data.
Lawmakers are beginning to take notice of this growing threat. On October 13, 2025, California Governor Gavin Newsom signed an expansion of the California AI Transparency Act into law. As originally passed in 2024, the Act required large “frontier providers”—companies like OpenAI, Anthropic, Microsoft, Google, and X—to implement tools that make it easier for users to identify AI-generated content. The expansion extends this requirement to “large online platforms”—which essentially means social media platforms—and to manufacturers of devices that capture content.
Such legislation is important, necessary, and long overdue. But it is far from being enough. The potential business impact of deepfakes extends well beyond what any single piece of legislation can cover. If business leaders are to address these impacts, they must be alert to the danger, understand it, and take steps to limit the risks to their organizations.
How deepfakes threaten business
Here are three important and interrelated ways in which deepfakes can damage businesses:
1. Direct Attacks
The primary vector for direct attacks is targeted impersonations that are designed to extract money or information. Attacks of this kind can cost even sophisticated operators millions of dollars. For instance, UK engineering giant Arup lost HK$200 million (about $25 million) last year after scammers used AI-generated clones of senior executives to order money transfers. The Hong Kong police, who described the theft as one of the world’s largest deepfake scams, confirmed that fake voices and images were used in videoconferencing software to deceive an employee into making 15 transfers to multiple bank accounts outside the business.
A few months later, WPP, the world’s largest advertising company, faced a similar threat when fraudsters cloned the voice and likeness of CEO Mark Read and tried to solicit money and sensitive information from colleagues. The attempt failed, but the company confirmed that a convincing deepfake of its leader was used in the scam.
The ability to create digital stand-ins that can speak and act convincingly is still in its infancy, yet the capabilities already available to fraudsters are extremely powerful. Soon, in most cases, it will be impossible for humans to tell from audio or visual cues alone that they are interacting with a deepfake.
2. Rising Costs of Verification
Even organizations that are never directly targeted still end up paying for the fallout. Every deepfake that circulates—whether it’s a fake CEO, a fabricated news event, or a counterfeit ad—raises the collective cost of doing business. The result is a growing burden of verification that every company must now shoulder simply to prove that its communications are real and its actions authentic.
Firms are already tightening internal security protocols in response to these threats. Gartner suggests that by 2026 around 30% of enterprises that rely on facial recognition security tools will look for alternative solutions as these forms of protection are rendered unreliable by AI-generated deepfakes. Replacing these tools with less vulnerable alternatives will require considerable investment.
Each additional verification layer—watermarks, biometric liveness checks that confirm an individual is a real, live human being, chain-of-custody logs, forensic review—adds cost, slows down decision-making, and complicates workflows. And these costs will only continue to mount as deepfake tools become more sophisticated.
3. The Trust Tax
In addition to the direct costs of countering deepfake security threats, the mere possibility that someone may be using this technology erodes trust in any relationship grounded in digital media. And because virtually all business relationships now rely on some form of digital communication, deepfakes have the potential to undermine trust across virtually every commercial relationship.
To give just one example, phone and video calls are some of the most basic and most frequent tools used in modern business communications. But if you cannot be sure that the person on the screen or on the other end of the phone is who they claim to be, then how can you trust anything they say? And if you are constantly operating in a realm of uncertainty about the trustworthiness of your communication channels, how can you work productively?
If we begin to mistrust something as basic as our daily modes of communication, the result will eventually be a broad, ambient skepticism that seeps into every relationship, both within and beyond our workplaces. This kind of doubt undermines operational efficiency, adds layers of complexity to dealmaking, and increases friction in any task that involves remote communication. This is the “trust tax”—the cost of doing business in a world where anything might be fake.
Four steps that companies need to take
Here are four steps all business leaders should be taking to respond to the threat of deepfakes:
1. Verify what matters
Use cryptographic signatures for official statements, watermark executive videos and communication channels, and attach provenance tags to sensitive content. Don’t try to secure everything—focus your verification efforts where falsehoods would hurt the most.
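To make this step concrete, here is a minimal sketch of what signing and verifying an official statement could look like, written in Python with the widely used cryptography package. The statement text, the key handling, and the idea of publishing the public key on a verification page are illustrative assumptions rather than a prescribed implementation.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the private key lives in an HSM or secrets manager, never in source code.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()  # published on the company's verification page

statement = b"Official statement: our Q3 earnings call moves to November 4 at 10:00 a.m. ET."
signature = private_key.sign(statement)  # distributed alongside the statement

# Anyone holding the published public key can confirm the statement is genuine.
try:
    public_key.verify(signature, statement)
    print("Signature valid: statement is authentic.")
except InvalidSignature:
    print("Signature invalid: treat this statement as untrusted.")

Because only the genuine organization holds the private key, a fabricated or altered statement cannot carry a valid signature, no matter how convincing the accompanying audio or video may be.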
2. Build a “source of truth” hub
Create a public verification page listing your official channels, press contacts, and authentication methods—stakeholders should know exactly where to go to confirm what’s real. If your organization relies on external information sources for rapid decision-making, ensure that these are only accessed through similarly authenticated hubs.
3. Train for the deepfake age
Run deepfake-awareness drills and build verification literacy into onboarding, media training, and client communication.
4. Treat detection tools as essential infrastructure
Invest in tools that can flag manipulated media in real time and then integrate these solutions into key workflows—finance approvals, HR interviews, investor communications. In the age of deepfakes, verification is a core operating capability.
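As a rough illustration of what that integration might look like, the sketch below gates a high-value transfer approval on a media-authenticity check. DeepfakeDetector, DetectionResult, and the thresholds are hypothetical placeholders for whatever detection vendor and risk policy an organization actually adopts; they do not refer to any specific product.

from dataclasses import dataclass

@dataclass
class DetectionResult:
    manipulation_score: float  # 0.0 = no sign of manipulation, 1.0 = near-certain fake

class DeepfakeDetector:
    """Hypothetical interface; in practice this would wrap a vendor or in-house detection API."""
    def score(self, media_path: str) -> DetectionResult:
        raise NotImplementedError("connect this to your organization's detection service")

def approve_transfer(request_video: str, amount: float,
                     detector: DeepfakeDetector,
                     threshold: float = 0.3, manual_review_above: float = 50_000) -> bool:
    """Approve a transfer only when the supporting video passes the authenticity check."""
    result = detector.score(request_video)
    if result.manipulation_score > threshold:
        # Suspicious media: fall back to an out-of-band check, such as calling the
        # requester back on an independently known phone number.
        return False
    # Even clean-looking media should not be sufficient for very large transfers.
    return amount <= manual_review_above

The specific numbers matter less than the placement: the check sits inside the approval workflow itself, so a convincing video alone can never be enough to move money.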
From threat to opportunity
Social media echo chambers, conspiracy theories, and “alternative facts” have been fracturing our shared sense of reality for over a decade. The rise of AI-generated content will make this unraveling of common reference points exponentially worse. An earlier generation of internet users used to say, “Pics or it didn’t happen.” Well, now we can have all the pics we like, but how are we to tell if what they show happened at all?
Business leaders cannot solve the fragmentation of perceived reality or the fracturing of communities. They cannot single-handedly restore trust in institutions or reverse the cultural forces driving this crisis. But they can anchor their own organizations’ behavior and communications in verifiable truth, and they can build systems that increase trust.
Leaders who swim against the stream in this way will not only help protect their organizations from the dangers of deepfakes. When seeing is no longer believing, these businesses will also become the beacons that people rely on to navigate through an increasingly uncertain world.
