Microsoft says it has created an advanced AI system that can reverse-engineer and identify malicious software on its own, without human assistance.
The prototype system, called Project Ire, automatically dissects software files to understand how they work, what they do, and whether they’re dangerous. This kind of deep analysis is typically performed by human security experts.
Microsoft says it hopes the AI will eventually detect new types of malware directly in computer memory, helping to stop threats faster and at greater scale.
The system “automates what is considered the gold standard in malware classification: fully reverse engineering a software file without any clues about its origin or purpose,” the company said in a blog post announcing the prototype.
This approach differs from existing security tools, which scan for known threats or match files against previously seen patterns. The announcement comes as security defenders and attackers engage in an arms race to turn emerging AI models and autonomous agents to their advantage.
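For context, the pattern-matching approach those traditional scanners rely on can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration (the hash is invented, and the byte pattern is the harmless industry-standard EICAR test string), not a description of how any particular product works:

```python
import hashlib

# Hypothetical signature database. Real scanners ship curated feeds of
# hashes and byte patterns extracted from previously analyzed malware.
KNOWN_MALWARE_SHA256 = {
    "0f1e2d3c4b5a69788796a5b4c3d2e1f00f1e2d3c4b5a69788796a5b4c3d2e1f0",  # invented
}
KNOWN_BAD_BYTE_PATTERNS = [
    b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE",  # harmless antivirus test string
]

def signature_scan(path: str) -> bool:
    """Flag a file only if it matches a known hash or byte pattern.

    This is the limitation Project Ire is meant to address: a scanner
    like this can only catch threats that have already been seen and
    cataloged, never a genuinely novel sample.
    """
    with open(path, "rb") as f:
        data = f.read()
    if hashlib.sha256(data).hexdigest() in KNOWN_MALWARE_SHA256:
        return True
    return any(sig in data for sig in KNOWN_BAD_BYTE_PATTERNS)
```

Project Ire, by Microsoft's description, instead reasons about what an unknown file actually does, rather than checking it against a catalog of what has been seen before.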
More broadly, Microsoft has described security as its top priority, and through its Secure Future Initiative it has placed security deputies in every product team. The initiative followed a series of high-profile vulnerabilities in the company's software that drew questions and frustration from business and government leaders.
On Monday, the company launched its latest Zero Day Quest, a global bug bounty competition with up to $5 million in total rewards. The challenge invites security researchers to find vulnerabilities in Microsoft cloud and AI products, with the potential for bonus payouts in high-impact scenarios.
Announcing Project Ire on Tuesday morning, Microsoft said it was accurate enough in one case to justify automatically blocking a highly advanced form of malware. It was the first time that any system at the company — human or machine — had produced a threat report strong enough to trigger an automatic block on its own.
Microsoft says this kind of automation could eventually help protect billions of devices more effectively, as cyberattacks become more sophisticated.
Early testing suggested the system is highly precise: when it determined a file was malicious, it was correct 98% of the time (its precision), and it incorrectly flagged safe files as threats in only 2% of cases (its false-positive rate), according to the company.
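In classification terms, those are two distinct metrics. Microsoft hasn't published the underlying test counts, so the numbers below are invented solely to show how such percentages are computed:

```python
# Hypothetical confusion-matrix counts chosen to reproduce the reported
# figures; Microsoft has not disclosed the actual test-set sizes.
tp = 980  # malicious files correctly flagged as malicious
fp = 20   # benign files incorrectly flagged as malicious
tn = 980  # benign files correctly left alone
fn = 40   # malicious files missed (illustrative; not reported)

precision = tp / (tp + fp)            # 980 / 1000 = 0.98
false_positive_rate = fp / (fp + tn)  # 20 / 1000 = 0.02

print(f"precision: {precision:.0%}")                      # 98%
print(f"false positive rate: {false_positive_rate:.0%}")  # 2%
```

Note that these two figures say nothing about recall, meaning how much real malware the system fails to flag; the fn count above is purely a placeholder.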
It’s part of a growing wave of AI systems aimed at defusing cybersecurity threats in new ways. Google’s “Big Sleep” AI, for example, also operates autonomously but concentrates instead on discovering security vulnerabilities in code.
Project Ire was developed by teams across Microsoft Research, Microsoft Defender, and Microsoft Discovery & Quantum, and will now be used internally to help speed up threat detection across Microsoft’s security tools, according to the company.