Cyberpower lies in calm
The problem is that Mythos is not the only security-focused AI model. The British AI Safety Institute recently published a surprisingly little-noticed analysis of GPT 5.5 Cyber, OpenAI's competitor to Claude Mythos. In initial tests by security experts, it proved to be just as powerful as Claude Mythos. At the same time, more and more vulnerabilities are coming to light that were identified by AI models, some of which had gone unnoticed for years. One example is "Copy Fail," a serious Linux security hole that allows root access on all distributions released since 2017. This vulnerability, too, was identified in an AI-assisted process, though not with the help of Claude Mythos.
So yes: AI models undoubtedly represent the biggest cybersecurity challenge yet. Even if they do not invent new classes of vulnerabilities, the scale and speed at which they operate are unprecedented. It is important, however, not to focus solely on a single hyped AI model such as Claude Mythos; doing so is a disservice to yourself, to say the least. When I recently took part in a digital press briefing, I realized that I too had succumbed to this hype. The event's tempting title: "What do Mythos Preview and Project Glasswing mean for Sweden?"
It turned out, however, to be a low-key event offering a general overview of the situation, fairly standard advice on cyber hygiene, and the launch of a new cooperation initiative between authorities and companies in the cyber sector. My immediate reaction was mild disappointment and frustration; I wondered where the urgency and the demands were. But then it became clear to me: we have to deal with current developments in exactly the way this event did, through cooperation, best practices, clear advice, and routines. This raises very fundamental questions, such as:
- What does cybersecurity look like when criminal hackers can suddenly find and exploit security holes at 100 times today's speed?
- How can AI models help secure companies?
- How can the time between discovery and patching of vulnerabilities be reduced?
Furthermore, it is clear that authorities worldwide increasingly recognize the need to review AI models before they are released. This seems reasonable given the risks associated with the technology. Perhaps at some point we will actually reach a situation where specific AI models are classified as too dangerous to release. But then we can at least be sure that the classification is not a marketing effort. (fm)
This article was originally published by our sister publication Computersweden.se.
