An unusual move shook the tech world this week. Twelve of the most influential companies on the planet — including Apple, Google, Microsoft, and Amazon — took part in an urgent meeting convened by Anthropic. The reason was not a product launch or a strategic partnership, but something far more sensitive: an artificial intelligence model with the potential to reshape the global cybersecurity landscape.
The initiative, introduced under the name “Project Glasswing,” brings together major tech firms and cybersecurity organizations. Officially, it focuses on collaboration, investment, and collective defense. However, a key detail stands out: the model that prompted this coalition was considered so risky that Anthropic chose not to release it publicly.
An unexpected — and concerning — breakthrough
The system, internally referred to as “Claude Mythos Preview,” was not designed to hack systems. That is precisely what makes it concerning. Its capabilities emerged from general improvements in coding, reasoning, and autonomy — not from specific training in cyberattacks.
In practical terms, what makes this model highly effective at writing code also makes it exceptionally powerful at identifying vulnerabilities.
During internal testing, the system detected critical flaws across multiple technological environments, including operating systems, web browsers, and widely used software. These were not isolated cases but widespread vulnerabilities affecting core elements of modern digital infrastructure.
Decades-old vulnerabilities uncovered
Among the most striking findings was a flaw in OpenBSD, an operating system known for its strong security standards. The vulnerability had remained undiscovered for 27 years and could allow a remote attacker to crash the system with minimal effort.
In another case, the model identified a weakness in FFmpeg — a key video processing library — that had gone unnoticed for 16 years, despite being reviewed countless times by automated tools.
Even within the Linux kernel, which underpins a large portion of global servers, the AI was able to chain multiple vulnerabilities to escalate privileges and gain full system control.
Most notably, these results required minimal human intervention. A simple prompt was enough for the system to carry out the full process of identifying and exploiting vulnerabilities.
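The article does not describe the system's internals, but the workflow it sketches — a single instruction driving automated discovery of a crashing input — resembles techniques long used in security testing, such as fuzzing. The toy sketch below is a generic illustration of that idea, not Anthropic's system: `fragile_parser` is a deliberately buggy stand-in target, and the fuzzer simply mutates random inputs until the target crashes.

```python
import random


def fragile_parser(data: bytes) -> int:
    """Toy target: 'crashes' (raises) on any input containing a null byte."""
    if b"\x00" in data:
        raise ValueError("parser crash")
    return len(data)


def fuzz(target, attempts: int = 10_000, seed: int = 0):
    """Minimal random fuzzer: throw random inputs at the target
    until one triggers an exception, then return that input."""
    rng = random.Random(seed)
    for _ in range(attempts):
        candidate = bytes(rng.randrange(256) for _ in range(8))
        try:
            target(candidate)
        except Exception:
            return candidate  # crashing input found
    return None


crash = fuzz(fragile_parser)
```

Real-world tools (and, per the report, the model itself) operate on far more complex targets, but the loop is conceptually the same: generate candidates, observe failures, keep what crashes.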
The real shift: who can launch attacks
Beyond the technical discoveries, the most significant implication is the democratization of cyberattacks.
Engineers without cybersecurity expertise were able to generate functional exploits within hours using the model. This represents a fundamental shift. For decades, advanced cyberattacks required specialized knowledge and experience, creating a natural barrier.
That barrier is now eroding.
With tools like this, tasks that once took weeks of expert work could potentially be completed in a matter of hours by individuals with limited technical background.
A coalition racing against time
Faced with this scenario, Anthropic decided against releasing the model and instead initiated a coordinated response. This led to the creation of Project Glasswing, backed by a $100 million investment aimed at strengthening cybersecurity, supporting open-source projects, and reinforcing global defenses.
The goal is not only to fix the vulnerabilities already discovered, but to prepare for what comes next.
Participating companies recognize that these capabilities are unlikely to remain restricted for long. Other actors — corporations, governments, and malicious groups — could develop similar tools in the near future.
In that context, the priority is clear: reinforce global digital infrastructure before such technologies become widely accessible.
A turning point for AI and security
This case marks a profound shift in the relationship between artificial intelligence and cybersecurity. It is no longer just about protecting existing systems, but about adapting to a reality where the speed of vulnerability discovery may outpace the ability to respond.
The technical team behind the model acknowledges that this is not an endpoint, but the beginning of a new phase. As AI capabilities continue to evolve, so will the associated risks.
The gathering of these twelve companies was not merely a collaborative gesture. It was, in essence, a warning.
The message is clear: the race has already begun, and the window to prepare is limited.

