Experts Warn of Mythos Hacking Risks as Anthropic Debuts Glasswing
In Focus
- Anthropic says Claude Mythos Preview can identify and exploit vulnerabilities
- Experts advise caution when interpreting Mythos vulnerability results
- 40 tech companies have been enlisted in Project Glasswing
Anthropic’s Mythos AI model debuted on April 7, 2026, as part of a new cybersecurity initiative, Project Glasswing. Claude Mythos AI Preview is designed to identify security flaws and vulnerabilities within software. Google, Apple, Microsoft, Amazon Web Services, and Nvidia are among the partners enlisted in the initial phase of Project Glasswing.
Anthropic’s Plan to Build Defensive Cybersecurity Capabilities
Through Project Glasswing, Anthropic is looking to enhance defensive cybersecurity capabilities as AI-driven threats grow more sophisticated. Besides identifying software vulnerabilities, Claude Mythos Preview can assess how those weaknesses may be exploited.
Anthropic, which introduced an AI code review tool for automated bug detection last month, is positioning Claude Mythos Preview as a model for securing open-source software. By offering advanced AI tools, Anthropic aims to help companies proactively identify and patch security vulnerabilities.
Project Glasswing is based on a frontier AI system featuring strong reasoning and coding capabilities. Anthropic said these capabilities enable the model to analyze high-risk vulnerabilities in web browsers, operating systems, and other software. Initial testing has reportedly revealed numerous vulnerabilities.
Anthropic’s AI cybersecurity project rolls out days after the company suffered a source code leak that exposed Claude Code’s architecture and memory utilization. Last week, the company said it was assessing the widespread risks associated with the codebase leak.
Experts Warn of New Hacking Wave
As Project Glasswing rolls out, experts are warning that Mythos could fuel a new wave of hacking as systems with advanced reasoning identify and exploit more software vulnerabilities. Anthropic has also claimed that Mythos Preview can identify and exploit vulnerabilities with unprecedented accuracy.
This presents a threat to software security across leading web browsers and operating systems. Anthropic researchers say the new model can detect thousands of defects and vulnerabilities in such software.
In the absence of detailed information, some security experts have advised caution when interpreting vulnerability results generated by the AI model. Many others have said Anthropic exercised significant caution when debuting the new model.
Positive Reception by Tech Companies
Tech companies have appreciated the opportunity to test the cybersecurity AI model under Project Glasswing. Microsoft commended the improvements in the Claude Mythos AI cybersecurity model, adding that Anthropic’s new initiative reflects a shift in scaling cybersecurity.
“When tested against CTI-REALM, our open-source security benchmark, Claude Mythos Preview showed substantial improvements compared to previous models,” noted Igor Tsyganskiy, Global CISO and EVP of Security and Microsoft Research at Microsoft.
Amazon Web Services (AWS) has already started running the cybersecurity AI model internally, saying it is already strengthening the company’s code.
“We build defenses before threats emerge. AI is central to our ability to defend at scale. We’ve been testing Claude Mythos Preview in our own security operations where it’s already helping us strengthen our code,” stated Amy Herzog, VP and CISO at Amazon Web Services.
Google’s Call for Industry Collaboration
For its part, Google commended Anthropic’s collaborative approach to Project Glasswing. The tech giant also highlighted the need for industry-wide collaboration in addressing emerging security threats.
“Google is pleased to see this cross-industry cybersecurity initiative coming together. It’s always been critical that the industry work together on emerging security issues. We have long believed that AI poses new challenges and opens new opportunities in cyber defense,” noted Heather Adkins, VP of Security Engineering at Google.
Last year, Google DeepMind introduced an AI agent called CodeMender to fix security vulnerabilities in software. Anthropic is not making its cybersecurity model publicly available over concerns that it might be misused. Instead, the company is limiting access to 40 selected tech partners, which include CrowdStrike and Palo Alto Networks.
