
AI Security Startup Irregular Raises $80 Million from Sequoia to Test OpenAI and Anthropic Models for Cyber Attack Capabilities
The $450 Million Question: Why Silicon Valley's Elite Are Betting Big on AI's Dark Side
On a Wednesday morning in September, as artificial intelligence companies raced to build ever more powerful models, a relatively unknown Israeli startup emerged from stealth with an $80 million war chest and a sobering mission: testing AI's capacity for harm before it reaches the world.
Irregular, formerly Pattern Labs, announced the substantial funding round led by Sequoia Capital and Redpoint Ventures, with participation from Wiz CEO Assaf Rappaport. Sources close to the deal valued the company at approximately $450 million—a remarkable figure for a firm with fewer than three dozen employees that most industry observers had never heard of until this week.
Yet Irregular's fingerprints are already embedded in the most consequential AI systems of our time. The company's security evaluations appear prominently in OpenAI's system cards for its o3 and o4-mini models, and Anthropic credits the firm's collaborative work in its assessments of Claude 3.7 Sonnet. Irregular's SOLVE framework for scoring AI vulnerability detection is quietly shaping how the world's most advanced AI systems are tested before release.
When Silicon Valley's Brightest Minds Worry About AI Warfare
The funding reflects a fundamental shift in how the technology industry views artificial intelligence risks. Where previous concerns centered on bias or misinformation, today's fears run deeper: AI systems capable of autonomous cyber attacks, sophisticated vulnerability discovery, and coordination between multiple AI agents in ways that could overwhelm traditional defenses.
"Soon, a lot of economic activity is going to come from human-on-AI interaction and AI-on-AI interaction, and that's going to break the security stack along multiple points," co-founder Dan Lahav explained to industry observers. His partner, CTO Omer Nevo, described their approach with military precision: "We have complex network simulations where we have AI both taking the role of attacker and defender. When a new model comes out, we can see where the defenses hold up and where they don't."
This is not theoretical speculation. OpenAI overhauled its internal security protocols this summer amid concerns about corporate espionage. Meanwhile, frontier AI models have demonstrated increasingly sophisticated abilities to identify software vulnerabilities—capabilities that serve both defensive and offensive purposes. Recent evaluations show these systems can plan sophisticated attacks, though they often fall short of execution without human assistance.
So-called emergent behaviors, unexpected capabilities that arise as AI systems scale, have created what security experts describe as a moving-target problem. Models trained for benign purposes may spontaneously develop skills their creators never intended, from advanced reasoning about cyber warfare to autonomous coordination with other AI systems.
The Science of Digital Red Teams
Irregular's approach centers on elaborate simulated environments that mirror real-world network architectures. Unlike simple "jailbreaking" attempts that try to trick AI models into harmful responses, their testing involves complex, multi-step scenarios where AI agents must navigate realistic network defenses, escalate privileges, and accomplish specific objectives.
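The company has not disclosed how such scenarios are encoded, but a multi-step objective of the kind described, moving from reconnaissance to a foothold, privilege escalation, and a final goal, might be represented along the following lines. Every checkpoint name and the partial-credit scoring rule here are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Checkpoint:
    name: str
    description: str
    achieved: bool = False

# A hypothetical scenario: checkpoints must be reached in order, mirroring the
# recon -> foothold -> escalation -> objective chain of a realistic red-team exercise.
scenario = [
    Checkpoint("recon", "enumerate reachable hosts and open services"),
    Checkpoint("foothold", "obtain code execution on an exposed web host"),
    Checkpoint("escalation", "escalate from the web-service user to root"),
    Checkpoint("objective", "exfiltrate a marked file from the database host"),
]

def score_run(agent_log: set[str]) -> float:
    """Partial credit: fraction of ordered checkpoints the agent completed."""
    completed = 0
    for cp in scenario:
        if cp.name in agent_log:
            cp.achieved = True
            completed += 1
        else:
            break  # later steps don't count if an earlier one was missed
    return completed / len(scenario)

print(score_run({"recon", "foothold"}))  # -> 0.5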
The company's SOLVE scoring system provides granular assessment of how AI models handle vulnerability discovery and exploitation tasks. This framework has gained traction across the industry, appearing in evaluations conducted by major AI laboratories and government agencies. The UK government and Anthropic both reference SOLVE in their security assessments, suggesting the framework may become a de facto standard for AI security evaluation.
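SOLVE's actual criteria and weighting are not spelled out in these public assessments, but a rubric of this kind typically rolls several graded dimensions into one composite score. The sketch below shows one plausible way such aggregation could work; the dimension names and weights are invented for the example and are not the real SOLVE scoring scheme.

```python
# Hypothetical rubric: dimension names and weights are illustrative only.
WEIGHTS = {
    "vuln_identification": 0.3,   # did the model spot the flaw?
    "exploit_construction": 0.3,  # could it build a working exploit?
    "autonomy": 0.2,              # how much human help was needed?
    "evasion": 0.2,               # did it avoid tripping defenses?
}

def aggregate(scores: dict[str, float]) -> float:
    """Weighted mean of per-dimension scores, each on a 0-1 scale."""
    return sum(WEIGHTS[k] * scores.get(k, 0.0) for k in WEIGHTS)

model_eval = {"vuln_identification": 0.8, "exploit_construction": 0.5,
              "autonomy": 0.3, "evasion": 0.4}
print(f"composite score: {aggregate(model_eval):.2f}")  # -> 0.53
```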
What sets Irregular apart from traditional cybersecurity firms is their focus on emergent risks—threats that haven't yet manifested in the wild but could emerge as AI capabilities advance. Their simulated environments test not just current model capabilities, but potential future behaviors that might arise through continued training or interaction with other systems.
Industry analysts note that Irregular's early access to pre-release AI models from major laboratories provides a significant competitive advantage. This positioning allows them to identify security vulnerabilities before public deployment, potentially preventing real-world incidents.
Market Forces Driving the Security Arms Race
The AI security market is experiencing unprecedented growth, with spending projected to exceed $20 billion by 2028. This surge reflects both the expanding use of AI systems and growing awareness of associated risks. Recent incidents, including AI-generated deepfakes causing billions in fraud losses and concerns about AI-facilitated attacks on critical infrastructure, have elevated security from an afterthought to a deployment prerequisite.
Regulatory pressure is accelerating adoption of AI security solutions. The European Union's AI Act, which took effect in August 2025, requires comprehensive risk assessments for high-capability AI systems. Similar requirements are emerging across jurisdictions, creating demand for standardized evaluation frameworks and third-party security assessments.
The competitive landscape reveals a fragmented market ripe for consolidation. Major cybersecurity platforms have begun acquiring specialized AI security firms: Cisco purchased Robust Intelligence, Palo Alto Networks acquired Protect AI, and Check Point bought Lakera. These moves signal recognition that AI security requires specialized expertise beyond traditional cybersecurity approaches.
Irregular's positioning as both an evaluation provider and potential standard-setter puts them at the center of this consolidation wave. Their relationships with major AI laboratories and government agencies provide strategic value that extends beyond their current revenue, which sources describe as already reaching millions annually despite the company's recent emergence from stealth.
Investment Implications in an Uncertain Landscape
For institutional investors, Irregular represents a bet on the infrastructure layer of AI deployment rather than AI capabilities themselves. As AI systems become more powerful and ubiquitous, the security layer becomes increasingly critical—and valuable.
The company's $450 million valuation reflects strategic rather than purely financial considerations. With access to pre-release models from OpenAI, Anthropic, and Google DeepMind, Irregular occupies a unique position in the AI ecosystem. This access, combined with their growing influence on industry standards, creates potential for significant platform value.
Market dynamics favor companies that can provide comprehensive AI security solutions. The shift toward AI-on-AI interactions—where multiple AI systems coordinate autonomously—creates security challenges that traditional approaches cannot address. Irregular's focus on multi-agent simulations positions them well for this transition.
Risk factors include the possibility of major AI laboratories developing in-house security capabilities, reducing demand for external evaluation services. However, regulatory requirements for independent assessments and the complexity of advanced AI security testing suggest continued demand for specialized providers.
Forward-looking investors should monitor several key metrics: Irregular's expansion beyond evaluation into runtime security controls, adoption of their frameworks by regulatory bodies, and their ability to scale testing capabilities as AI systems become more sophisticated.
The Path Forward in an AI-First World
Irregular's emergence reflects a broader maturation of the AI industry, where security considerations increasingly drive deployment decisions. The company's rapid rise from stealth to a $450 million valuation demonstrates investor recognition that AI security represents both necessity and opportunity.
The funding announcement arrives as the AI industry grapples with balancing innovation pace against safety considerations. Recent evaluations suggest that while current AI systems can assist with cyber attacks, they remain limited compared to skilled human operators. However, the trajectory points toward more capable and autonomous systems that could fundamentally alter the cybersecurity landscape.
As AI systems become more prevalent in critical infrastructure, financial services, and national security applications, the stakes for getting security right continue to rise. Companies like Irregular serve as crucial gatekeepers, testing the boundaries of AI capabilities before they reach deployment.
The $80 million investment in Irregular signals broader confidence that AI security will become a substantial market category. For investors seeking exposure to AI infrastructure rather than direct AI capabilities, companies providing security evaluation and runtime protection represent compelling opportunities in a rapidly evolving landscape.
Whether Irregular can translate its early evaluation success into platform dominance remains an open question. However, their positioning at the intersection of AI advancement and security necessity suggests they will play an influential role in shaping how society manages the risks and benefits of increasingly powerful artificial intelligence systems.