Introducing Frontier AI Security
September 17, 2025
Irregular (formerly Pattern Labs) is the first frontier security lab, with a mission to protect the world in an era of increasingly capable and sophisticated AI systems.
We’ve witnessed AI advancing faster than anyone expected. The same breakthroughs that unlock new possibilities in medicine, science and software also reveal capabilities that can be misused: systems that can discover exploits, evade defenses, or even execute autonomous cyber operations. These are not hypothetical risks; they are the natural byproduct of pushing intelligence to its limits.
Irregular exists to meet this moment. We’ve developed a sophisticated research platform to run controlled simulations on frontier AI models – testing both their potential for misuse in cyber operations and their resilience when targeted by attackers.
Already, our work has helped shape the field:
OpenAI cites our evaluations in the system cards for o3, o4-mini, and GPT-5
We are designing and building the field’s most critical defenses, including co-authoring a white paper with Anthropic on confidential inference systems
Google DeepMind researchers recently cited us in their paper on emerging AI cyberattack capabilities and used our platform in that research
Together with RAND, we co-authored the seminal paper on securing AI model weights and preventing model theft, helping set the agenda for global AI security policy and practice
We partner with governmental institutions, such as the UK government, to vet the cyber capabilities of frontier models
With $80 million in funding led by our partners at Sequoia and Redpoint, and millions in annual revenue already, Irregular is building the defensive systems that will secure the next generation of AI. This is not about governance or theory; it’s about practical tools that stop threats wherever AI is created or deployed.
By uniting applied AI research with nation-state cybersecurity expertise, Irregular is pioneering frontier AI security as a critical new category: the first lab of its kind.
We are a small, talent-dense team: AI scientists published in the world’s most prestigious venues; senior leaders who have scaled R&D organizations into the hundreds; world debate champions; and top cyber and national security experts who have uncovered vulnerabilities in critical infrastructure. If our mission inspires you, join us.
Irregular will help the world step securely into the future of AI.