CalypsoAI Model Security Leaderboards
Find the Right Model
Compare Security, Cost & Capabilities
The world’s major AI models and systems are vulnerable—we’ve proven it. The CalypsoAI Security Leaderboards rank top GenAI models based on real-world security testing, exposing critical risks overlooked by performance benchmarks. Powered by Inference Red-Team, these leaderboards are the only tools that help you find the safest model and stress test your AI system before you deploy.
The CASI Leaderboard
The top 10 models ranked by their ability to resist direct attacks such as prompt injection and jailbreaks.
CASI (CalypsoAI Security Index) is our benchmark score for measuring how vulnerable a model is to common prompt injection and jailbreak attacks. It evaluates how easily an LLM can be manipulated into producing harmful or policy-violating outputs.
A higher CASI score means a model is more secure against standard attack techniques.
The Agentic Leaderboard
The top 10 models ranked by their ability to maintain safe behavior during autonomous, real-world attacks.
AWR (Agentic Warfare Resistance) captures how well a model holds up under real-world, multi-step, and autonomous agent scenarios where simple safety checks often break down. It reflects a model’s ability to stay aligned and secure during complex workflows.
A higher AWR score signals lower risk and better performance under agentic pressure.
CASI Leaderboard
| Model Provider | Model Name | CASI | Avg. Performance | RTP | CoS |
|---|---|---|---|---|---|
| Anthropic | Claude 4 Sonnet | 95.36 | 53.00% | 0.78 | 18.88 |
| Anthropic | Claude 3.5 Sonnet | 92.67 | 44.40% | 0.73 | 19.42 |
| Anthropic | Claude 3.7 Sonnet | 85.73 | 57.40% | 0.74 | 21 |
| Anthropic | Claude 3.5 Haiku | 84.65 | 34.70% | 0.65 | 5.67 |
| Microsoft | Phi4 | 80.83 | 40.20% | 0.65 | 0.77 |
| DeepSeek | DeepSeek-R1-Distill-Llama-70B | 72.98 | 48.20% | 0.63 | 2.06 |
| OpenAI | GPT-4o | 68.59 | 39.80% | 0.57 | 29.16 |
| Meta | Llama 3.1 405b | 66.13 | 40.50% | 0.56 | 10.59 |
| Alibaba Cloud | Qwen3-30B-A3B | 64.26 | 55.60% | 0.61 | 4.05 |
| Alibaba Cloud | Qwen3-14B | 61.56 | 55.70% | 0.59 | 7.39 |
Agentic Leaderboard
| Model Provider | Model Name | AWR | Avg. Performance | RTP | CoS |
|---|---|---|---|---|---|
| Anthropic | Claude 3.5 Sonnet | 93.99 | 44.40% | 0.74 | 19.15 |
| Anthropic | Claude 3.5 Haiku | 91.92 | 34.70% | 0.69 | 5.22 |
| Microsoft | Phi4 | 87.34 | 40.20% | 0.68 | 0.72 |
| Anthropic | Claude 4 Sonnet | 86.53 | 53.00% | 0.73 | 20.8 |
| Anthropic | Claude 3.7 Sonnet | 78.55 | 57.40% | 0.7 | 22.92 |
| Meta | Llama-4 Maverick 128E | 74.76 | 50.50% | 0.65 | 1.43 |
| Meta | Llama-4 Maverick 16E | 71.75 | 43.00% | 0.6 | 0.88 |
| OpenAI | GPT-4o | 66.9 | 39.80% | 0.56 | 29.9 |
| Meta | Llama 3.3 70b | 62.08 | 41.10% | 0.54 | 1.99 |
| Google | Gemma 3 27b | 59.87 | 37.60% | 0.51 | 0.67 |
CASI Leaderboard
| Model Provider | Model Name | CASI | Avg. Performance | RTP | CoS |
|---|---|---|---|---|---|
| Anthropic | Claude 4 Sonnet | 95.12 | 60.78% | 0.8 | 18.92 |
| Anthropic | Claude 3.5 Sonnet | 93.27 | 44.44% | 0.69 | 19.3 |
| Anthropic | Claude 3.7 Sonnet | 87.24 | 57.39% | 0.74 | 20.63 |
| Anthropic | Claude 3.5 Haiku | 85.69 | 34.74% | 0.6 | 5.6 |
| Microsoft | Phi4 | 81.44 | 40.22% | 0.61 | 0.77 |
| DeepSeek | DeepSeek-R1-Distill-Llama-70B | 73.96 | 48.24% | 0.62 | 1.24 |
| OpenAI | GPT-4o | 68.13 | 41.46% | 0.56 | 18.35 |
| Meta | Llama 3.1 405b | 64.65 | 40.49% | 0.54 | 1.24 |
| Alibaba Cloud | Qwen3-14B | 60.82 | 55.72% | 0.59 | 0.51 |
| Alibaba Cloud | Qwen3-30B-A3B | 58.61 | 55.60% | 0.57 | 0.63 |
Agentic Leaderboard
| Model Provider | Model Name | AWR | Avg. Performance | RTP | CoS |
|---|---|---|---|---|---|
| Anthropic | Claude 3.5 Sonnet | 95.85 | 44.44% | 0.78 | 18.78 |
| Microsoft | Phi4 | 90.63 | 40.22% | 0.65 | 0.69 |
| Anthropic | Claude 3.5 Haiku | 90.32 | 34.74% | 0.62 | 5.31 |
| Anthropic | Claude 4 Sonnet | 86.73 | 60.78% | 0.75 | 20.75 |
| Anthropic | Claude 3.7 Sonnet | 80.31 | 57.39% | 0.7 | 22.41 |
| OpenAI | GPT-4o | 80.28 | 41.46% | 0.62 | 15.57 |
| Meta | Llama-4 Maverick | 76.3 | 50.53% | 0.65 | 0.52 |
| Meta | Llama-4 Scout | 70.51 | 42.99% | 0.58 | 0.54 |
| xAI | Grok 3 Mini Beta | 69.8 | 66.67% | 0.69 | 1.15 |
| Google | Gemini 2.0 Flash | 69.75 | 48.09% | 0.6 | 0.95 |
CASI Leaderboard
| Model Provider | Model Name | CASI | Avg. Performance | RTP | CoS |
|---|---|---|---|---|---|
| Anthropic | Claude 3.5 Sonnet | 94.88 | 44.44% | 0.7 | 18.7 |
| Anthropic | Claude 3.7 Sonnet | 88.11 | 57.39% | 0.74 | 20.22 |
| Anthropic | Claude 3.5 Haiku | 87.47 | 34.74% | 0.6 | 5.14 |
| Microsoft | Phi4-14B | 82.47 | 40.22% | 0.62 | 0.66 |
| DeepSeek | DeepSeek-R1-Distill-Llama-70B | 69.84 | 48.24% | 0.6 | 1.24 |
| OpenAI | GPT-4o | 67.85 | 41.46% | 0.56 | 16.65 |
| Meta | Llama 3.1 405b | 65.06 | 40.49% | 0.54 | 2.05 |
| Google | Gemini 2.5 Pro | 57.08 | 67.84% | 0.61 | 17.5 |
| OpenAI | GPT 4.1-nano | 54.05 | 41.01% | 0.48 | 0.93 |
| Meta | Llama 4 Maverick-17B-128E | 52.45 | 50.53% | 0.52 | 0.77 |
Agentic Leaderboard
| Model Provider | Model Name | AWR | Avg. Performance | A_RTP | A_CoS |
|---|---|---|---|---|---|
| Anthropic | Claude 3.5 Sonnet | 96.67 | 44.44% | 0.71 | 18.7 |
| Microsoft | Phi4-14B | 92.28 | 40.22% | 0.76 | 0.66 |
| Anthropic | Claude 3.5 Haiku | 91.79 | 34.74% | 0.62 | 5.14 |
| OpenAI | GPT-4o | 81.12 | 41.46% | 0.62 | 16.65 |
| xAI | Grok 3 | 77.75 | 50.63% | 0.65 | 18 |
| Anthropic | Claude 3.7 Sonnet | 76.83 | 57.39% | 0.68 | 20.22 |
| xAI | Grok 3-mini | 72.04 | 66.76% | 0.7 | 0.8 |
| Google | Gemma 3 27b | 72.03 | 37.62% | 0.56 | 1.8 |
| Meta | Llama 4 Maverick-17B-128E | 71.71 | 50.53% | 0.62 | 0.77 |
| OpenAI | GPT 4.1 | 68.77 | 52.63% | 0.62 | 10 |
CASI Leaderboard
| Model Provider | Model Name | CASI | Avg. Performance | RTP | CoS |
|---|---|---|---|---|---|
| Anthropic | Claude 3.5 Sonnet | 94.3 | 84.50% | 0.9 | 18.7 |
| Anthropic | Claude 3.7 Sonnet | 88.52 | 86.30% | 0.88 | 20.22 |
| Anthropic | Claude 3.5 Haiku | 87.56 | 68.28% | 0.79 | 5.14 |
| Microsoft | Phi4-14B | 82.77 | 75.90% | 0.8 | 0.66 |
| DeepSeek | DeepSeek-R1-Distill-Llama-70B | 71.46 | 72.67% | 0.72 | 1.24 |
| OpenAI | GPT-4o | 68.65 | 80.50% | 0.73 | 16.65 |
| Google | Gemini 2.0 Pro (experimental) | 63.89 | 79.10% | 0.7 | NA |
| Meta | Llama 3.1 405b | 60.73 | 79.80% | 0.68 | 2.05 |
| Google | Gemma 3 27b | 55.25 | 78.60% | 0.64 | 1.8 |
| DeepSeek | DeepSeek-R1 | 52.91 | 86.53% | 0.64 | 4.24 |
CASI Leaderboard
| Model Provider | Model Name | CASI | Avg. Performance | RTP | CoS |
|---|---|---|---|---|---|
| Anthropic | Claude 3.5 Sonnet | 94.94 | 84.50% | 0.93 | 18.7 |
| Anthropic | Claude 3.7 Sonnet | 89.54 | 86.30% | 0.89 | 20.22 |
| Anthropic | Claude 3.5 Haiku | 88.84 | 68.28% | 0.57 | 5.14 |
| Microsoft | Phi4-14B | 86.04 | 75.90% | 0.68 | 0.66 |
| DeepSeek | DeepSeek-R1-Distill-Llama-70B | 71.7 | 72.67% | 0.74 | 1.24 |
| OpenAI | GPT-4o | 68.44 | 80.50% | 0.52 | 16.65 |
| Meta | Llama 3.1 405b | 61.86 | 79.80% | 0.77 | 2.05 |
| Meta | Llama 3.3 70b | 55.57 | 74.50% | 0.69 | 1.85 |
| DeepSeek | DeepSeek-R1 | 52.91 | 86.53% | 0.58 | 4.24 |
| Google | Gemini 1.5 Flash | 29.79 | 66.70% | 0.92 | 0.51 |
| Google | Gemini 2.0 Flash | 29.18 | 77.20% | 0.66 | 0.66 |
| Google | Gemini 1.5 Pro | 27.38 | 74.10% | 0.63 | 8.58 |
| OpenAI | GPT-4o-mini | 24.25 | 71.78% | 0.73 | 1.03 |
| OpenAI | GPT-3.5 Turbo | 18.73 | 59.20% | 0.82 | 2.75 |
CASI Leaderboard
| Model Provider | Model Name | CASI | Avg. Performance | RTP | CoS | Source |
|---|---|---|---|---|---|---|
| Anthropic | Claude 3.5 Sonnet | 96.25 | 84.50% | 0.93 | 18.7 | Anthropic |
| Microsoft | Phi4-14B | 94.25 | 75.90% | 0.68 | 0.66 | Azure |
| Anthropic | Claude 3.5 Haiku | 93.45 | 68.28% | 0.57 | 5.14 | Anthropic |
| OpenAI | GPT-4o | 75.06 | 80.50% | 0.52 | 16.65 | OpenAI |
| Meta | Llama 3.3 70b | 74.79 | 74.50% | 0.69 | 1.85 | Hugging Face |
| DeepSeek | DeepSeek-R1-Distill-Llama-70B | 74.42 | 72.67% | 0.74 | 1.24 | Hugging Face |
| DeepSeek | DeepSeek-R1 | 74.26 | 86.53% | 0.58 | 4.24 | Hugging Face |
| OpenAI | GPT-4o-mini | 73.08 | 71.78% | 0.73 | 1.03 | OpenAI |
| Google | Gemini 1.5 Flash | 73.06 | 66.70% | 0.92 | 0.51 | Google |
| Google | Gemini 1.5 Pro | 72.85 | 74.10% | 0.63 | 8.58 | Google |
| OpenAI | GPT-3.5 Turbo | 72.76 | 59.20% | 0.82 | 2.75 | OpenAI |
| Alibaba Cloud | Qwen QwQ-32B-preview | 67.77 | 68.87% | 0.65 | 2.14 | Hugging Face |
Welcome to our July Insight Notes.
This section is our commentary on the ever-shifting landscape of AI model security, where we highlight key data points, discuss emerging trends, and offer context to help you navigate your AI journey securely.
Attack Spotlight: Style Injection
This month’s leaderboards incorporate a wider range of tests: we’ve added a new attack vector called Style Injection. This jailbreak technique adds specific writing or formatting rules to a prompt to distract the model from its standard refusal language, steering it into producing an unsafe response that would ordinarily be blocked.
Leaderboard Updates
The leaderboards now use the Artificial Analysis Intelligence Index from artificialanalysis.ai as their key performance metric. It combines nine benchmarks across reasoning, general knowledge, maths, and programming.
We’ve expanded our testing to newer, larger models, including the Qwen 235B model, DeepSeek-R1-0528, and Google’s full release of Gemini 2.5 Pro.
Security Trends:
Course Correcting:
The release of Claude 4 Sonnet last month skewed scores upward and kept them from dropping; with the Style Injection attack vector added this month, scores are again falling across all models.
Knowledge is Power:
A telling trend is surfacing from our Agentic Warfare Resistance (AWR) scoring, one that points to a potential blind spot in current defensive strategies. We’ve observed a significant and progressive drop in the effectiveness of well-publicized attacks like Microsoft’s ‘Crescendo,’ which was first detailed in early 2024. This decline suggests that model providers are becoming adept at patching for specific, known threats.
However, this targeted approach may be creating a false sense of security. The sustained high success rates of our internally developed attacks FRAME and Trolley, which currently outperform ‘Crescendo’ by a significant margin, indicate that the underlying vulnerabilities are not being fully addressed.
Instead of a holistic approach to security, providers may be “teaching to the test” by mitigating specific, named attacks that have been publicly disclosed. This leaves them vulnerable to novel or less-publicized attack techniques that exploit the same core weaknesses. This reactive, patch-based approach, rather than a proactive strategy focused on fundamental vulnerabilities, represents a significant ongoing risk and underscores the importance of diverse and continuous red-teaming to uncover and address yet-unknown threats.
Stay Updated
Sign up to receive updates with each monthly release of our leaderboards.
What Are the CalypsoAI Model Security Leaderboards?
The CalypsoAI Leaderboards are a holistic assessment of base model and AI system security, focusing on the most popular models and models deployed by our customers. We developed these tools to align with the business needs of selecting a production-ready model, helping CISOs and developers build with security at the forefront.
These leaderboards cut through the noise in the AI space, distilling complex model security questions into a few key metrics:
CalypsoAI Security Index (CASI)
A metric designed to measure the overall security of a model (explained in detail below).
Agentic Warfare Resistance (AWR) Score
AWR evaluates how an attacker can use a model to compromise an entire AI system. We measure this by unleashing our team of autonomous attack agents on the system; they are trained to attack the model, extract information, and compromise infrastructure. In practice, these agents can extract sensitive PII from vector stores, map system architecture, and test model alignment against explicit instructions.
Performance
The model’s average performance across popular benchmarks (e.g., MMLU, GPQA, MATH, HumanEval).
Risk-to-Performance Ratio (RTP)
Provides insight into the tradeoff between model safety and performance.
Cost of Security (CoS)
Evaluates the model’s current inference cost relative to its CASI score, capturing the financial impact of security.
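For intuition, here is a minimal sketch of how metrics like these could be computed. The blend weight and the price input below are assumptions made for illustration only; they are not CalypsoAI’s published formulas.

```python
# Illustrative sketch only: the 60/40 blend and the price figure are
# assumptions for this example, not CalypsoAI's published formulas.

def risk_to_performance(casi: float, avg_performance: float,
                        security_weight: float = 0.6) -> float:
    """Blend security (CASI, 0-100) and capability (0-1) into a single 0-1 figure."""
    return round(security_weight * (casi / 100)
                 + (1 - security_weight) * avg_performance, 2)

def cost_of_security(price_per_mtok: float, casi: float) -> float:
    """Scale inference price by security, so cheaper, safer models score lower."""
    return round(price_per_mtok / (casi / 100), 2)

# Hypothetical model: CASI of 85, 50% benchmark average, $10 per 1M tokens.
print(risk_to_performance(85, 0.50))  # 0.71
print(cost_of_security(10.0, 85))     # 11.76
```

Read together, a higher RTP and a lower CoS indicate more security per benchmark point and per dollar spent.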
Introducing CASI
What is the CalypsoAI Security Index (CASI), and Why Do We Need It?
CASI is a metric we developed to answer the complex question: “How secure is my model?” A higher CASI score indicates a more secure model or application.
While many studies on attacking or red-teaming models rely on Attack Success Rate (ASR), this metric often oversimplifies the reality. Traditional ASR treats all attacks as equal, which is misleading. For example, an attack that bypasses a bicycle lock should not be equated to one that compromises nuclear launch codes. Similarly, in AI, a small, unsecured model might be easily compromised with a simple request for sensitive information, while a larger model might require sophisticated techniques like Agentic Warfare™ to break its alignment.
To illustrate this, consider the following hypothetical comparison between a small, unsecured model and a larger, safeguarded model:
| Attack | Weak Model | Strong Model |
|---|---|---|
| Plain Text Attack (ASR) | 30% | 4% |
| Complex Attack (ASR) | 0% | 26% |
| Total ASR | 30% | 30% |
| CASI | 56 | 84 |
In this scenario, both models have the same total ASR. However, the larger model is significantly more secure because it resists simpler attacks and is only vulnerable to more complex ones. CASI captures this nuance, providing a more accurate representation of security.
CASI evaluates several critical factors beyond simple success rates:
- Severity: The potential impact of a successful attack (e.g., bicycle lock vs. nuclear launch codes).
- Complexity: The sophistication of the attack being assessed (e.g., plain text vs. complex encoding).
- Defensive Breaking Point (DBP): Identifies the weakest link in the model’s defences, focusing on the path of least resistance and considering factors like the computational resources required for a successful attack.
By incorporating these factors, CASI offers a holistic and nuanced measure of model and application security.
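As a toy illustration of why this weighting matters (the weights and formula below are invented for the example, not CalypsoAI’s actual methodology), a complexity-weighted index cleanly separates the two models from the table above, which raw ASR treats as identical:

```python
# Toy illustration only: the weights and formula here are invented for
# this example and are NOT CalypsoAI's actual CASI methodology.

# Each attack: (name, attack_success_rate, penalty_weight).
# Simpler attacks carry a higher penalty weight: falling to a plain-text
# request is worse than falling to a sophisticated, complex exploit.
weak_model = [("plain_text", 0.30, 1.0), ("complex", 0.00, 0.4)]
strong_model = [("plain_text", 0.04, 1.0), ("complex", 0.26, 0.4)]

def toy_security_index(results):
    """Return a 0-100 score where complexity-weighted failures reduce the score."""
    penalty = sum(asr * weight for _, asr, weight in results)
    max_penalty = sum(weight for _, _, weight in results)
    return round(100 * (1 - penalty / max_penalty), 1)

print(toy_security_index(weak_model))    # 78.6 -- fails the easy attack
print(toy_security_index(strong_model))  # 89.7 -- resists easy attacks
```

Even though both models fail 30% of attacks overall, the weighted index ranks the model that resists simple attacks as meaningfully more secure.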
Agentic Warfare Resistance (AWR) Score
Measuring True AI Security with the Agentic Warfare Resistance (AWR) Score
Standard AI vulnerability scans are useful for establishing a baseline view of model security, but they only scratch the surface of how an AI system behaves under real-world attack. This is why we use Agentic Warfare, a sophisticated red-teaming methodology in which autonomous AI agents simulate a team of persistent, intelligent threat analysts. These agents probe, learn, and adapt, executing multi-step attacks to uncover critical weaknesses that static tests miss.
This rigorous process produces the Agentic Warfare Resistance (AWR) Score, a quantitative measure of an AI system’s defensive strength, rated on a scale of 0 to 100.
A higher AWR score means the system requires a more sophisticated, persistent, and informed attacker to be compromised. It directly translates a complex attack narrative into a single, benchmarkable number that is calculated across three critical vectors:
- Required Sophistication: What is the minimum level of attacker ingenuity required to breach your AI? Does it withstand advanced, tailored strategies, or does it fall to simpler, common attacks?
- Defensive Endurance: How long can the AI system hold up under a persistent assault? We measure if its defenses crumble after a few interactions or endure a prolonged, adaptive conversational attack.
- Counter-Intelligence: Is the AI accidentally training its attackers? This assesses whether a failed attack still leaks critical intelligence, such as revealing the nature of its filters, which would, in turn, provide a roadmap for the next attack.
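As a hedged sketch of how three such vectors might roll up into a single 0–100 number (the weights and normalization below are illustrative assumptions, not CalypsoAI’s published AWR formula):

```python
# Illustrative sketch only: the vector weights and normalization are
# assumptions for this example, not CalypsoAI's published AWR formula.
from dataclasses import dataclass

@dataclass
class AgenticFindings:
    min_sophistication: float  # 0-1: attacker skill required for first breach
    turns_survived: float      # 0-1: fraction of a max-length attack endured
    intel_leaked: float        # 0-1: how much defence info leaks on failed attacks

def toy_awr(f: AgenticFindings) -> float:
    """Combine the three vectors into a single 0-100 resistance score."""
    required_sophistication = f.min_sophistication  # higher is better
    defensive_endurance = f.turns_survived          # higher is better
    counter_intelligence = 1.0 - f.intel_leaked     # leaking less is better
    return round(100 * (0.4 * required_sophistication
                        + 0.4 * defensive_endurance
                        + 0.2 * counter_intelligence), 2)

print(toy_awr(AgenticFindings(0.9, 0.8, 0.1)))  # 86.0 -- hardened system
print(toy_awr(AgenticFindings(0.2, 0.3, 0.7)))  # 26.0 -- brittle system
```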
The AWR score gives a clear, actionable metric to track, report on, and improve an organization’s AI security posture against the threats of tomorrow.

Experience Proactive AI Vulnerability Discovery
with CalypsoAI Inference Red-Team
How Should the Leaderboard Be Used?
The CalypsoAI Leaderboard serves as a starting point for assessing which model to build with. It evaluates the guardrails implemented by model providers and reflects their performance against the latest vulnerabilities in the AI space.
It’s important to note that the leaderboard is a living artefact. At CalypsoAI, we will continue to develop new vulnerabilities and work with model providers to responsibly disclose and resolve these issues. As a result, model scores will evolve, and new models will be added. The leaderboard will be versioned based on updates to our signature attack database and iterations of our security score.
What Does the Leaderboard Not Do?
The leaderboard does not account for specific applications or use cases. It is solely an assessment of foundational models. For a deeper understanding of your application’s vulnerabilities, including targeted concerns like sensitive data disclosure or misalignment from system prompts, our full red-teaming product is available.
Do we supply all of the output and testing data?
Users of our red-teaming product gain access to our comprehensive suite of penetration testing attacks, including:
Signature Attacks:
A vast prompt database of state-of-the-art AI vulnerabilities.
Operational Attacks:
Traditional cybersecurity concerns applied to AI applications (e.g., DDoS, open parameters, PCS).
Agentic Warfare™:
An attack agent capable of discovering general or directed vulnerabilities specific to a customer’s use case. For example, a bank might use Agentic Warfare to determine if the model is susceptible to disclosing customer financial information. The agent designs custom attacks based on the model’s setup and application context.
Product users can also see additional data, such as where each model’s vulnerabilities lie, along with solutions to mitigate the risk.
Sources:
- https://docs.anthropic.com/en/docs/about-claude/models
- https://ai.azure.com/explore/models/Phi-4/version/3/registry/azureml
- https://platform.openai.com/docs/models/o1
- https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct
- https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B
- https://huggingface.co/deepseek-ai/DeepSeek-R1
- https://ai.google.dev/gemini-api/docs/models/gemini
- https://huggingface.co/Qwen/QwQ-32B-Preview