09 Jul 2025

Zero Trust Isn’t Just for People Anymore: Securing AI Agents in the Age of Autonomy

The Zero Trust model—never trust, always verify—has become foundational to enterprise cybersecurity. But what happens when it’s not just people accessing systems, but AI agents?

In a recent conversation, CalypsoAI CTO James White and Anthony Candeias, a CISO, professor, and advisor, explored how AI is forcing security leaders to rethink what Zero Trust means. The stakes are no longer limited to human credentials or bad actors with phishing kits; they now extend to agents that think, plan, and act.

When “Safe by Design” Isn’t Safe Enough

There’s a growing belief that protocols like the Model Context Protocol (MCP) offer a “safe” way to deploy AI agents by limiting what they can access or do. But James pushes back on that idea: “MCP is seen as the safe protocol… but it’s a false narrative. It limits the upside, but it doesn’t limit the downside.”

He explains how even seemingly safe actions, like an agent querying millions of customer records with a read-only SQL statement, can overwhelm databases, lock up resources, and create vulnerabilities, all without breaking the rules. This isn’t a bug. It’s a byproduct of giving agents tools without strong oversight.
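To make that failure mode concrete, here is a minimal sketch of the kind of guardrail it implies: a wrapper that accepts only SELECT statements and still bounds the rows and wall-clock time an agent can consume. The names (guarded_read, MAX_ROWS, TIME_BUDGET) and the thresholds are illustrative assumptions, not a description of any particular product.

    import re
    import sqlite3
    import time

    MAX_ROWS = 1_000     # assumed per-call row budget for the agent
    TIME_BUDGET = 2.0    # assumed per-query wall-clock budget, in seconds

    def guarded_read(conn: sqlite3.Connection, sql: str) -> list:
        """Run an agent's read-only query, but bound its blast radius."""
        # Read-only is necessary but not sufficient: accept only plain
        # SELECTs, then cap both execution time and result size.
        if not re.match(r"\s*select\b", sql, re.IGNORECASE):
            raise PermissionError("agent is scoped to SELECT statements only")
        deadline = time.monotonic() + TIME_BUDGET
        # SQLite's progress handler can abort a statement mid-flight:
        # returning a nonzero value interrupts the running query.
        conn.set_progress_handler(
            lambda: 1 if time.monotonic() > deadline else 0, 10_000)
        try:
            rows = conn.execute(sql).fetchmany(MAX_ROWS + 1)
            if len(rows) > MAX_ROWS:
                raise PermissionError("result exceeds the agent's row budget")
            return rows
        finally:
            conn.set_progress_handler(None, 0)

A query like "select * from customers" is syntactically legal and genuinely read-only, yet this wrapper still refuses to hand the agent more than its budgeted slice of the data.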

Zero Trust for Agents Starts With Nothing

That’s why Anthony argues it’s time to apply Zero Trust not just to users, but to AI agents. And not as a metaphor. As a framework: “You deny every action by default. There has to be human-assisted permissioning along the way.”

This model assumes every agent is untrusted at the start. Access is not just role-based, but use-case-based, tied to time, context, and intent. Need to read from a table? You get access to that table and nothing else. Need to take action? That action is scoped, logged, and monitored.
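A deny-by-default check of that shape fits in a few lines. The Grant fields and the human-approval step below are assumptions for illustration; the point is that the grant table starts empty, and every match must be exact, in scope, and unexpired.

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    @dataclass(frozen=True)
    class Grant:
        agent_id: str
        action: str           # e.g. "read"
        resource: str         # e.g. "db/invoices"
        use_case: str         # the approved business intent
        expires_at: datetime  # grants are time-boxed, never permanent

    GRANTS: list = []  # empty by default: agents start with nothing

    def authorize(agent_id: str, action: str,
                  resource: str, use_case: str) -> bool:
        """Allow only an exact, unexpired, use-case-scoped grant."""
        now = datetime.now(timezone.utc)
        return any(
            g.agent_id == agent_id and g.action == action
            and g.resource == resource and g.use_case == use_case
            and g.expires_at > now
            for g in GRANTS
        )

    # A human approver adds one narrow, one-hour grant:
    GRANTS.append(Grant("billing-agent", "read", "db/invoices",
                        "monthly-reconciliation",
                        datetime.now(timezone.utc) + timedelta(hours=1)))

    assert authorize("billing-agent", "read", "db/invoices",
                     "monthly-reconciliation")
    # Same agent, different table: denied, because no grant matches.
    assert not authorize("billing-agent", "read", "db/customers",
                         "monthly-reconciliation")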

And just as we worry today about leaked API credentials, agent keys and model permissions must be governed with the same scrutiny we give to cloud workloads and privileged users.
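One way to picture that governance is short-lived, scope-bound agent credentials rather than long-lived static keys. The HMAC-signed token below is a generic sketch under that assumption, not any vendor’s key format.

    import base64
    import hashlib
    import hmac
    import json
    import secrets
    import time

    SIGNING_KEY = secrets.token_bytes(32)  # in practice: managed, rotated

    def mint_agent_key(agent_id: str, scopes: list,
                       ttl_seconds: int = 900) -> str:
        """Issue a credential that names its holder, scopes, and expiry."""
        claims = {"sub": agent_id, "scopes": scopes,
                  "exp": time.time() + ttl_seconds}
        body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
        sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
        return f"{body}.{sig}"

    def verify_agent_key(token: str, required_scope: str) -> bool:
        """Reject tampered, expired, or out-of-scope credentials."""
        body, sig = token.rsplit(".", 1)
        expected = hmac.new(SIGNING_KEY, body.encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False
        claims = json.loads(base64.urlsafe_b64decode(body))
        return claims["exp"] > time.time() and required_scope in claims["scopes"]

    key = mint_agent_key("billing-agent", ["read:invoices"])
    assert verify_agent_key(key, "read:invoices")       # in scope, unexpired
    assert not verify_agent_key(key, "write:invoices")  # scope not granted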

Why This Shift Matters

As AI becomes more embedded across workflows, security can’t rely on legacy assumptions. Agents don’t forget. They don’t fatigue. And if misused, they don’t ask for forgiveness.

Zero Trust for agents means:

  • Deny by default: Agents don’t start with access; they earn it
  • Human-in-the-loop escalation: Sensitive permissions require explicit approvals
  • Use-case scoping: Access is contextual, not universal
  • Observability by design: Every request, response, and output is logged, analyzed, and auditable

This approach ensures that the speed at which AI operates doesn’t become a liability.
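The observability item, in particular, is cheap to build in from day one. Below is a minimal sketch of an audit wrapper that records a structured event for every tool request and response; the field names and the print-based sink are illustrative assumptions, chosen to keep the example self-contained.

    import functools
    import json
    import time
    import uuid

    def audited(tool_fn):
        """Wrap an agent tool so every call emits request/response events."""
        @functools.wraps(tool_fn)
        def wrapper(agent_id, *args, **kwargs):
            event_id = str(uuid.uuid4())
            # In production this would feed an append-only audit log,
            # not stdout; printing keeps the sketch runnable as-is.
            print(json.dumps({"event": event_id, "phase": "request",
                              "agent": agent_id, "tool": tool_fn.__name__,
                              "args": repr(args), "ts": time.time()}))
            result = tool_fn(*args, **kwargs)
            print(json.dumps({"event": event_id, "phase": "response",
                              "result": repr(result)[:200],
                              "ts": time.time()}))
            return result
        return wrapper

    @audited
    def read_invoice(invoice_id: str) -> dict:
        return {"invoice_id": invoice_id, "amount": 42.0}

    read_invoice("billing-agent", "INV-1001")  # emits two audit events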

A New Perimeter for a New Era

This is where the concept of the inference perimeter comes into play: creating dynamic guardrails around how AI agents interact with systems, data, and users in real time. Just as we once redefined network perimeters for the cloud, we now need to redefine behavioral perimeters for autonomous systems.

The bottom line is that Zero Trust needs to grow up, because AI agents are here, they’re powerful, and they don’t come with good instincts. Security teams must build systems that assume every agent interaction could go wrong, and provide the oversight to make sure it doesn’t.
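As a rough sketch of what an inference perimeter can look like in code, consider a chain of guardrail checks that every agent payload must pass in both directions. The two checks here (a toy PII pattern and a bulk-export heuristic) are placeholder assumptions; a real perimeter would use far richer detectors.

    import re
    from typing import Callable, Optional

    # A check inspects a payload and returns a violation reason, or None.
    Check = Callable[[str], Optional[str]]

    def deny_pii(text: str) -> Optional[str]:
        # Toy pattern: a US SSN shape; real detectors go much further.
        if re.search(r"\b\d{3}-\d{2}-\d{4}\b", text):
            return "possible PII in payload"
        return None

    def deny_bulk_export(text: str) -> Optional[str]:
        if "select *" in text.lower():
            return "unbounded export attempt"
        return None

    PERIMETER: list = [deny_pii, deny_bulk_export]

    def enforce(direction: str, payload: str) -> str:
        """Run every guardrail over a payload crossing the perimeter."""
        for check in PERIMETER:
            reason = check(payload)
            if reason:
                raise PermissionError(f"{direction} blocked: {reason}")
        return payload

    enforce("outbound", "summarize last month's invoices")  # allowed
    # enforce("outbound", "select * from customers")        # would raise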

To learn more about our Inference Platform, arrange a callback.
