AI Inference Security Project
27 Jun 2025

Point Solutions, Platforms, and the Agentic AI Security Future

A Conversation with CalypsoAI CTO, James White, and Anthony Candeias, CISO, Professor, and Advisor

AI didn’t arrive quietly. It came barreling into the enterprise, not through the front door, but through tools employees were already using: Google Workspace, Microsoft 365, Jira, Slack. One day, it was a curiosity. Then it was embedded across the business. And while adoption happened fast, security has lagged behind.

“AI is here,” says James White, CTO of CalypsoAI. “It’s one of those ten-year overnight successes. But security hasn’t caught up.”

In a conversation with Anthony Candeias, CISO, professor, and advisor, the two discuss how enterprises can catch up and, more importantly, how they can build a foundation that won’t buckle as the pace of innovation accelerates. Their exchange isn’t a debate; it’s a recognition that the rules have changed. Watch the full discussion below.

The Age of AI Platforms and the Case for Visibility

Anthony frames the dilemma every CISO is facing: point solutions or platforms? The technologist in him wants best-in-class tools. But the CISO in him? He wants leverage. “Give me visibility. Give me governance. Give me fewer vendors to manage,” he says. “I’ll take a platform.”

But as James points out, the ground is still shifting. AI threats are evolving faster than any product roadmap. Is it wise to go all-in on a platform in a space that hasn’t settled?

Anthony leans in: “Even if the defenses aren’t perfect, I need to see what’s happening. I’d rather have visibility than fly blind.” It’s a sentiment echoed across security teams today.

Fighting Fire with Fire

AI isn’t just powering productivity, it’s powering attacks. Anthony has seen it firsthand: bots scoring perfect CAPTCHA results, mimicking human behavior better than humans. The message is clear: AI has crossed a threshold. It’s no longer just a tool to defend. It’s an adversary to defend against. “Fight fire with fire,” Anthony says. “We’re now using AI to secure AI.”

It’s a shift that requires more than rules-based systems or static filters. It demands real-time defense to block threats like prompt injection, data leaks, and harmful outputs at the source.
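To make the idea of real-time defense concrete, here is a minimal, hypothetical sketch of an inference-time guard that screens prompts on the way in and redacts obvious secret material on the way out. The pattern names, regexes, and `guarded_call` wrapper are illustrative assumptions, not CalypsoAI's implementation; production systems rely on model-based classifiers rather than static regexes, which attackers can trivially evade.

```python
import re

# Illustrative patterns only (assumed for this sketch); real defenses use
# trained classifiers, not static regexes.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"reveal (the |your )?system prompt",
]
SECRET_PATTERNS = [
    r"\bAKIA[0-9A-Z]{16}\b",                    # AWS access key ID shape
    r"-----BEGIN [A-Z ]*PRIVATE KEY-----",      # PEM private key header
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks like an injection attempt."""
    lowered = prompt.lower()
    return any(re.search(pat, lowered) for pat in INJECTION_PATTERNS)

def screen_output(text: str) -> str:
    """Redact obvious secret material before a response leaves the boundary."""
    for pat in SECRET_PATTERNS:
        text = re.sub(pat, "[REDACTED]", text)
    return text

def guarded_call(model, prompt: str) -> str:
    """Wrap a model call with input screening and output redaction."""
    if screen_prompt(prompt):
        raise ValueError("blocked: possible prompt injection")
    return screen_output(model(prompt))
```

The point of the wrapper shape is that the checks sit at the source, in the request path itself, rather than in an after-the-fact log review.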

Securing Agents in the Enterprise

During this conversation, James and Anthony also chat about agents and how AI systems can now think, plan, and act. Not just chatbots or copilots, but autonomous actors capable of decision-making.

Here, the line between automation and autonomy blurs. What permissions should agents have? How do you know when they’re acting appropriately, or when they’ve gone off-script?

“Agents are like new hires who never sleep,” James quips. “Except they don’t ask for permission unless you make them.” That’s why the old ways of managing identity and access simply don’t apply. They need their own governance models. Their own security layers.

Anthony points to the Model Context Protocol (MCP) as one approach, limiting agents to a narrow range of defined functions. But even that has its risks. James shares a scenario where an “MCP-compliant” agent can still overload a database with a perfectly valid request. The lesson? There’s no such thing as a safe default.
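The overload scenario above can be sketched in code. The `ToolGuard` class below is a hypothetical illustration (not the MCP specification or any real library): it allowlists which tools an agent may call, then adds a simple rate limit, because a request can be perfectly schema-valid and still hammer a database if nothing bounds how often it fires.

```python
import time

class ToolGuard:
    """Illustrative wrapper: allowlist an agent's tools, then rate-limit them.

    Validity checks alone don't prevent resource exhaustion; a compliant
    agent issuing valid calls in a tight loop still needs a budget.
    """

    def __init__(self, allowed_tools, max_calls_per_minute=10):
        self.allowed = set(allowed_tools)
        self.max_calls = max_calls_per_minute
        self.call_times = []  # timestamps of recent invocations

    def invoke(self, tool_name, func, *args, **kwargs):
        # First gate: is this tool permitted at all?
        if tool_name not in self.allowed:
            raise PermissionError(f"tool not allowlisted: {tool_name}")
        # Second gate: even allowed tools get a call budget.
        now = time.monotonic()
        self.call_times = [t for t in self.call_times if now - t < 60]
        if len(self.call_times) >= self.max_calls:
            raise RuntimeError(f"rate limit exceeded for {tool_name}")
        self.call_times.append(now)
        return func(*args, **kwargs)
```

A guard like this would sit between the agent and its tools; the second gate is what the "safe default" of schema compliance leaves out.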

Agents Need Bodyguards

As AI moves from the cloud to the edge—running on mobile devices, embedded in workflows, and acting autonomously—the attack surface becomes infinite. You’re not just defending infrastructure anymore; you’re defending decision-makers. And like any powerful operator in a high-risk environment, AI agents need a bodyguard.

“Celebrities don’t go out without one,” James notes. “Not to control them—but to protect them from threats and from doing something reckless.” The same applies to AI. Enterprises need a way to shield agents from adversarial inputs while preventing them from making costly mistakes.

Governing the Ungovernable

Can governance keep up with AI? James is skeptical. “It’s moving too fast. GRC teams don’t have time to catch their breath.”

But Anthony is more optimistic. Regulations like the EU AI Act aren’t trying to prescribe exact technical methods. Instead, they ask the right questions: How do you address bias? What happens when something goes wrong? “We don’t need more rules,” he says. “We need clearer boundaries.”

James agrees, offering a metaphor: “GRC shouldn’t try to write every rule of the game. Just define the out-of-bounds lines and then let people play.”

Securing AI Isn’t Optional Anymore

The future isn’t just fast, it’s fragmented. Enterprises are building dozens of AI apps across hundreds of use cases, layered on top of evolving model architectures. And the risk? A breach now goes beyond leaked data to leaked capabilities, lost trust, and irreversible reputational damage.

That’s why Anthony encourages CISOs to think beyond technical risk and quantify the total cost of a breach. Not just in dollars, but in stock price volatility, customer churn, lost IP, and competitive disadvantage. As AI becomes integral to how companies operate and differentiate, the blast radius of a breach extends well beyond the security org.

In this new reality, security isn’t a checkbox, it’s a condition for scaling responsibly. The question is no longer if you’ll invest in AI security, but how soon you’ll realize you can’t afford not to.

Hear it Firsthand

Watch the full conversation between James White and Anthony Candeias on the future of AI security.

To learn more about our Inference Platform, arrange a callback.
