Blog
21 Apr 2025

As MCP Accelerates Agentic AI, Are Enterprises Ready for the Risks?

By James White, CTO, CalypsoAI

Forget A and I, the most talked-about letters in tech right now may well be M, C and P. Model Context Protocol, open-sourced late last year by Anthropic, is quickly becoming the standard for AI models to interact with software tools, data and interfaces.

A new, updated version of the MCP spec offers extra capability and interoperability and, significantly, Microsoft and OpenAI are rowing in behind it. Sam Altman, chief executive of OpenAI, posted on X: “People love MCP, and we are excited to add support across our products.” Microsoft, meanwhile, describes MCP as “a universal USB-C connector for AI”, an imperfect but eye-catching analogy.

Their support lends momentum to MCP and Anthropic’s view of an AI world of ‘workflows’: systems where LLMs and tools are orchestrated through defined code paths. The number of MCP libraries and servers on GitHub is rising rapidly, with contributions from big-name organizations and interactions ramping up to meaningful numbers of ‘likes’ and ‘forks’, demonstrating widespread interest and engagement.

If MCP offers structure and a defined approach, the competing view of the world is focused on true agent-based – agentic – systems, where LLMs dynamically direct their own processes and tool usage, maintaining autonomous control over how they accomplish tasks. Unlike the deliberate constraints of MCP, agentic systems hold out the promise of exponential potential – beyond current human capacity – but at the expense of greater security needs.

The Rise of the Agents

So, what use cases will enterprises trust agents to undertake? It’s useful to consider the household chores you would trust to a robot: Ironing? Probably not, given the fire risk. Washing dishes? Too much damaged dishware. But vacuuming? Grass-cutting? Certainly. 

Right now, CISOs are thinking the same way about agents in their enterprises: what's the low-hanging fruit, the equivalent of cutting the grass and vacuuming the floor? What has a low enterprise danger level but mid-to-high return on investment, in terms of efficiency and benefit? And, critically, what tools and access are required to get the agent working? 

This is where the level of danger involved – and the corresponding controls – come into focus. OWASP is a good starting point for general guidance, but figuring out which controls safeguard company- or domain-specific use cases is much more complex. For example, simply restricting an agent’s database access to ‘read-only’ reduces the risk of losing data but won’t prevent an accidental denial-of-service on the database, which can dramatically damage a business.
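To make the distinction concrete, here is a minimal Python sketch of a wrapper that layers both controls. The class name and the `execute_query` backend callable are hypothetical illustrations, not part of any MCP library: the read-only filter addresses data loss, while a separate query budget addresses the accidental denial-of-service case that read-only access alone does nothing to prevent.

```python
import time
from collections import deque

class ReadOnlyRateLimitedDB:
    """Hypothetical wrapper around a database handle for agent use.

    Two independent controls: refuse non-read statements, and cap
    the number of queries per sliding time window so a runaway agent
    cannot accidentally flood the database.
    """

    def __init__(self, execute_query, max_queries=10, window_seconds=1.0):
        self._execute = execute_query   # backend callable (assumed)
        self._max = max_queries
        self._window = window_seconds
        self._timestamps = deque()

    def query(self, sql):
        # Control 1: read-only – refuse anything that is not a SELECT.
        if not sql.lstrip().lower().startswith("select"):
            raise PermissionError("agent is restricted to read-only queries")
        # Control 2: query budget per sliding window.
        now = time.monotonic()
        while self._timestamps and now - self._timestamps[0] > self._window:
            self._timestamps.popleft()
        if len(self._timestamps) >= self._max:
            raise RuntimeError("query budget exceeded: possible runaway agent")
        self._timestamps.append(now)
        return self._execute(sql)
```

Neither control substitutes for the other: the first limits what the agent can do, the second limits how fast it can do it.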

With the rise of MCP and the ability to create customized agents for very specific tasks, it is not far-fetched to predict there will soon be millions of agents accessing tools and data inside organizations, performing billions of tasks and learning from each interaction. This represents a new attack vector, one that is directly correlated to the number of tools that are linked to agents. 

Understanding the Agentic Attack Vector

If an organization has MCP tools that interact with database systems, now all its databases are potentially vulnerable. If the tools interact with email, any employee’s inbox is potentially wired up to a bad actor agent that can relentlessly attempt to break into the organization by phishing.

While each tool presents dangers in its own right, the combination effect is exponentially dangerous. For example, a tool that has access to both email and database systems can potentially exfiltrate private data via email, or inject content to the database from email. 
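One way to reason about this combination effect is to audit an agent’s granted tools against known-dangerous capability pairings rather than vetting each tool in isolation. The tool names, capability tags and pairings below are invented for illustration:

```python
# Hypothetical capability tags for each tool an agent can reach.
TOOL_CAPABILITIES = {
    "sql_reader":  {"db_read"},
    "sql_writer":  {"db_write"},
    "mail_sender": {"email_send"},
    "mail_reader": {"email_read"},
}

# Capability pairs that, combined in one agent, enable an attack path.
DANGEROUS_COMBINATIONS = {
    frozenset({"db_read", "email_send"}):  "data exfiltration via email",
    frozenset({"email_read", "db_write"}): "content injection from email into database",
}

def audit_agent_tools(tool_names):
    """Return the attack paths enabled by this agent's tool set."""
    granted = set()
    for name in tool_names:
        granted |= TOOL_CAPABILITIES.get(name, set())
    return [risk for combo, risk in DANGEROUS_COMBINATIONS.items()
            if combo <= granted]
```

Note that `sql_reader` and `mail_sender` are each defensible on their own; the audit flags the exfiltration path only when both are granted to the same agent.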

In addition, there is the risk that insiders may accidentally misuse the agent technology, potentially causing even more damage, since they are already inside the organization’s defenses. An outsider has to figure out how to get inside the fort before starting a fire, but an insider can potentially torch a whole section before anyone notices.

It’s clear, therefore, that security has to stand alongside performance and cost when assessing workplace AI systems and agents. A high-performing system that is vulnerable to attack or exploitation has no place in an enterprise environment.

Understanding the agent threat is the first step to dealing with it. A mechanism to measure the threat level of any given agent is needed, to inform decision-making on whether action is required. If the security response is overcooked, it can stifle innovation; undercooked and it will fail to effectively reduce the threat. Measurement helps companies reach the Goldilocks level of not-too-hot, not-too-cold, and highlights where to fortify controls.

The Agentic Warfare Defense

In these circumstances, the best way to effectively scale up defenses is to properly understand the type and complexity of the attack, and employ appropriate measures. Traditional tools such as manual red teaming and metrics such as Attack Success Rate – which oversimplifies by treating all attacks as equal regardless of their potential blast radius – are inadequate to deal with the threats posed by agents that have the ability to learn and adapt. 
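The gap between a raw Attack Success Rate and a blast-radius-aware measure can be sketched in a few lines of Python. The 0–10 impact scale and the sample attack records are invented for illustration:

```python
def naive_asr(attacks):
    """Plain Attack Success Rate: fraction of attempts that succeeded."""
    return sum(1 for a in attacks if a["succeeded"]) / len(attacks)

def weighted_risk_score(attacks):
    """Weight each success by its blast radius (0-10 impact scale),
    so one catastrophic break outweighs many trivial ones."""
    total_weight = sum(a["blast_radius"] for a in attacks)
    hit_weight = sum(a["blast_radius"] for a in attacks if a["succeeded"])
    return hit_weight / total_weight

# Illustrative red-team results: two minor breaks, two major holds.
attacks = [
    {"name": "prompt leak",       "succeeded": True,  "blast_radius": 1},
    {"name": "jailbreak",         "succeeded": True,  "blast_radius": 2},
    {"name": "data exfiltration", "succeeded": False, "blast_radius": 9},
    {"name": "db wipe",           "succeeded": False, "blast_radius": 10},
]
```

On this data the naive ASR reports that half of all attacks succeed, while the weighted score shows the high-impact attacks were repelled – two very different pictures of the same test run.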

The solution is using intelligent, customizable agents to simulate adversarial interactions and automate red-teaming of AI systems for weaknesses and irregular activity. Techniques such as agentic warfare – effectively equipping enterprises with a virtual army of security agents – allow CISOs and their organizations to quantify the threat level and act accordingly. 

Once the appropriate controls are applied, it’s important that enterprises regularly reevaluate to make sure they have achieved their aims and the threat exposure has reduced. Continuous monitoring is, as always, a critical factor in maintaining a hard-won security posture. 

As MCP and agentic AI gain ground, a proactive approach offers the best chance of accelerating AI for its intended purposes. This is the choice enterprises face in the AI future: embrace agents and carefully monitor them, or run the risk of them running unchecked inside the perimeter.

To learn more about our Inference Platform, arrange a callback.
