DeepSeek; ChatGPT Deep Research; the $500 billion US Stargate plan: it can be difficult to keep up with the incredible momentum now behind AI. Once seen simply as an engine for Spotify recommendations and Snapchat filters, new-era generative and agentic AI are powering incredible use cases from rapid research to accelerated diagnostics and drug discovery.
These trends will certainly accelerate as the agentic AI era beds in, with agents taking human instructions to work autonomously on tasks. Indeed, Boston Consulting Group’s recent AI Radar survey of 1,803 C-level execs found that 31% plan to allocate over $25 million to AI in 2025.
Let’s run those numbers: that’s almost 600 companies saying they will spend at least $25 million each, a cumulative $15 billion, on AI this year alone. If anything, that looks conservative.
Three-quarters of the execs surveyed by BCG – across 12 sectors in 19 markets – ranked AI as a top-three strategic priority. Meanwhile, 72% of the 1,363 participants in the 2024 McKinsey Global Survey on AI said they had already adopted AI in at least one business function; 50% had adopted AI in two or more business functions.
THE DORMANT THREAT
In this dynamic world, however, each new use case opens a fresh attack surface for threat actors; what was safe yesterday is not safe tomorrow. Although threats are already in evidence, security considerations have not kept pace with the corporate clamour for AI applications.
We speak to many CISOs and some are nonchalant, almost to the point of apathy, about the risk profile they’re in, even as they embed AI into their organizations. Unsurprisingly, that attitude does change when it comes to highly regulated industries.
We’re all familiar with the well-publicized failures in early AI systems: the delivery company chatbot that swore and criticized the company; the news aggregators that invented stories; the hallucinations of legal precedents that made it to court. Even Amazon was forced to fix its AI shopping assistant, Rufus, after it allowed free access to its underlying LLM. Alongside brand damage, Amazon was on the hook for the cost of processing mischievous prompts.
At the most serious end of the scale, Character.AI is facing a civil lawsuit after a 14 year-old took his own life after long sessions with one of its role-playing chatbots. The chatbot allegedly asked the teen if he had a plan to kill himself and appeared to encourage him to do so.
By and large, those incidents are treated as unfortunate anomalies. On the contrary, they should be seen as fault lines, warnings of what can go wrong with AI, even without malicious intervention. With malice, the potential for damage is multiplied.
TREMOR DETECTION
In the security community, we already see evidence of active threats, akin to the tremors that come before an eruption. OpenAI’s latest threat report describes how a suspected China-based adversary, SweetSpecter, attempted to use ChatGPT to support an offensive cyber operation; its activity included asking the model about vulnerabilities in various applications and seeking ‘good’ names for email attachments to avoid them being blocked.
At the same time, the bad actor sent phishing emails with malware-infected attachments to corporate and personal accounts of OpenAI employees and governments worldwide. That may sound like an issue for OpenAI but it should be on the radar of anyone building applications on top of a foundation model – and the countless companies using model-based applications.
The latest Threat Insights Report from HP Wolf Security, meanwhile, details the first detected case of a malware campaign using scripts that are “highly likely” to have been written with the help of GenAI. Clues in the scripts’ structure, comments and choice of function names and variables suggest the threat actor used GenAI to create the malware.
As the HP Wolf team notes: “The activity shows GenAI is accelerating attacks and lowering the bar for cybercriminals to infect endpoints.” The same capability of AI that can be used for good can – and will – be abused by threat actors for devastating harm.
AN EXPANDING ATTACK SURFACE
It is clear that AI is set to be a dominant sector going forward. Global venture funding for AI start-ups reached $131.5 billion last year, a 52% increase from 2023. In the fourth quarter of 2024, more than half of global VC funding by value was deployed in AI companies.
These new companies are forward-focused, continually iterating and pushing out new products. On the other side of the coin, organizations across all sectors are under pressure to pursue AI’s promise. In a world where nothing is static, standard or predictable, the attack surface is continually expanding, opening gaps in security posture that must be addressed.
Even organizations that are not actively adopting AI need to be aware of the dormant threat. As we have seen, AI systems can be used to turbocharge traditional threats such as phishing, brute force attacks, dynamic malware and denial of service attacks.
The emerging AI security landscape adds in static threats, dynamic threats, operational attacks, agentic warfare and agentic defense, predominantly centered on the inference layer. Direct and indirect prompt injection can manipulate AI applications, raising security and cost concerns.
We already know all the world class models available today can be ‘broken’ to some extent. As new releases arrive and AI adoption increases, vigilance is essential, not optional.
AVOIDING AN ERUPTION
Since AI systems are dynamic by nature, new vulnerabilities can – and will – emerge unexpectedly. Without dedicated, multi-layered security protocols, even the most robust systems can falter and fail. To protect their AI investment, organizations must combine defensive security capabilities with offensive measures to proactively identify vulnerabilities.
Rather than relying on the Fire Service after the fact, organizations need the equipment to detect a future fire. Pen-testing and red teaming of AI models and applications will allow them to establish perimeters around their systems, ensuring that dormant threat never becomes active.
The message is clear: even if AI isn’t on your agenda right now, threat actors are already forging forward and your existing security concerns are impacted. With the future of AI hanging on our ability to secure it, this is an issue for the many, not the few.
Securing AI is no longer optional—it’s essential. Get in touch with the CalypsoAI team HERE to learn how we can help you safeguard your AI investments and stay ahead of emerging threats.