Introduction
Developments in artificial intelligence (AI) are happening so fast that standing still on security is not an option, and merely keeping pace with emerging threats already leaves you behind. A recent CalypsoAI Security Market Survey highlighted a concerning disconnect in the AI security sector: while 80% of IT security leaders acknowledged the importance of threat detection, only 37% expressed "extreme concern" about their models being compromised. This complacency, in the face of an expanding attack surface, multiplying threat vectors, and the deepening complexity of novel threats, is alarming. The evolution of AI technologies brings a complex mix of risks and vulnerabilities that organizations must navigate as they appear, ready or not. The sections below discuss what this toxic mix means for organizations deploying large language models (LLMs) and other generative AI (GenAI) models.

The Expanded Attack Surface
The integration of AI-dependent tools across myriad teams and departments in an organization has introduced just as many new external vulnerabilities, presenting unprotected targets to cybercriminals, as well as internal headaches. For example, "shadow AI" tools deployed and used without the knowledge of the security or IT teams can provide entry points into a network, and inconsistently applied permissioning controls can give users access to information they should not see. The rapid adoption of technologies like LLMs and GenAI models further complicates this landscape.

The Role of LLMs
As if LLMs and other GenAI models had not already added enough layers of complexity and urgency to enterprise-level security considerations, new models and new iterations of existing models are being released at a breathless pace, and their user interfaces continue to get simpler. While these updates are generally positive and make the models more accessible to a broader audience, such rapid change also brings:

- Increased Amateur Coding Risks: Individuals with limited technical experience can now generate application code, often laden with flaws and vulnerabilities, thereby unintentionally expanding the attack surface.
- Expanded Access for Malicious Actors: People with malicious intent and only moderate skill can leverage the models' extraordinary capabilities, using "jailbreak" and other adversarial attacks to bypass internal guardrails and hijack or compromise a model, posing a significant security threat to systems, data, and the models themselves.
- Unconstrained Generative Capabilities: LLMs operate without the limitations of human imagination, conscience, or ethics, creating patterns, developing solutions, and making decisions that can seem wonderfully inventive. However, those same patterns, solutions, and decisions can just as easily be impractical, illogical, or dangerous. Without strong technical controls applied to the models, threat actors can misappropriate them and instruct them to generate novel, difficult-to-anticipate attacks.
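To make the first risk above concrete, here is a minimal, hypothetical sketch of the kind of flaw that frequently appears in hastily generated application code: building a SQL query by interpolating user input directly into the query string, which opens the door to SQL injection. The function names and in-memory database are illustrative, not taken from any real codebase.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable pattern often seen in generated code: user input is
    # interpolated straight into the SQL string.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input as data, not SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demo with an in-memory database and a classic injection payload.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
leaked = find_user_unsafe(conn, payload)   # the OR clause matches every row
blocked = find_user_safe(conn, payload)    # no user is literally named this
print(len(leaked), len(blocked))           # 2 0
```

A code-review or static-analysis gate that catches patterns like the first function is one of the simpler technical controls an organization can apply before AI-generated code reaches production.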