Skip to main content

Think you can outsmart AI? Announcing ‘Behind The Mask’ – Our all-new cybercrime role-playing game | Play Now

Organizations have been using artificial intelligence (AI)-dependent tools, from spam filters to  robots, for a while now, and natural language processing (NLP) models, also called large language models (LLMs) are the newest kid on the block. They can increase productivity, save time and money, and–let’s face it–be big fun. However, those of us who wear the white hats aren’t the only ones who’ve discovered those traits. The cyber criminals have, too. 

Cybersecurity has been part of the mainstream lexicon and daily life for more than 20 years, and yet organizations still get hit by attacks on their legacy systems: phishing emails, unpatched infrastructure portals, obsolete and now-porous software that isn’t in use, but remains live somewhere on the network. Organizations that are willing to embrace generative AI (GenAI) models, such as LLMs, must understand that these new technologies cannot be protected by legacy safeguards.

That’s not to say existing security measures, such as multi-factor authentication, user permissions, regular patching and updating, etc., can be forgotten. They definitely cannot. But new AI security tools must be integrated into or on top of the existing security apparatus. We’ve put together this list showing the 10 key threats and the differences between legacy cybersecurity measures and those tailored for AI-dependent systems. 

Nature of the threats

Legacy system: Focuses on threats like viruses, malware, phishing, and  distributed denial of service (DDoS) attacks

AI-dependent system: Focuses on adversarial attacks targeting AI-dependent models (e.g., poisoned data attacks, model inversion attacks, prompt injection/jailbreak attacks)

Scale of data

Legacy system: Concerned with safeguarding databases and user information

AI-dependent system: Deals with vast datasets used for training, which can expose sensitive information if not handled correctly (e.g., data leakage)

Integrity of data

Legacy system: Emphasizes ensuring data isn’t stolen or tampered with

AI-dependent system: Focuses on ensuring data used to train the model wasn’t biased, which could result in unfair or unreliable AI predictions, or corrupted, which could result in bad decisions

Dynamic updates

Legacy system: Patches and updates are periodically applied

AI-dependent system: Models may be retrained and updated more frequently, potentially making them “moving targets”

Complexity of systems

Legacy system: Centers on securing well-understood IT architectures

AI-dependent system: Has to deal with the black-box nature of some AI-dependent models, making vulnerabilities harder to predict

Transparency and explainability

Legacy system: Less emphasis on understanding the exact functioning of every component

AI-dependent system: Needs measures to ensure AI-dependent models are interpretable and their decisions can be explained, thus making anomalies and vulnerabilities easier to see

Attack surface

Legacy system: Focuses on networks, endpoints, and servers

AI-dependent system: Introduces new surfaces like training data sources, model parameters, and inference inputs

Insider threats

Legacy system: Emphasis on preventing unauthorized access

AI-dependent system: Concerned about those with authorized access introducing biases, backdoors, or malicious code

Continual learning and adaptation

Legacy system: Security measures are largely static until updated

AI-dependent system: AI can be used to develop self-learning cybersecurity systems that adapt in real-time to new threats

Interconnected threats

Legacy system: Systems are isolated, and breaches in one segment might not affect others

AI-dependent system: AI-dependent models, especially those deployed in the cloud, could be interconnected, and a vulnerability in one model or dataset might propagate to others

To be clear, it’s not just up to the cybersecurity team to protect the organization. A successful, comprehensive cyber  and AI security program must be a cross-functional, top-to-bottom effort that includes employee education about what the risks are, updated policies that address issues such as if, when, and how employees are allowed to access in-house AI security tools from their personal digital devices, such as smartphones, laptops, and tablets, and other measures we’ve addressed in previous blogs. 

No digital system will ever be an impenetrable fortress; there will always be some vulnerability somewhere that the humans and the solutions they employ have missed. While it can be more challenging to detect and identify compromised AI-dependent models than it might be to detect flaws in traditional networks, the potential for significant damage is just as great. Organizations that acknowledge and prepare to face existing and emerging threats are going to be better able to defend against and respond to malicious activities directed toward them.