The CEO’s long-time executive assistant. Or maybe that summer intern in Accounting. Or any other human working in your organization.

That’s right—the most significant risk to your organization is someone on the payroll (and the network). 

That doesn’t mean it’s time to go all-out John le Carré to find the mole. The person who could put the organization at risk simply by using the large language model (LLM) or natural language processing (NLP) model available to them probably doesn’t have an ulterior motive. In fact, their motive could be quite admirable: to streamline their productivity and enhance their work product. They don’t know they’ve done, or are continuing to do, anything wrong.

So perhaps the real culprit here is a cybersecurity program that hasn’t addressed AI security concerns, such as:

Educating employees about the risks they face and the risks they invite when working with NLP models

Establishing protocols to be followed when using NLP models    

Installing guardrails to ensure prompts don’t contain company or other private data (see the sketch following this list)

Installing guardrails to ensure responses don’t contain malicious code, damaging content, or content that violates acceptable use policies

Integrating AI security-specific tools into the broader cybersecurity infrastructure
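
What an initial prompt-side guardrail looks like will vary with the stack, but the idea is simple: a thin screening layer inspects every outbound prompt for obvious markers of sensitive data before it leaves the network. The Python sketch below is a minimal illustration of that idea; the function name, the regex patterns, and the redact-and-flag policy are hypothetical choices made for the example, and a production guardrail would rely on far more robust PII detection tied into logging and policy enforcement.

```python
# Minimal, illustrative prompt guardrail: scan outbound text for obvious
# patterns of sensitive data before it ever reaches a third-party model.
# The pattern list and the redact-and-flag policy are assumptions for this
# sketch, not a complete PII detection strategy.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # e.g., 123-45-6789
    "card_or_account": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # loose card/account match
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, sanitized_prompt).

    If a pattern is found, the match is redacted and the prompt is flagged
    so a policy layer can decide whether to block, warn, or just log.
    """
    sanitized = prompt
    hits = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(sanitized):
            hits.append(label)
            sanitized = pattern.sub(f"[REDACTED {label.upper()}]", sanitized)
    return (len(hits) == 0, sanitized)

if __name__ == "__main__":
    allowed, cleaned = screen_prompt(
        "Summarize the complaint from John Doe, SSN 123-45-6789, account 4111 1111 1111 1111."
    )
    print(allowed)   # False -- sensitive patterns were detected
    print(cleaned)   # SSN and account number replaced with [REDACTED ...] markers
```

Even something this simple catches the careless copy-paste of a customer record into a chat window; the harder work is deciding what the policy layer does when the flag is raised.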

The first two steps can be implemented rapidly across the organization, at least as a start. It falls to the Chief Information Security Officer (CISO) to bring together the teams responsible for employee education, training, and cybersecurity to collaborate on identifying and addressing the most pressing security vulnerabilities the organization faces. The CISO must then ensure that information reaches every person in the company in the short term, while taking steps to establish a more in-depth, long-term education plan.

A financial services or healthcare organization, for example, would likely consider leakage or sharing of personal data to be its most critical vulnerability: customer names, Social Security numbers, and account numbers in the first case; patient identities and medical issues or outcomes in the second. Should such information be sent outside the organization in a prompt to an LLM, such as ChatGPT, it could become part of the data used to train the next iteration of the model. Even if it is never used as training data, the organization could face disastrous reputational damage and potential legal liability should it become publicly known that the data was shared at all.

An engineering, software, or other tech company might be worried about proprietary content, such as source code, going out of the organization via a prompt to a model, and even more concerned about what sort of code might come back in the response. Suppose a developer asks a model to review or write code for a specific issue, doesn’t know what errors or bugs to look for in the returned code or, worse, doesn’t realize they should review it at all, and uploads it to the code repository with no one the wiser. Trouble could be on the horizon. Even if the code isn’t malicious or buggy, it could simply be inferior, and once it’s embedded in the overall web of software interdependencies, it will be very difficult and time-consuming to extract or fix. Meanwhile, it could affect the product and, ultimately, the company’s reputation. If the AI-generated code is malicious, that’s a whole other world of hurt. It could be sleeper software that waits for a trigger to execute, or code that takes over instantly, copying data and sending it back to the mothership to be pirated, ransomed, or sold to the highest bidder. And the damage might not be known for days or weeks. Or maybe months.
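
One lightweight backstop for that scenario is a response-side check that flags model-generated code for mandatory human review before it ever reaches the repository. The sketch below is a minimal illustration in Python; the function name and the pattern list are assumptions chosen for the example, and matching a handful of constructs is a tripwire, not a substitute for code review or real static analysis.

```python
# Minimal, illustrative response-side check: flag model-generated code for
# mandatory human review if it contains constructs that commonly show up in
# malicious or risky snippets. The pattern list is an assumption for this
# sketch; it is a tripwire, not a replacement for review or static analysis.
import re

RISKY_CONSTRUCTS = {
    "dynamic execution": re.compile(r"\b(eval|exec)\s*\("),
    "shell access": re.compile(r"\b(os\.system|subprocess\.(run|Popen|call))\s*\("),
    "outbound network call": re.compile(r"\b(requests\.(get|post)|urllib\.request|socket\.socket)\b"),
    "obfuscated payload": re.compile(r"\bbase64\.b64decode\s*\("),
}

def review_generated_code(code: str) -> list[str]:
    """Return human-readable findings; an empty list means nothing was flagged."""
    findings = []
    for label, pattern in RISKY_CONSTRUCTS.items():
        for match in pattern.finditer(code):
            line_no = code.count("\n", 0, match.start()) + 1
            token = match.group(0).rstrip("( ").strip()
            findings.append(f"line {line_no}: possible {label}: {token}")
    return findings

if __name__ == "__main__":
    # Example snippet a model might return; it is scanned as text, never executed.
    snippet = (
        "import base64, subprocess\n"
        "payload = base64.b64decode(blob)\n"
        "subprocess.run(payload, shell=True)\n"
    )
    for finding in review_generated_code(snippet):
        print(finding)
    # Anything flagged here goes to a human reviewer before it reaches the repo.
```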

The remaining steps require the CISO to engage in considerably more persuasion and more planning, although the time frame might be just as compressed. Initiating them requires buy-in from the C-suite for funding and other resources, and buy-in from the other functions across the organization, such as HR, Operations, Marketing, Facilities, Legal, Compliance, Finance, and Engineering, to ensure the measures are implemented.

With so many LLMs, NLP models, and other generative artificial intelligence (GenAI) models on the market, and more appearing every week, it is not far-fetched to consider that each corporate function could be using its own model. Finance could be using BloombergGPT or FinGPT; Legal could be using Harvey; Marketing could be using Midjourney; and Engineering could be using Copilot. And everyone in the company could have access to ChatGPT, BERT, or any of the other widely available models. Each team faces unique risks as it uses the models, and each model presents unique characteristics, features, and, yes, vulnerabilities that must be addressed.

Every AI model introduced to an organization’s digital security infrastructure presents an integration challenge, a new attack surface, and untold other complications, depending on the corporate environment, system configuration, and industry standards. Large, open-source models or those that rely in part on open-source components present different opportunities and risks for the individual user and the administrators of the system they run on, as do smaller, focused models that rely on private or proprietary data. And all of them provide potential opportunities for bad actors to do their thing. 

Organizations—CISOs, if we want to be absolutely clear—must take a top-to-bottom, multi-faceted, cross-functional approach to identify, design, and implement appropriate measures that will ensure the privacy, safety, and security of personal and corporate data when users work with GenAI and NLP models. The effort must be ongoing and include continuous monitoring for new threats, which seem to emerge daily.

Taking these steps, when combined with ensuring your personnel—from the boardroom to the mailroom—have a thorough understanding of the potential risks, will go a long way toward ensuring the security of your organization’s people, models, and data. And your peace of mind.