- Be Concise and Clear: Avoid including unnecessary details that could reveal confidential information. These could include personal or project names, dates, destinations, and other particulars.
- Use Hypothetical Scenarios: When seeking AI assistance for sensitive tasks, frame requests in hypothetical terms. Do not use real names of companies, people, projects, or places.
- Maintain Awareness: Reinforce the AI security training you provide to employees about the risks of oversharing with frequent reminders and practical guidelines on best practices for using AI tools.
- Implement Oversight Mechanisms: Monitor how AI tools are used to detect and prevent data leaks and to identify potential insider threats; a minimal screening sketch follows this list.
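To make the last two tips concrete, here is a minimal, hypothetical sketch of a pre-submission screen: a small Python filter that redacts obvious identifiers (emails, phone numbers, internal code names) and reports findings that an oversight dashboard could log before a prompt leaves the organization. The pattern list, the code names, and the function name are illustrative assumptions, not a reference to any specific DLP product or policy.

```python
import re

# Illustrative patterns only; a real DLP policy would be far broader.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Hypothetical internal project code names to keep out of prompts.
CODE_NAMES = {"Project Falcon", "Orion Initiative"}


def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Return a redacted copy of the prompt plus a list of findings for review."""
    findings = []
    redacted = prompt
    for label, pattern in PATTERNS.items():
        if pattern.search(redacted):
            findings.append(label)
            redacted = pattern.sub(f"[{label} REDACTED]", redacted)
    for name in CODE_NAMES:
        if name.lower() in redacted.lower():
            findings.append(f"CODE NAME: {name}")
            redacted = re.sub(re.escape(name), "[PROJECT REDACTED]",
                              redacted, flags=re.IGNORECASE)
    return redacted, findings


if __name__ == "__main__":
    sample = "Summarize the Project Falcon roadmap and email it to jane.doe@example.com."
    clean, flags = redact_prompt(sample)
    print(clean)   # identifiers replaced before the prompt is sent to an external LLM
    print(flags)   # what an oversight dashboard might log for review
```

A screen like this is only a first line of defense; it catches the obvious patterns, while the habits described in the tips above address the details no regex will recognize.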

30 Apr 2024
DLP and TMI in the Age of LLMs
Since the dawn of social media two-plus decades ago, people everywhere have grown more and more comfortable sharing information, sometimes well beyond what’s necessary. Our collective journey from posting too much information (TMI) on MySpace and Facebook, to sharing Instagram and Pinterest photos of every minute of our lives, to having human-like, AI-powered conversations with large language models (LLMs) has been swift. Unfortunately, within the context of Generative AI (GenAI), oversharing isn’t just a social faux pas; it’s a significant security risk, particularly for organizations.
GenAI models such as LLMs offer remarkable capabilities for generating all sorts of content (including accurate and relevant content, which will be addressed in a future post), but they are also the most porous of information sieves and pose a substantial risk when fed detailed, private, or sensitive information. The ease of interacting with LLMs can lull users into a false sense of security, leading to unintentional oversharing of critical data and inadvertent data exposure. In fact, it has become so common that it no longer raises many eyebrows when reports surface of executives sharing strategic company documents, physicians entering patient details, or engineers uploading proprietary source code into a public model such as ChatGPT. These are just a few examples of how sensitive information can be compromised unintentionally because the person sharing it didn’t realize it could become part of the model’s knowledge base. There is no getting it back and there are no do-overs. It’s out there. Forever.
The key to leveraging the power of LLMs without compromising security lies in the art and science of crafting prompts. A prompt must be detailed enough to elicit the desired response, yet discreet enough to protect sensitive information; this requires a thoughtful approach that balances the need for specificity against the imperative of discretion. Some tips to impress upon users: