
Data loss prevention (DLP) is most often discussed in terms of fortifying a network or other sensitive system against external attacks by threat actors. We talk about updates, patches, multi-factor authentication, and user authorization. When artificial intelligence (AI) systems and AI-dependent systems enter the discussion, the focus tends to shift to what people send out into the world, wittingly or unwittingly, through lax digital security measures or lapses in following them; in other words, internal threats. 

The rapid, widespread introduction of large language models (LLMs), such as ChatGPT, onto the world stage has added another threat surface to the mix: “conversations” with LLMs in which the prompts received and the responses provided are recorded and saved, not for posterity, but for future model training and, subsequently, as additional content for that model’s knowledge base. If a prompt includes sensitive or proprietary company data, intellectual property (IP), or personally identifiable information (PII) about employees or customers, the organization immediately faces risks on every front: system integrity; consumer, shareholder, and employee trust; reputational harm; and financial impact. And nothing can be done to undo that loss of data. 

Human-dependent safeguards, such as employee training and strong system configurations, are part of a solid first line of defense. But LLMs require a new, specialized approach to “perimeter defense,” not so much because of their structure or purpose as because of the way we interact with them. We know they are systems with access to staggering amounts of information and incredible compute power. We know that. And yet using them can have a lulling effect that implies a safe, private space. 

This false sense of security, even with LLMs that are not “conversational,” can trigger a reflex not unlike that produced by a well-crafted phishing email: a person who ought to know better, and often does, does exactly what they should not do, and then it’s game over. The information is irretrievably out there.

The initial response to LLMs by organizations from school districts to banking behemoths was a total ban, but that is a stop-gap measure at best. A better solution, now available to organizations, is CalypsoAI’s model-agnostic, user-friendly security and enablement system, which provides deep observability across the organization’s AI security infrastructure, as well as visibility into model and user interactions and behavior. Customizable scanners review prompts for confidential, private, and sensitive data, legal documentation, and source code, as well as organization-specific acceptable-use issues, and prevent flagged prompts from being sent to an LLM. Scanners also review LLM responses for malicious content, such as malware and spyware, and prevent it from entering the organization’s ecosystem.
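To make the pre-send scanning idea concrete: the sketch below is a minimal, hypothetical illustration of a prompt scanner that blocks outbound prompts containing sensitive patterns. CalypsoAI’s actual scanners are proprietary; the pattern names, regexes, and function signatures here are illustrative assumptions only.

```python
import re

# Hypothetical detection patterns; a production scanner would use far
# richer, customizable rules (PII, legal documents, source code, etc.).
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

def send_if_clean(prompt: str) -> str:
    """Block the prompt if any scanner flags it; otherwise forward it."""
    findings = scan_prompt(prompt)
    if findings:
        return f"Blocked: prompt contains {', '.join(findings)}"
    return "Forwarded to LLM"  # placeholder for the real model call
```

The same gate can run in the opposite direction on model responses, substituting malware and spyware signatures for the PII patterns shown here.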

CalypsoAI’s platform has a simple user interface and responds with personable replies when prompts need to be edited to meet acceptable-use policies. Because it is LLM-agnostic, it can be used with any language model, and it includes policy-based access controls, enabling the organization to provide multiple options with restricted access: for instance, ChatGPT, Cohere, and AI21 for general use; BloombergGPT for Finance teams; Harvey for Compliance, Legal, and Lobbying teams; and other specialized, task-specific models to fit the organization’s business needs. 
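Policy-based access control of this kind can be pictured as a simple mapping from teams to permitted models. The structure below is a hypothetical sketch using the model names from the text; the data layout and check are illustrative assumptions, not CalypsoAI’s implementation.

```python
# Hypothetical team-to-model access policy; team keys and model names
# are examples only, mirroring those mentioned in the text.
MODEL_POLICY = {
    "general": {"ChatGPT", "Cohere", "AI21"},
    "finance": {"BloombergGPT", "ChatGPT"},
    "legal": {"Harvey", "ChatGPT"},
}

def can_access(team: str, model: str) -> bool:
    """Allow a request only if the team's policy lists the model."""
    return model in MODEL_POLICY.get(team, set())
```

In practice such a table would be administered centrally, so adding a new specialized model for one team requires only a policy change, not a new deployment.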

Scalable across the enterprise, CalypsoAI is the first solution to provide a safe, secure research environment and fine-grained content supervision without introducing latency. Chat histories are fully tracked for usage, content, and cost, and are fully auditable. CalypsoAI is a groundbreaking solution that can accelerate deployment of reliable, resilient, trustworthy AI within your organization today.