
When large language models (LLMs) and other generative AI (GenAI) models were launched, they dazzled the business world with their bright, shiny, transformative possibilities, and they delivered the goods.

These technologies have completely changed the way businesses work, from strategizing investments and market tactics, to running operations, to crafting customer- and client-facing content ranging from marketing emails to legal documents to user manuals. And the models themselves have evolved at an unprecedented pace, iterating from large public foundation models, such as ChatGPT, to fine-tuned, industry-targeted models, such as BloombergGPT, to retrieval-augmented generation (RAG), internal, and proprietary models.

This is a phenomenon in its own right: Ever-smaller models are trained on, or augmented with, proprietary data for the purpose of accomplishing targeted tasks within an organization, department, or team. In a RAG deployment, specific company data—for instance, legal documentation, customer service records, or marketing campaigns—is indexed into a retrieval store; at query time, the system pulls the most relevant content from that store and supplies it to the model as context, grounding the model's answers in company data without retraining the model's weights. The pipeline is then evaluated and iterated against performance thresholds before being put to its intended purpose.
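To make the retrieval step concrete, here is a minimal sketch of how a RAG pipeline selects company documents to attach to a prompt. It is illustrative only: the keyword-overlap scorer stands in for the vector-embedding search a production system would use, and all document contents and function names are hypothetical.

```python
# Minimal RAG-style retrieval sketch (illustrative; real systems use
# vector embeddings and a dedicated retrieval store, not keyword overlap).

def score(query: str, document: str) -> int:
    """Count how many query words appear in the document."""
    doc_words = set(document.lower().split())
    return sum(1 for word in query.lower().split() if word in doc_words)

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Return the top_k documents most relevant to the query."""
    ranked = sorted(documents, key=lambda d: score(query, d), reverse=True)
    return ranked[:top_k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Augment the user's question with retrieved company context."""
    context = retrieve(query, documents)
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"

# Hypothetical proprietary documents
docs = [
    "Refund policy: customers may return items within 30 days.",
    "Marketing campaign Q3: focus on enterprise accounts.",
    "Legal: all contracts require counsel review before signing.",
]
prompt = build_prompt("What is the refund policy for returns?", docs)
```

The key point is that the proprietary data lives outside the model and is injected per query, which is exactly why controlling who may issue those queries matters.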

When proprietary data is involved, policy-based access controls (PBAC) that allow only authorized personnel to engage with the model and its data become extremely important. Much like organizations use Active Directory and other permissioning systems to segment and control access to data, access to models can be segmented according to groups or individuals, based on company policy, business need, or other enterprise-specific determinants. As the use of multiple and multimodal models across the enterprise continues to expand, controlling access to them will continue to be an important element of business operations, as well as a critical factor in AI security:
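In practice, a PBAC check maps a user's group memberships against the policy attached to each model and denies by default. The sketch below shows that logic in its simplest form; the group names, model names, and policy structure are hypothetical, not taken from any particular product.

```python
# Policy-based access control (PBAC) sketch for model access.
# All groups, models, and policies below are hypothetical examples.

POLICIES = {
    "legal-llm": {"allowed_groups": {"legal", "compliance"}},
    "support-llm": {"allowed_groups": {"customer-service"}},
}

USER_GROUPS = {
    "alice": {"legal"},
    "bob": {"customer-service"},
}

def can_access(user: str, model: str) -> bool:
    """Grant access only if the user belongs to a group the model's
    policy allows; unknown models and unknown users are denied."""
    policy = POLICIES.get(model)
    if policy is None:
        return False  # deny by default
    return bool(USER_GROUPS.get(user, set()) & policy["allowed_groups"])
```

A real deployment would source the groups from an identity provider such as Active Directory and evaluate richer policy attributes, but the deny-by-default shape stays the same.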

Protecting sensitive data, intellectual property (IP), legal documentation, personally identifiable information (PII), and other confidential content becomes easier when access to such information is granted only to employees or others with established authorization. The risks of data theft decrease as the layers of security increase. 

Achieving, tracking, and maintaining compliance becomes more manageable when working with smaller groups and targeted requirements, whether the mandate is an industry standard for automated decision-making, a company acceptable use policy, a governmental data privacy and security regulation such as the General Data Protection Regulation (GDPR), or another emerging guideline.

Because cost structures for model use will continue to fluctuate before they settle, resource allocation ranks high on the list of benefits afforded by controlling model access. Limiting the number of teams or employees who have access to a model enables efficient usage controls for computational resources and allows organizations to right-size licenses, subscriptions, or seats for models, which gains importance when multiple models are in use. It also allows the organization to establish rate limits for the models, which can help defend against model denial-of-service (DoS) attacks.
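A rate limit of the kind described above is often implemented as a token bucket: each request spends a token, and tokens refill at a fixed rate, so bursts beyond the budget are throttled. This is a generic sketch of the technique, not any vendor's implementation; the capacity and refill values would come from the organization's model-access policy.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: each request consumes one token,
    and tokens refill at a fixed rate up to a maximum capacity."""

    def __init__(self, capacity: int, refill_per_second: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_second = refill_per_second
        self.last_refill = time.monotonic()

    def allow_request(self) -> bool:
        # Refill tokens in proportion to elapsed time, capped at capacity.
        now = time.monotonic()
        elapsed = now - self.last_refill
        self.last_refill = now
        self.tokens = min(self.capacity,
                          self.tokens + elapsed * self.refill_per_second)
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Demo with refill disabled so the outcome is deterministic:
bucket = TokenBucket(capacity=3, refill_per_second=0.0)
results = [bucket.allow_request() for _ in range(5)]
# the first three requests pass; the remaining two are throttled
```

Per-user or per-group buckets keyed off the same access policy give both the cost control and the DoS resistance the paragraph describes.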

When organizations understand that they can implement robust PBAC on GenAI models, the decisions about which models they need, which types of models to adopt, and how to deploy and secure them across the enterprise become much simpler. This control limits potential vulnerabilities by minimizing unnecessary access to sensitive data and reduces the risk of insider threats.

Deploying a comprehensive LLM security solution, such as CalypsoAI’s customizable security and orchestration platform, that provides full observability across all models in use within the enterprise is the missing link in a typical multiple-LLM roll-out. This model-agnostic, API-powered, SaaS-enabled trust layer adds no “weight” to the infrastructure, scales effortlessly, and integrates seamlessly with the existing security apparatus. It provides fine-grained access controls for groups and individuals without increasing compute costs or introducing model latency.

As usage of advanced GenAI models, such as LLMs, becomes increasingly common and increasingly diversified within organizations, security must be built in at the early stages of deployment. PBAC must be prioritized as a critical means of mitigating risk while enhancing productivity, innovation, and competitive advantage.


Click here to schedule a demonstration of our GenAI security and enablement platform.

Try our product for free here.