Large language models (LLMs) like ChatGPT appeared in late 2022 and caused a tidal wave of awareness about the power of AI. Many businesses immediately recognized the technology as potentially transformative, on par with the advent of the World Wide Web or the cloud, and sought to harness its power while giving little thought to safety or security. Others slipped into frustrated hesitancy, unsure of how to adapt, and some simply froze.

Since their arrival, LLMs have completely changed the ecosystem, from the way organizations plan investment strategies and market tactics to how they run corporate operations and craft legal documentation. The models themselves have continued to evolve as well: from large public foundation models, such as ChatGPT and Bard, to fine-tuned, industry-targeted models, such as Harvey and BloombergGPT, to organizations increasingly fine-tuning their own models or building retrieval-augmented generation (RAG) systems, and, lately, to fully internal models.

This most recent development is a phenomenon in its own right: these internal models are built on proprietary data for the purpose of accomplishing targeted tasks within an organization, department, or team. In a RAG deployment, specific company data—for instance, legal documentation, customer service records, or marketing campaigns—is indexed into a retrieval store that the model consults at query time to ground its responses; in a fine-tuned deployment, the model is instead trained on that data to identify and analyze patterns, iterating until it reaches performance thresholds, before being put to its intended purpose.
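
To make the retrieval step concrete, here is a minimal sketch of how a RAG pipeline grounds a response in company documents. Everything in it (the toy bag-of-words embedding, the in-memory document store, and the prompt assembly) is a simplified illustration, not a production pipeline or any particular vendor's implementation.

```python
# Minimal RAG sketch: retrieve the most relevant company documents, then
# ground the model's answer in them. The embedding and the final model call
# are hypothetical placeholders; a real deployment would use a production
# embedding model, a vector store, and the organization's LLM endpoint.
from collections import Counter
import math
import re

DOCUMENTS = {
    "legal-001": "Contract renewal terms require 60 days written notice.",
    "support-114": "Customers on the enterprise tier get 24/7 phone support.",
    "mktg-007": "The spring campaign targets mid-market healthcare buyers.",
}

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' used purely for illustration."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(DOCUMENTS.values(), key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    # In production, this assembled prompt would be sent to the internal LLM.
    return f"Context:\n{context}\n\nQuestion: {query}"

print(answer("What notice period do our contracts require?"))
```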

The use of internal RAG and fine-tuned models enables policy-based access controls (PBAC) that allow only identified personnel to engage with a model and its data. Much as organizations use Active Directory and other permissioning systems to segment and control access to data, access to information and models can be segmented by team or individual, based on company policy, business need, or other enterprise-specific determinants. As the use of models across the enterprise continues to expand, permissioning will remain an important element of business operations, as well as a critical factor in LLM security. Its benefits include the following (a simplified policy check is sketched after the list below):

  • Protecting sensitive data, intellectual property, legal documentation, personally identifiable information, and other confidential content becomes easier when access to such information is granted only to employees or others with established authorization. The risks of data theft decrease as the layers of security increase. 
  • Achieving, tracking, and maintaining compliance with data privacy and security regulations, industry standards (such as those for automated decision-making), company acceptable use policies, and other guidelines becomes more manageable when working with smaller groups and targeted or diverse requirements. 
  • Because cost structures for model use will continue to fluctuate before they settle, resource allocation ranks high on the list of benefits afforded by model permissioning. Limiting the number of teams or employees with access to a model enables efficient usage controls for computational resources and allows organizations to right-size licenses, subscriptions, or seats, which gains importance when multiple models are in use. 
  • The ability to customize model actions or activities based on specific use cases and/or user groups streamlines functionality, and the ability to customize filters and scanner sensitivity enables responses tailored to meet organizational needs.
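
To illustrate the kind of policy check described above, here is a minimal sketch of a deny-by-default PBAC gate. The policy schema, department names, and model names are hypothetical; a real deployment would source these attributes from the organization's directory or IAM system rather than hard-coding them.

```python
# Minimal PBAC sketch: gate model access on user attributes against company
# policy. The policy format and all names below are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    department: str
    roles: set[str] = field(default_factory=set)

# Policy: which departments and roles may query which internal models.
POLICY = {
    "legal-rag": {"departments": {"legal"}, "roles": {"counsel", "paralegal"}},
    "support-rag": {"departments": {"customer-service"}, "roles": {"agent", "lead"}},
}

def is_authorized(user: User, model: str) -> bool:
    rule = POLICY.get(model)
    if rule is None:
        return False  # deny by default: models not covered by policy are never exposed
    return user.department in rule["departments"] and bool(user.roles & rule["roles"])

paralegal = User("dana", "legal", {"paralegal"})
agent = User("sam", "customer-service", {"agent"})
assert is_authorized(paralegal, "legal-rag")
assert not is_authorized(agent, "legal-rag")  # cross-department access denied
```

Deny-by-default is the key design choice here: a model that policy does not explicitly cover is never exposed, which keeps a newly deployed model invisible until someone deliberately grants access to it.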

Knowing that robust PBAC can be implemented on generative artificial intelligence (GenAI) models is a tremendous boost for organizations still struggling to determine which models—and which types of models—they need, as well as how to deploy and secure them across the enterprise. This critical and well-understood component of an organization’s security infrastructure limits potential security vulnerabilities by minimizing unnecessary access to sensitive data and reduces the risk of insider threats. 

Identity and access management (IAM) systems and role management tools that centralize user authentication, authorization, and auditing are key to effective PBAC implementation. However, the risk of technology sprawl, which occurs when too many applications and devices in a system integrate poorly or not at all, must also be taken into account, as it can create vulnerabilities where none existed before. 
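
As a rough illustration of how a centralized decision point can pair authorization with auditing, the sketch below checks group claims and logs every allow/deny decision. The claims dictionary stands in for a verified IAM token (such as a signed JWT); the user, group, and model names are invented for the example.

```python
# Sketch of a centralized authorization-plus-audit hook. In a real deployment,
# the claims would come from a verified IAM-issued token; here they are a
# plain dict so the example stays self-contained.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit = logging.getLogger("model-access-audit")

def authorize_and_log(claims: dict, model: str, allowed_groups: set[str]) -> bool:
    """Grant access if the user's groups intersect the model's allowed groups,
    and record every decision for later compliance review."""
    granted = bool(set(claims.get("groups", [])) & allowed_groups)
    audit.info(
        "%s user=%s model=%s decision=%s",
        datetime.now(timezone.utc).isoformat(),
        claims.get("sub", "unknown"),
        model,
        "ALLOW" if granted else "DENY",
    )
    return granted

claims = {"sub": "dana@example.com", "groups": ["legal", "all-staff"]}
authorize_and_log(claims, "legal-rag", allowed_groups={"legal"})      # ALLOW
authorize_and_log(claims, "finance-rag", allowed_groups={"finance"})  # DENY
```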

Deploying a comprehensive LLM security solution, such as CalypsoAI, that functions as an enveloping trust layer encompassing all models in use across the enterprise is the missing link in a typical multiple-LLM roll-out. A single, model-agnostic, scalable solution that adds no “weight” to the infrastructure and integrates seamlessly with the existing security apparatus, it also provides fine-grained permissioning for groups and individuals across an organization without increasing compute costs or introducing model latency.
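
In concept, such a trust layer sits between users and every backend model, applying the same permission checks and content scanners regardless of provider. The sketch below illustrates that pattern only; the scanner, model registry, and interfaces are hypothetical and do not represent CalypsoAI's actual implementation.

```python
# Conceptual sketch of a model-agnostic trust layer: every request, for any
# backend model, passes through the same permission check and content
# scanners. Pattern illustration only; all names here are invented.
from typing import Callable

Scanner = Callable[[str], bool]  # returns True if the text is safe to pass

def contains_no_secrets(text: str) -> bool:
    return not any(marker in text.lower() for marker in ("api_key", "password"))

class TrustLayer:
    def __init__(self, models: dict[str, Callable[[str], str]], scanners: list[Scanner]):
        self.models = models      # name -> callable backend, any provider
        self.scanners = scanners  # applied uniformly to prompts and responses

    def query(self, user_groups: set[str], allowed: set[str], model: str, prompt: str) -> str:
        if not user_groups & allowed:
            return "DENIED: no permission for this model"
        if not all(scan(prompt) for scan in self.scanners):
            return "BLOCKED: prompt failed a scanner"
        response = self.models[model](prompt)
        if not all(scan(response) for scan in self.scanners):
            return "BLOCKED: response failed a scanner"
        return response

layer = TrustLayer(models={"echo-model": lambda p: f"echo: {p}"},
                   scanners=[contains_no_secrets])
print(layer.query({"legal"}, {"legal"}, "echo-model", "Summarize the NDA terms."))
```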

As the use of advanced GenAI models, such as LLMs, becomes increasingly common and increasingly diversified within organizations, security must be built in at the early stages of deployment. Permissioning via IAM and PBAC must be prioritized as a critical means of mitigating risk while enhancing productivity, innovation, and competitive advantage.