Introduction
Since ChatGPT was released in November 2022, we’ve seen the rapid adoption of generative artificial intelligence (GenAI) models across enterprises without any safeguards or guardrails in place to hit the brakes or shut down the engines. Governmental bodies are struggling to craft regulations that protect both users and innovation while preventing criminal or simply rapacious behavior by individual threat actors, criminal enterprises, non-state entities, nations, and corporations, including those creating and profiting from the models. The hard work of governance for LLMs has fallen to CISOs.

Crafting a set of AI security policies and corresponding controls for an organization is daunting because no single set of policies can address all things AI. AI-specific policies will need to touch, expand, or even replace existing IT procurement, deployment, security, and use policies; legal and compliance policies covering everything from document retention to intellectual property ownership to contract writing; human resources policies addressing automated résumé-scanning applications; and so on for every function and business unit in the company.

We’ve written elsewhere about establishing a small, cross-functional committee to determine first principles for an AI security governance framework, and the issues to take into account during those discussions. This blog post provides a starting point for crafting basic policies addressing the most pressing concerns in the AI-adopting ecosystem: safe deployment and secure use of GenAI models across the enterprise. Our model-agnostic security and enablement solution for LLM adoption addresses many of the key issues identified in the sample policies outlined below.

In addition to the policy outlines, we have created a suggested template for creating your own AI security policies. The template is available to all as a previewable and downloadable Word file at this link, and includes a fully populated example of an Acceptable Use policy. These examples do not constitute legal advice; please consult legal counsel prior to implementing AI security policies.

Sample Policies
Model Integration Policy
The first step in deploying an LLM across your organization is determining how you are going to do so. Issues to consider include the following (a minimal configuration sketch follows this list):
- The number of models your organization requires now
- Build in additional capacity in the early stages to enable scaling up the number of users and the number of models
- Whether your models should be public (ChatGPT, Claude, Bard, etc.), private (bespoke or integrated into other applications, such as Salesforce), internal, or a combination thereof
- What type of platform best suits deployment across your organization: SaaS, private cloud, public cloud, on-premises, air-gapped, or some other type
- Note: Constraining the use of SaaS applications using LLMs is not especially feasible given the large number of LLMs already in use by SaaS vendors
- Ensure any platform has the capability to scale up and down
- Network segmentation
- Compatibility with existing software and systems to reduce technology sprawl
- Redundancies
- Security gaps
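To make these decisions concrete, the sketch below shows one way to capture an integration plan as structured configuration. It is a minimal illustration; the model names, platform labels, and capacity figures are placeholders rather than recommendations, and the fields should match whatever inventory process your organization already uses.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelDeployment:
    name: str                  # e.g., a public model or an internal model identifier
    visibility: str            # "public", "private", or "internal"
    platform: str              # "saas", "private-cloud", "public-cloud", "on-prem", "air-gapped"
    network_segment: str       # which network segment the model endpoint lives in
    max_concurrent_users: int  # capacity ceiling used to plan scale-up and scale-down

@dataclass
class IntegrationPlan:
    deployments: List[ModelDeployment] = field(default_factory=list)

    def add(self, deployment: ModelDeployment) -> None:
        self.deployments.append(deployment)

# Hypothetical example entry
plan = IntegrationPlan()
plan.add(ModelDeployment(
    name="internal-summarizer",
    visibility="internal",
    platform="private-cloud",
    network_segment="segment-b",
    max_concurrent_users=200,
))
print(plan)
```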
User Access Policy
Clearly identify who will be allowed to use which models, how often, and for what purposes (a minimal access-control sketch follows this list).
- Is use limited to employees and contractors, or will others, such as business partners, customers, etc., be granted full or limited access?
- Will access to the network/system allow full access to the model(s) or will there be segmented permissions or policy-based access controls (PBAC) applied to the model(s)?
- Will different groups or departments require different features within a given model or should some groups be barred from using certain features?
- How/why are access or permissions to perform administrative activities assigned?
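One way to express segmented permissions or policy-based access control is a simple mapping from roles to models to allowed features, as in the sketch below. The role, model, and feature names are hypothetical placeholders; in practice this logic would live in your identity provider or access-management layer rather than in application code.

```python
from typing import Dict, Set

# role -> model -> features that role may use (all names are placeholders)
ACCESS_POLICY: Dict[str, Dict[str, Set[str]]] = {
    "employee":   {"internal-assistant": {"chat", "summarize"}},
    "contractor": {"internal-assistant": {"chat"}},
    "admin":      {"internal-assistant": {"chat", "summarize", "configure"}},
}

def is_allowed(role: str, model: str, feature: str) -> bool:
    """Deny by default: a feature is permitted only if explicitly granted."""
    return feature in ACCESS_POLICY.get(role, {}).get(model, set())

assert is_allowed("employee", "internal-assistant", "chat")
assert not is_allowed("contractor", "internal-assistant", "summarize")
```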
Acceptable Use Policy
What counts as an acceptable or unacceptable use of models will vary significantly from organization to organization. The sample policy accompanying this blog post is an Acceptable Use Policy and can provide insight into just how specifically or vaguely such a policy can be written. General issues to consider include the following (a simple banned-terms filter sketch follows the list):
- How the model should be used; for instance, only for company business and in accordance with the other related policies: no personal use, no use for other business entities (side gigs), and no criminal or other illegal, unethical, or dangerous uses
- What would a banned terms filter include? For instance, certain anatomical terms might be banned across a civil engineering firm, but not across a pharma or medical diagnostic equipment firm.
- Resource allocation can affect use, as well; in the early stages of deployment and adoption, developing a hierarchy of tasks can be helpful, allowing for planned expansion to include additional tasks as use cases and business needs evolve.
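The banned-terms filter mentioned above can start as a word list checked before a prompt is sent. The sketch below assumes a small placeholder list; a real deployment would pull the list from policy configuration and pair it with more robust content classification.

```python
import re
from typing import List

# Placeholder list; the real terms are organization-specific policy decisions.
BANNED_TERMS = ["project zephyr", "internal codename", "some-offensive-term"]

_pattern = re.compile(
    r"\b(" + "|".join(re.escape(t) for t in BANNED_TERMS) + r")\b",
    re.IGNORECASE,
)

def banned_terms_in(prompt: str) -> List[str]:
    """Return any banned terms found, so the prompt can be blocked or flagged for review."""
    return _pattern.findall(prompt)

print(banned_terms_in("Summarize the Project Zephyr roadmap"))  # ['Project Zephyr']
```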
Prompt Content Policy
Prompt content is closely aligned with acceptable use but has a different scope. Issues to define include the following (a prompt-screening sketch follows the list):
- Allowable content when using a public model, such as non-specific terminology, generic questions, etc.
- Prohibited content across all models, such as profane, biased, derogatory, toxic, and other such terms, and personally identifiable information (PII)
- Prohibited content when using a public model, such as company-specific terms (brand names, project names, etc.); the company’s name; competitors’ names; company documentation (contracts, emails, agendas, calendars, meeting minutes, raw data, source code, policies, procedures, guidelines, etc.)
- Prohibited content when using private or internal models, such as topics relevant only to small groups of internal users (e.g., payroll or business strategies)
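A pre-submission check for public models might combine simple PII pattern matching with a company-term list, as sketched below. The patterns and terms are illustrative assumptions only; production screening would use a proper PII-detection service and a maintained term inventory.

```python
import re
from typing import List

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),   # email address
]
COMPANY_TERMS = ["acme widgets", "project nightingale"]  # placeholder brand/project names

def screen_for_public_model(prompt: str) -> List[str]:
    """Collect reasons a prompt should not be sent to a public model."""
    findings = []
    if any(p.search(prompt) for p in PII_PATTERNS):
        findings.append("possible PII")
    lowered = prompt.lower()
    findings += [f"company-specific term: {t}" for t in COMPANY_TERMS if t in lowered]
    return findings

print(screen_for_public_model("Email jane@acme.com the Project Nightingale deck"))
```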
Model Output Use Policy
Handling model output is a thorny subject given the poor internal controls present in most public models. Organizations must therefore develop and enforce strong internal policies that provide guidance for users. Issues to consider include the following (a verification-record sketch follows the list):
- Human Verification
- Before model-generated content is used for any purpose, including incorporation into company documentation of any sort, it must be verified by a human as accurate, factual, and genuine, i.e., not fictitious, imagined, or unsubstantiated.
- Verification must include the name of the verifier and a timestamp.
- Using Unmodified Content
- Model output that is not significantly modified prior to inclusion in company documentation must be identified as such, for instance with a footnote, endnote, or link to the prompt.
- Code provided by a model must not be used without review and verification by [two] senior staff members with expertise in the specific language used for that code.
- Verification must include the name of the verifier and a timestamp.
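The verification requirements above imply keeping a record of who checked each piece of output and when. The sketch below shows one minimal shape such a record could take; the field names are assumptions, and in practice the record would live alongside the document or in your audit system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class OutputVerification:
    output_id: str    # identifier of the model output or document section
    verifier: str     # name of the person who verified the content
    verified_at: str  # ISO-8601 timestamp of the verification
    notes: str = ""   # e.g., which sources were checked

def record_verification(output_id: str, verifier: str, notes: str = "") -> OutputVerification:
    """Attach a named, timestamped verification to a piece of model output."""
    return OutputVerification(
        output_id=output_id,
        verifier=verifier,
        verified_at=datetime.now(timezone.utc).isoformat(),
        notes=notes,
    )

print(record_verification("doc-123-section-4", "J. Smith", "figures cross-checked against Q3 report"))
```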
Use Monitoring and Auditing Policy
As model adoption becomes more ubiquitous, observability becomes increasingly critical. Knowing what your users are doing and understanding how the models are performing are key to efficiency and efficacy. Consider the following (an audit-logging sketch follows the list):
- Model this policy after your organization’s email privacy policy (e.g., no expectation of privacy), if allowed by state electronic monitoring laws.
- Ensure employees understand that their use of the model, whether from a personal or company-owned device, is monitored and tracked by automated applications, with real-time alerts generated to the user in some cases and to the admin in others.
- Tracking some information, such as user sentiment, is not allowed in certain jurisdictions.
- Purging user data can be required in certain jurisdictions or under certain circumstances.
- Identify what information is tracked and why, for instance model usage parameters (cost, accuracy, etc.), content (prompts/responses), verifications, or user engagement.
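As a rough illustration of tracking prompts, responses, and usage parameters, the sketch below wraps a model call with structured audit logging. The send_to_model() function is a stand-in for whatever client your deployment actually uses, and the logged fields are assumptions to adapt to your monitoring stack and the privacy rules that apply to you.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("llm-audit")

def send_to_model(prompt: str) -> str:
    """Placeholder for the real model client call."""
    return f"[model response to: {prompt}]"

def audited_call(user: str, model: str, prompt: str) -> str:
    """Send a prompt and write a structured audit record for later review."""
    response = send_to_model(prompt)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "model": model,
        "prompt": prompt,       # retained subject to the retention policy below
        "response": response,
        "cost_estimate": None,  # populate from provider usage metadata if available
    }))
    return response

audited_call("jdoe", "internal-assistant", "Draft a status update for the product launch")
```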
Prompt/Response History Retention Policy
As noted above, the ability to track and audit user interactions can provide many useful data points, but some regulatory bodies have begun considering user privacy in this context. Ensure your organization understands and appropriately implements any such rules. Issues to consider include the following (a purge-job sketch follows the list):
- Identify how long content (prompts, responses, alerts, etc.) is retained by default, whether such content can be retained longer for specified purposes, and which roles make that decision.
- Describe the process for purging content (cadence, whether automated or manual, etc.).
- Sample clause: All prompts and responses are retained for at least (X time), or longer if deemed necessary by (role/s).
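A retention policy ultimately needs an enforcement mechanism, such as a scheduled purge job. The sketch below operates on an in-memory list for illustration only; a real job would run against your logging or retention backend, and the 90-day default and legal_hold flag are placeholders for whatever your policy and counsel specify.

```python
from datetime import datetime, timedelta, timezone
from typing import Dict, List, Optional

RETENTION_PERIOD = timedelta(days=90)  # placeholder for the period your policy defines

def purge_expired(records: List[Dict], now: Optional[datetime] = None) -> List[Dict]:
    """Keep only records newer than the retention period or explicitly placed on hold."""
    now = now or datetime.now(timezone.utc)
    return [
        r for r in records
        if r.get("legal_hold") or now - r["timestamp"] < RETENTION_PERIOD
    ]

records = [
    {"prompt": "old", "timestamp": datetime.now(timezone.utc) - timedelta(days=200)},
    {"prompt": "recent", "timestamp": datetime.now(timezone.utc) - timedelta(days=5)},
]
print([r["prompt"] for r in purge_expired(records)])  # ['recent']
```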
Third-Party SaaS Control Policy
Using third-party software, such as through a SaaS provider, requires additional considerations (an example ABOM entry follows this list).
- SaaS vendors must disclose whether an LLM is in use as part of their application/product and what it is used for.
- SaaS vendors should provide enterprise customers with an AI Bill Of Materials (ABOM) describing the LLM usage and human validation process.
- SaaS vendor policies must be enforced through the third-party governance function coordinated with the procurement/licensing process.
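The entry below sketches the kind of disclosure an ABOM might contain; the field names and values are illustrative assumptions rather than a standard schema, and should be adapted to what your procurement and third-party governance teams actually request.

```python
import json

# Hypothetical ABOM entry for a SaaS vendor's AI usage disclosure
abom_entry = {
    "vendor": "ExampleSaaS Inc.",
    "product": "Example CRM Assistant",
    "models": [
        {
            "name": "third-party-llm",
            "provider": "ExampleModelCo",
            "purpose": "summarizing customer support tickets",
            "customer_data_shared": ["ticket text"],
            "human_validation": "support lead reviews summaries before they are stored",
        }
    ],
    "last_reviewed": "2024-01-15",
}

print(json.dumps(abom_entry, indent=2))
```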