Large Language Models (LLMs) burst onto the scene barely seven months ago and have dominated the discussion around artificial intelligence (AI) and machine learning (ML) ever since. Some companies have embraced them wholeheartedly. Others, some with good reason, have banned them outright. As we’ve stated before, banning them is not a sustainable solution.
So, are LLMs the problem?
Nearly everyone in those organizations carries at least one Internet-connected device at all times. They can use LLMs without their employers ever knowing, which means the tools are already being used in the wild. In fact, 68% of employees admit to using them without their boss's knowledge. Bans, then, are a textbook example of an entity cutting off its nose to spite its face.
To say the quiet part out loud: Amazon, Apple, Samsung, JP Morgan Chase, Goldman Sachs, and others are not afraid of the tech. They are afraid of what their people (the careless ones, the clueless ones, and the corrupt ones) might do with that tech, and what that could mean for the company's reputation, bottom line, or shareholder value, to name just a few considerations.
Or are they the solution?
They could be, depending on how fearless or afraid an organization is.
It’s well established that deploying LLMs across your organization can deliver immediate advantages, from enhancing customer experiences to boosting operational efficiency, along with all the downstream benefits that flow from those. Some of the guardrails required for safe, secure usage are also well known to cybersecurity professionals, although they need to be reshaped to fit this new environment.
- Establish, implement, and explain the ethical guidelines that employees must follow when using LLMs. This ensures that they understand the boundaries, responsibilities, and consequences associated with LLM usage.
- Implement robust data governance procedures to protect sensitive information and uphold privacy regulations, and develop policies that outline data collection, storage, usage, and retention practices. If you don’t know what is leaving your organization and how it’s leaving, you won’t be able to stop the flow.
- Establish a cadence for auditing model performance with respect to the safeguards and policies you have in place. This enables your security teams to identify and address any biases or unintended consequences stemming from their use, and allows you to refine and improve how your organization uses the models over time. Engage external auditors or third-party experts to provide an objective evaluation, if necessary.
- Implement mechanisms for automated and human monitoring and feedback to identify and address potential risks or issues. Then analyze and act on the feedback to ensure the models continue to align with your organization’s values and goals.
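The data-governance and monitoring points above can be sketched in code. The example below is a minimal, illustrative Python sketch, not a production DLP tool: the `SENSITIVE_PATTERNS` regexes, the `sanitize_prompt` helper, and the `llm_audit` logger name are all assumptions invented for this sketch. It redacts common sensitive values before a prompt leaves the organization and records *which kinds* of data were caught, without logging the data itself.

```python
import re
import logging

# Hypothetical patterns for illustration only; a real deployment would rely
# on a proper data-classification or DLP system, not three regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

# Audit trail for the monitoring/feedback loop described above.
audit_log = logging.getLogger("llm_audit")

def sanitize_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact sensitive values before a prompt is sent to an external LLM.

    Returns the redacted prompt plus the names of the patterns that fired,
    so the audit log captures what category of data was blocked without
    storing the sensitive values themselves.
    """
    hits = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        prompt, count = pattern.subn(f"[REDACTED-{name.upper()}]", prompt)
        if count:
            hits.append(name)
    if hits:
        audit_log.warning("Redacted sensitive fields: %s", ", ".join(hits))
    return prompt, hits

redacted, hits = sanitize_prompt(
    "Summarize the complaint from jane@example.com, SSN 123-45-6789."
)
```

Reviewing the audit log on the cadence described above is what turns a one-off filter into an evolving safeguard: the categories that fire most often tell you where policy, training, or tooling needs attention.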
Responsible LLM deployment requires forethought, proactive safeguards, and a commitment to ongoing, evolving best practices. Kind of like raising children or puppies. But … less messy.