“Moat is another term for a sustainable competitive advantage. It is a startup’s ability to maintain a competitive advantage for protecting its market share and long-term profits from its competitors.”
The typical references to “LLM economics” or “prompt economics” are monetary: How much will the necessary resources (human, compute, cloud, and associated deployment outlays) affect the bottom line, and when will the organization begin to see a return on investment (ROI)? There are plenty of articles, blog posts, interviews, and opinion pieces out there that offer quantitative insights into those and other finance-related issues.
This blog post isn’t one of them.
This post considers the age-old dilemma of balancing the initial outlay against the cost of waiting in qualitative terms, including operational effects and non-monetary ROI. It identifies the issue behind the dollar signs: What is the real cost of implementing generative artificial intelligence (GenAI) tools, such as large language models (LLMs), versus the opportunity cost of not implementing them? In other words, if it’s too expensive to deploy LLMs now, or too complicated, or too messy, there are easy, almost rote arguments that support the delay: We need the right people in place to deploy, train, and maintain the models; we need more money on the balance sheet; we need to complete Project X first; and so on.
Certainly, waiting for a target date down the line, the occurrence of a long-planned corporate event, a market action, or a fireball crossing the sky at midnight may allow for lower dollar costs later. But keep in mind that costs aren’t measured only in dollars. Waiting also carries costs such as:
- Diminished competitive advantage, if your competition is using AI
- Ponderous operations as your teams are still executing at human speed instead of at the speed of AI
- Decreased relevance in the marketplace as customers realize you’re stagnating
- Loss of human talent as key people depart for firms offering them the opportunity to learn new skills and work with new technology
Hugo Huang summed it up rather succinctly in a recent Harvard Business Review article when he wrote, “[t]hroughout business history, the advent of pivotal technologies has consistently heralded disruptive shifts. Enterprises that fail to adapt to these innovations face extinction.”
As democratization continues to change the face of AI, deployment, even deployment across the enterprise, can proceed incrementally once the organization has identified its needs. For example:
- Have the probable users and use cases been identified: who gets to use the model, how often, and for what purpose?
- Has aligning the model to core business objectives been discussed and decided?
- Have the risks and benefits of open-source vs private models been discussed and decided?
- Is there a hierarchy for key qualities, such as scalability, performance/response quality, latency/speed, and privacy/confidentiality?
- Will the model be used for text-based tasks, such as content generation, summarization, or translation, or will it be a chatbot that must interact with users via voice or text?
- Will the model be unimodal or multimodal?
- How many models are needed?
- Must the model(s) be trained on specific types of data, for instance medical, financial, pharmacological, scientific, etc.?
- What size context window is required to optimize usage?
- Must the activities comply with industry standards or government regulations, such as privacy rules or security requirements?
- What sort of infrastructure is needed to support model use? Will the model(s) be hosted on-premises, or accessed as cloud-based offerings via AI-as-a-service (AIaaS), software-as-a-service (SaaS), a subscription, or an API? Or is the model so large that a dedicated data center is required?
- Are GPUs required for processing, or is the model small enough to run on a personal computer?
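One practical way to make incremental deployment concrete is to record the answers to the questions above in a single, reviewable artifact. The sketch below is a hypothetical example of such a record; every field name and value is invented for illustration, not prescribed by any particular tool:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class LLMDeploymentPlan:
    """Hypothetical record of the answers to the planning questions above."""

    users: list[str]             # who gets access, and thus who must be trained
    objective: str               # core business objective the model serves
    model_type: str              # "open-source" or "private"
    modality: str                # "unimodal" or "multimodal"
    tasks: list[str]             # e.g. summarization, translation, chatbot
    training_data: str           # domain-specific training data, if any
    context_window_tokens: int   # context window size needed to optimize usage
    compliance: list[str]        # regulations or standards that must be satisfied
    hosting: str                 # "on-premises", "AIaaS", "SaaS", or "API"
    needs_gpu: bool              # GPUs required, or small enough for a PC?


# An invented example plan for a first, incremental deployment.
plan = LLMDeploymentPlan(
    users=["support", "legal"],
    objective="reduce ticket resolution time",
    model_type="private",
    modality="unimodal",
    tasks=["summarization", "chatbot"],
    training_data="none",
    context_window_tokens=32_000,
    compliance=["GDPR"],
    hosting="AIaaS",
    needs_gpu=False,
)
```

Keeping the plan in a typed, versioned structure like this makes it easy to revisit and extend as each incremental phase of the rollout answers more of the questions.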
When these questions, and undoubtedly others, have been answered, securing the production environment is the next point to address. No model offers the type and degree of built-in safeguards an organization needs to feel secure, especially not large, public models such as ChatGPT. Deploying multiple models across the enterprise, each with its own security features, leads to siloed visibility, when security teams need to be able to observe what’s happening with every model in real time.
Siloed security means there are gaps in the security infrastructure, and every new tool, including a model, added to that infrastructure inevitably expands the attack surface. The solution is to envelop the system with an affordable, weightless trust layer that provides full observability of all tools in place. CalypsoAI’s model-agnostic LLM security solution, Moderator, provides that outer layer of security for single- or multiple-model systems, while also offering unique, granular security features within each model.
Prompts are reviewed by customized scanners that search for a wide variety of content that should not leave the organization, such as confidential, personal, or proprietary information, legal documentation, or source code; content intended to initiate prompt injection attacks; or content that violates acceptable use policies. Model responses are reviewed for content prohibited from entering the organization’s digital system, such as malicious code and content that violates acceptable use policies.
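The scanning pattern described above can be illustrated with a minimal sketch. This is a conceptual example only, not CalypsoAI’s implementation; the scanner names and regex patterns are hypothetical stand-ins for production-grade detectors:

```python
import re

# Hypothetical scanners: each pairs a label with a pattern that flags
# content that should not leave the organization in a prompt.
OUTBOUND_SCANNERS = {
    # Invented API-key shape, for illustration only.
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
    # US Social Security number shape (personal information).
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    # A common prompt-injection phrasing.
    "prompt_injection": re.compile(r"ignore (all )?previous instructions",
                                   re.IGNORECASE),
}


def scan(text: str, scanners: dict) -> list[str]:
    """Return the labels of every scanner whose pattern matches the text."""
    return [label for label, pattern in scanners.items() if pattern.search(text)]


def review_prompt(prompt: str) -> bool:
    """Block the prompt if any outbound scanner flags it; otherwise pass it on."""
    violations = scan(prompt, OUTBOUND_SCANNERS)
    return not violations
```

The same `scan` helper could run a second, inbound set of scanners over model responses (for malicious code or acceptable-use violations) before they enter the organization’s systems.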
Policy-based access controls allow administrators to assign individuals and teams access to specific models, with rate limits, if required, and adjustable team-related content filters. Moderator’s Prompt History feature offers full traceability for usage and attribution, with each interaction with each model by each user retained for review and analysis. Prompt histories can be purged manually by administrators or automatically on an admin-determined cadence, or not saved at all.
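Policy-based access control of this kind can be sketched as a simple policy table. The team names, model identifiers, and limits below are invented for illustration and do not reflect Moderator’s internals:

```python
from dataclasses import dataclass, field


@dataclass
class TeamPolicy:
    allowed_models: set[str]          # which models this team may call
    rate_limit_per_hour: int          # total prompts permitted per hour
    usage: dict[str, int] = field(default_factory=dict)  # model -> prompts used


# Hypothetical policies an administrator might assign.
POLICIES = {
    "legal": TeamPolicy({"gpt-4", "internal-llm"}, rate_limit_per_hour=100),
    "engineering": TeamPolicy({"internal-llm"}, rate_limit_per_hour=500),
}


def authorize(team: str, model: str) -> bool:
    """Allow the call only if the team may use the model and is under its limit."""
    policy = POLICIES.get(team)
    if policy is None or model not in policy.allowed_models:
        return False
    if sum(policy.usage.values()) >= policy.rate_limit_per_hour:
        return False
    policy.usage[model] = policy.usage.get(model, 0) + 1
    return True
```

Logging each `authorize` decision alongside the prompt itself would give the kind of per-user, per-model traceability the Prompt History feature described above provides.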
The advantages of deploying LLMs and other GenAI models are being realized daily by those organizations that have taken the plunge into those deep, unfamiliar waters. The costs of not doing so are going to be felt in the near future by those that have not waded in. While there will always be both risks and benefits to consider when making that decision, the cost of deploying AI without securing it can tip the scales. Moderator can bring the scales back into balance, and help organizations build that bigger moat.
Click here to request a demonstration of Moderator.