Skip to main content

Think you can outsmart AI? Announcing ‘Behind The Mask’ – Our all-new cybercrime role-playing game | Play Now

Reprinted from Global Banking and Finance Review

By Neil Serebryany, Founder and CEO, CalypsoAI

 

Deploying generative artificial intelligence and large language models across the enterprise presents opportunities for both increased productivity and innovation, but also creates challenges for managing organizational risk.

Where We’ve Been 

Approximately 10 years ago, artificial intelligence (AI)-dependent tools became an established feature in the business landscape. Financial organizations were among the earliest and most enthusiastic adopters, leading the development and deployment of innovative, productivity- and revenue-driven AI solutions for issues that have long plagued the field, such as anticipating market trends and fluctuations, ensuring compliance in a dynamic regulatory environment, improving client support services, and deterring fraud. More recent challenges include multi-channel customer options and collecting and analyzing data on consumer behaviors and preferences.

Where We Are

The latest additions to the AI toolbox are generative AI (GenAI) and large language models (LLMs), which are being deployed throughout the enterprise for tasks as diverse as assessing risk exposure, developing investment strategies, improving competitive advantage, and generating and executing narrowly targeted marketing campaigns. Across the banking, insurance, and financial services industries, these models are increasing productivity, driving innovation, ensuring compliance, and reducing fraudulent activity. However, operational and security challenges balance these large productivity gains.

Organizations adopting these models must acknowledge and prepare for the new risk layer that accompanies–and can disrupt–successful model adoption and deployment. The list below identifies top considerations that accompany LLM deployments and ideas for addressing them.

Cost Controls

The cost of model deployment can vary dramatically and includes variables that must be carefully considered before being decided upon. At its most basic, the spend is a cost vs performance issue. LLM providers’ fees are typically a cost-per-thousand-token calculus, with a token being the equivalent of around three-quarters of a word. The amount of information the LLM can process in a query (the model’s “context window”) and the model’s performance characteristics also affect pricing. Models with longer context windows can provide responses of greater depth and nuance, but increase the compute spend.

Observability and Visibility

The typical enterprise deploys multiple models, including multimodal GenAI models that use voice and images, that operate in parallel, leaving security teams with a fragmented view of the overall system when what they need is one tool that provides full visibility into and across all models in use.

The solution is to deploy an automated tool spanning all models in use, providing observability at a per-model, system-wide, level. When able to audit and secure models, administrators can leverage insights about usage and user behavior, enhance and streamline decision-making processes, overcome inherent limitations, and provide stability, reliability, efficiency, and added security across the organization.

Data Security

Both fine-tuned LLMs and retrieval-augmented generation (RAG) models access large amounts of proprietary data. They must be protected from accidental, as well as deliberate, data leakage. The most common data leak is an unintended exposure via a user query to the model, such as an internal memo containing detailed information regarding a merger or acquisition under consideration sent to the LLM to improve the structure and verbiage more professional. The security issues here are threefold:

  • The highly confidential information in the prompt is sent outside the organization, which is an unauthorized release.
  • The information becomes the property of a third-party—the model provider—that should not have access to the information and that may or may not have strong security protocols to prevent a data breach.
  • The information, now the property of the third party, could be incorporated into the dataset used to train the next iteration of the model, meaning that the data could be made available to anyone—such as a competitor—querying that model with a prompt crafted to find such data.

The solution is also threefold:

  • Employees and other users must be educated as to the risks posed by model use and trained to use the models properly.
  • AI security policies must describe appropriate and inappropriate use of the models and align to organizational values and industry regulations.
  • Model usage by individual users must be traceable and auditable.

AI Security

“AI security” is used more and more frequently, but often without explanation. The term refers to the strategic implementation of robust measures and policies to protect an organization’s AI systems, data, and operations from unauthorized access, tampering, malicious attacks, and other digital threats. It goes well beyond traditional cybersecurity because every AI-driven or AI-dependent component linked to an organization’s digital infrastructure adds to the sprawl of pathways into the system.

While many technical solutions exist to address technological vulnerabilities, an organization’s most commonly exploited vulnerability is the user who, as mentioned above, inadvertently includes sensitive information in a prompt or acts on information received in a response unaware that it’s malware, a hallucination, a phishing campaign, or a social engineering attempt. An insider who takes deliberate actions, for instance, by trying to outwit security features via prompt injection or “jailbreak,” not realizing they could put the organization at risk, is another unfortunately common threat vector.

The solution is to apply strong filters on outgoing and incoming channels to identify content that is suspicious, malicious, or otherwise misaligned with organizational policies and industry standards.

Where We’re Going

The challenges of deploying AI systems, specifically Gen AI and LLM systems, are growing in number and sophistication on par with the number and sophistication of the models themselves. The risks presented by poor or incomplete adoption and deployment plans are also expanding in scope, scale, and nuance. This is why identifying, evaluating, and managing every potential risk is vital for maintaining the models’ integrity, security, and reliability, as well as the organization’s reputation and competitive advantage.

While having a strong AI security strategy and deployment plan, which includes employee education and training about the role they play in mitigating risk, are very important, incorporating the best tools for the situation is also critical to ensuring a safe, secure adoption and rollout. A “weightless” trust layer built into the security infrastructure, which enables full observability into and across all models and detailed insights about their use, is the ideal. When system and security administrators can see who is doing what, how often, and with which models, they are afforded both wide and deep user and system insights that support a strong, stable, transparent deployment posture.


Picture102202024 - Global Banking | Finance

About Author:

Neil Serebryany is the founder and Chief Executive Officer of CalypsoAI. Neil has led industry-defining innovations throughout his career. Before founding CalypsoAI, Neil was one of the world’s youngest venture capital investors at Jump Investors. Neil has started and successfully managed several previous ventures and conducted reinforcement learning research at the University of South California. Neil has been awarded multiple patents in adversarial machine learning.