
Digital sprawl is not a new phenomenon. Many, probably most, organizations have experienced it as they grow. Groups and departments add purpose-built applications, tools, and solutions without informing IT, and those tools don’t integrate effectively into existing systems. Sometimes IT teams deploy applications whose features are partly redundant and partly unused. And sometimes it’s not the organization or its groups at all: the larger landscape is involved.

The advent of the cloud for computing, storage, collaboration, and so much more; the Covid pandemic that followed closely behind, precipitating the work-from-home stampede; and the poor (if any) system governance during those chaotic times all contributed to every organization’s explosive digital sprawl—especially when it came to security.

At first glance, deploying multiple tools across an organization’s cybersecurity infrastructure might seem like a rational, risk-averse, comprehensive approach, but when discrete security tools and solutions proliferate across an infrastructure, they can introduce significant challenges and risks, such as: 

  • Solutions that operate in isolation, unable to communicate or share information with each other and therefore unable to contribute to a holistic view of the organization’s security posture. This fragmentation hinders the ability to correlate and analyze security events and alerts, leading to delays in incident response, increased complexity in managing security incidents, and potentially missed threats.
  • Lack of integration and visibility, which makes it difficult to monitor and control data flows, resulting in blind spots where sensitive data may leak or be accessed without proper authorization.
  • Redundancy, which leads to conflicting or inconsistent security measures, wasted resources, increased complexity, and challenges to maintaining and updating security systems. 

Careful review followed by ruthless consolidation to streamline the security infrastructure is clearly one path to follow. But even that path has its pitfalls: when a new technology, such as generative artificial intelligence (GenAI), is introduced into an organization’s digital ecosystem, little in the existing security apparatus is equipped to address it.

The risks of deploying a large language model (LLM) or GenAI model across an organization are well documented: unintentional loss of intellectual property or proprietary, confidential, or sensitive data via poorly written queries; the introduction of bad or even malicious code via LLM responses that employees aren’t equipped to assess; the inadvertent dissemination of false or inaccurate information gleaned from an LLM’s response but never verified; and many, many others.

It’s no surprise that new risks require new remedies. 

Those nascent remedies should include, at minimum, a few technical solutions, a governance framework, and ongoing monitoring, such as: 

  • Content moderation and filtering systems for prompt inputs and model responses to identify and block those containing malicious or otherwise inappropriate content
  • Ethical guidelines that align user behavior when engaging with the model to the organization’s values and standards
  • Fact-checking capabilities to verify information and flag inaccuracies or misinformation before it’s incorporated into company content
  • A comprehensive set of policies and procedures governing the ethical and responsible use of LLMs to ensure transparency, accountability, and adherence to company, industry, or regulatory requirements
  • Capabilities to track and audit engagement in terms of both users and content to identify vulnerabilities, assess risks, and implement appropriate safeguards
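To make the first and last of these concrete, here is a minimal sketch of a prompt-screening gate with an audit trail. Everything in it is hypothetical: the `screen_prompt` helper, the `BLOCKED_PATTERNS` table, and the regexes are illustrative stand-ins; a real deployment would lean on a dedicated DLP or content-classification service rather than hand-rolled patterns.

```python
import re
import datetime

# Hypothetical patterns for illustration only; production systems
# should use a proper data-loss-prevention or classification service.
BLOCKED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "confidential_marker": re.compile(r"(?i)\bconfidential\b"),
}

def screen_prompt(user: str, prompt: str, audit_log: list) -> bool:
    """Check a prompt against blocked patterns before it reaches the model.

    Returns True if the prompt may be forwarded, False if it is blocked.
    Every decision, allowed or not, is appended to audit_log so usage
    can later be reviewed per user and per violation type.
    """
    hits = [name for name, pat in BLOCKED_PATTERNS.items() if pat.search(prompt)]
    audit_log.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "allowed": not hits,
        "violations": hits,
    })
    return not hits

audit_log = []
screen_prompt("alice", "Summarize our public Q3 roadmap", audit_log)   # allowed
screen_prompt("bob", "Debug this: my SSN is 123-45-6789", audit_log)   # blocked
```

The point of the sketch is the pairing: filtering alone blocks individual leaks, but only the accompanying log lets the organization see patterns of risky use and tune its safeguards over time.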

Perhaps most importantly, such remedies would need to be customizable to an organization’s specific needs, risk profile, and regulatory environment, but these general suggestions provide a foundation for a proactive, holistic approach to security that can effectively mitigate risks associated with LLM deployment.