Generative AI (GenAI) is a groundbreaking innovation that has already begun to transform the way we work. Like previous disruptive technologies—web apps, mobile apps, SaaS, blockchain, IoT, and augmented reality—GenAI is driving excitement and optimism. However, it also introduces new challenges and risks that organizations must address to create a resilient culture when integrating this technology into their operations.

The GenAI landscape has shifted dramatically in just one year. Initially, the focus was on publicly hosted large language models (LLMs), whereas now it is on smaller, proprietary LLMs embedded in applications and open-source software. The risks have changed, too: the primary risk is no longer information leakage into public LLMs, but the rise of inaccurate outputs, or “hallucinations,” that can compromise business processes dependent on accurate data.

The initial thrill of using GenAI, with its seemingly ubiquitous applications, often leads to a belief that it can be deployed universally within enterprises. However, as with any disruptive technology, the integration of GenAI must be approached with caution, recognizing and managing the associated risks. GenAI’s multimodal nature, encompassing text, images, and audio, presents unique challenges and opportunities.

  • Organizations wanting to effectively harness GenAI’s full potential must establish a robust governance model, which requires balancing technological capabilities with the controls needed to meet regulatory and customer expectations. This process begins with the creation of a cross-functional team of decision-makers led by someone with strong facilitation skills. The Chief Information Security Officer (CISO) is ideally positioned to lead this team, given the strategic importance of securing GenAI. The rest of the team should comprise representatives from diverse disciplines, including Operations, Legal/Privacy, IT Architecture, Cybersecurity, Customer Communications, Human Resources, and Finance, to ensure many perspectives are considered when devising comprehensive and sustainable governance policies and practices.
  • Creating a set of guiding principles is one of the team’s foundational steps. These principles provide high-level guidance for identifying and prioritizing potential use cases. For example, principles might include commitments not to use LLMs for processing sensitive customer information and to leverage LLM capabilities to enhance employee experiences by automating repetitive tasks. Accountability for the resilience of GenAI-dependent systems should rest with system designers and implementers.
  • After principles are established, the team must develop policies governing GenAI deployment and use. One example is a mandate that project teams submit detailed use case descriptions through a GenAI governance intake process, ensuring that each use case is evaluated for necessary controls and potential risks before development begins. Lessons learned from implemented use cases can then inform future projects, enabling continuous improvement in governance practices.
  • The drive toward resilience also requires control capabilities to evolve alongside GenAI technologies. Observability and traceability are critical components of this evolution. Enterprises must capture log files showing how LLMs are accessed and used, and integrate this data into existing security platforms. Doing so enables organizations to understand and enforce access accountability and to align with evolving regulatory requirements. Vendor interactions also require oversight. Enterprises must ensure that third-party governance controls are adequate for GenAI, particularly when using SaaS applications. Identifying LLM usage within these applications and ensuring traceability are key to building effective controls.
  • Software development teams, too, must adapt their pipelines to manage the risks associated with integrating LLMs by incorporating tools that identify software components using LLMs and provide observability into the build process. Developers must also have access to instrumentation that checks for security vulnerabilities to ensure the integrity of builds.
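To make the intake process described above more concrete, here is a minimal sketch of what an automated completeness check on a use case submission might look like. The field names are illustrative assumptions, not a standard; a real intake form would reflect an organization’s own risk taxonomy.

```python
# Illustrative sketch of a GenAI governance intake check: required fields a
# project team might submit before an LLM use case is approved for development.
# Field names are hypothetical examples, not a prescribed schema.
REQUIRED_FIELDS = {
    "use_case_name",
    "business_owner",
    "data_classification",   # e.g., public / internal / sensitive
    "llm_type",              # e.g., public, proprietary, or embedded in SaaS
    "identified_risks",
}

def validate_intake(submission):
    """Return a sorted list of required fields missing from a submission."""
    return sorted(REQUIRED_FIELDS - submission.keys())

# Example: an incomplete submission is flagged before review.
submission = {
    "use_case_name": "Support ticket summarization",
    "business_owner": "Customer Operations",
    "llm_type": "proprietary",
}
print(validate_intake(submission))  # -> ['data_classification', 'identified_risks']
```

A check like this is only a gate on completeness; the substantive evaluation of controls and risks still happens in the cross-functional review.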
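The observability requirement above can be sketched in code. The following is a minimal, hypothetical example of emitting a structured audit record for each LLM call so the records can be shipped to an existing security platform; the function name, model identifier, and field names are assumptions for illustration only.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger: one JSON record per LLM call, suitable for
# forwarding to an existing SIEM or log-aggregation platform.
audit_log = logging.getLogger("llm_audit")
_handler = logging.StreamHandler()
_handler.setFormatter(logging.Formatter("%(message)s"))
audit_log.addHandler(_handler)
audit_log.setLevel(logging.INFO)

def log_llm_access(user, model, use_case, prompt_chars, response_chars):
    """Record who accessed which model, under which approved use case."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "model": model,
        "use_case": use_case,            # ties the call back to the intake record
        "prompt_chars": prompt_chars,    # log sizes, not content, to avoid
        "response_chars": response_chars,  # capturing sensitive data in logs
    }
    audit_log.info(json.dumps(record))
    return record

# Example: wrap each call to a proprietary or embedded LLM.
log_llm_access("jdoe", "internal-llm-v2", "ticket-summarization", 1250, 300)
```

Logging metadata rather than prompt content is a deliberate choice here: it preserves access accountability without turning the audit trail itself into a repository of sensitive data.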
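As a rough illustration of the pipeline checks described above, the sketch below scans a requirements-style dependency manifest for packages known to embed LLM functionality. The package list is illustrative and deliberately incomplete; a production check would draw on a maintained inventory and run inside the CI pipeline.

```python
# Minimal sketch: flag LLM-related components in a build, assuming
# dependencies are listed in a requirements.txt-style manifest.
# The package set below is an illustrative assumption, not exhaustive.
LLM_PACKAGES = {"openai", "anthropic", "transformers", "langchain"}

def find_llm_components(manifest_lines):
    """Return manifest entries that pull in a known LLM package."""
    flagged = []
    for line in manifest_lines:
        # Strip version pins such as "==1.2" or ">=0.1" to get the bare name.
        name = line.strip().split("==")[0].split(">=")[0].lower()
        if name in LLM_PACKAGES:
            flagged.append(line.strip())
    return flagged

# Example: only the LLM-related dependency is surfaced for review.
manifest = ["requests==2.31.0", "langchain>=0.1", "numpy==1.26.0"]
print(find_llm_components(manifest))  # -> ['langchain>=0.1']
```

A report like this gives build owners the observability the bullet calls for: every build that introduces an LLM dependency is visible and can be routed through the governance process.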

Integrating GenAI into enterprise operations requires a comprehensive governance framework that evolves with the technology. By establishing robust principles, policies, and control mechanisms, organizations can develop a culture of resilience, ensuring that they capitalize on GenAI’s potential while mitigating associated risks. As the use of public LLMs continues to give way to proprietary models, the emphasis on security and resilience will become increasingly critical for enterprises seeking to innovate safely and sustainably.


For more detailed information about the critical need for robust governance to ensure resilience in the face of GenAI risks, click here to download Governing Creativity Securely: Navigating the Intersection of GenAI and Cyber.
