Skip to main content

Think you can outsmart AI? Announcing ‘Behind The Mask’ – Our all-new cybercrime role-playing game | Play Now

Bracing for 2025

In this series of three Enterprise Risk Snapshots, we present AI security risk scenarios and solutions to help your organization brace for 2025.

References to the year 2025 have become a sort of touchstone for forward-thinkers in the artificial intelligence (AI) ecosphere. References to that year in connection with AI and related technological events and advances started to appear in articles just as ground-breaking generative models such as Stable Diffusion and ChatGPT appeared in late 2022. In January of 2023, the World Economic Forum (WEF) Global Cybersecurity Outlook Report stated 91% of the cyber leaders and cyber business leaders surveyed believe “a far-reaching, catastrophic cyber event” is likely to occur by 2025.(1) Less fearsome but still significant are predictions about negative socio-economic impacts, for instance that up to 300 million jobs could be affected by AI-driven advances.(2) 

As adoption of generative AI (GenAI) models has grown steadily to the point of near-ubiquitous acceptance and utilization in many corners of the enterprise, the risks of their use have been overshadowed by the tremendous benefits they provide. However, as 2025 draws closer, organizations that want to stay ahead of the growing AI- and model-driven threats they face must do better and move faster. 

Increased AI Innovation Brings Increased Risk  

Advances in the artificial intelligence (AI) ecosystem, specifically large language models (LLMs), are accelerating with countless actors using foundation models to create unique and diverse apps that appeal to distinct, sometimes niche, audiences. The opportunities for innovation are limitless, including, unfortunately, at the attack surface horizon. 

That’s because the model and app creators harbor a wide array of intentions—from the saintly (3), to the useful (4), to the criminal (5)—that can precipitate very different outcomes. This creates the imperative that both creators and users recognize the potential effects of what has been released into the wild because every standard partition of concern—hardware, software, the cloud, data, and even data in transit—represents a potential vulnerability. 

In its final report, released in March 2021, the National Security Commission on Artificial Intelligence (NSCAI) stated that “[b]y 2025, the foundations for widespread integration of AI across [the U.S. Department of Defense] must be in place,” (6) in terms of both defensive and offensive capabilities. It’s unfortunate that, several years later, the commercial sector still hasn’t established an external expert commission, or even a voluntary oversight entity. Although the National Institute of Standards and Technology (NIST) (7) and the Biden administration (8) have established guidelines, as it stands, each industry and each organization can choose how, when, and whether to establish guardrails in its AI-dependent tools and its ecosystem. 

Foundation model creators, such as OpenAI, Google, Anthropic, and others, came to realize rather quickly that shifting left to build security measures into their products was not only a necessity, but a competitive advantage. While user organizations, from large multinationals to small- and medium-sized enterprises, understood early on that being able to trust the results from LLMs and other AI-dependent systems and identify with the ethics behind their creative intention was a competitive advantage, it has taken most considerably longer to realize they could not rely on the model providers to make that happen. Each organization’s security needs are as unique as its use cases and no model provider could offer customizable options at any scale. By that time, with model usage beginning to approach the edge of ubiquity, organizations across the enterprise faced with varying degrees of panic the decision to build or to buy the solutions needed to protect and defend their environments.  

Protecting Your Models  

Understanding individual risks, even if there are a lot of them, is one thing. Understanding how they work together to either exacerbate or mitigate the overall risk facing an organization is quite another, and that is the point at which AI security teams begin to gain visibility into the scope of their task. Treating an organization as a single multi-tiered ecosystem, but deploying different methodologies and protocols to protect individual layers or components is one approach to digital security. However, an ideal system would be a single trust layer that includes preemptive capabilities to achieve the twin goals of deterrence and protection, while also dismantling bad actors’ ability to achieve their goals easily or at all. That system design would likely include governance tools, such as:  

  • Observability into and across every AI model in use across the organization, including their reliability, relevance, and usage; our trust layer security and enablement solution provides full visibility, enabling security teams to understand what’s happening and how to respond  
  • Prevention over detection: Protective awareness measures, such as authentication and access protocols, patches, logs, and audits, must be deployed and reviewed on a continuous, not episodic, basis; our solution enables policy-based access controls (PBAC) to be assigned at the individual and group level, and retains a complete record of every interaction with each model to enable full traceability and attribution 
  • Real-time threat detection that identifies common traffic issues, such as phishing attempts, suspicious IP addresses, known malware, etc., as well as more advanced attacks, such as prompt injection attempts; advanced, customizable scanners allow our platform to review and take action on outgoing prompts and incoming responses to ensure private information doesn’t leave the organization and malicious content doesn’t enter it 
  • Real-time blocking for potential threats until they can be triaged and identified, and instant notifications regarding suspicious activity that doesn’t rise to the level of a threat 
  • A workforce educated about AI/ML security and cybersecurity risks, responses, and remedies 
  • In-house rapid-response teams trained using response, remediation, and recovery plans, including table-top exercises, that are reviewed and updated regularly, and can be deployed instantly  

Security Must Meet the Expanding Attack Surface  

The potential innovations models present to users are matched only by the innovative architectures and utilities developers are devising for the models themselves, with large, small, multimodal, targeted, open-source, and other types of models being developed and released for uses that are known, planned, speculative, or visionary. And the speed of advances will only increase. The workforce will adapt to continual changes, as has been shown by current rates of adoption: a recent survey found that 67% of respondents said their companies were using GenAI, 41% for more than a year and 26% for less than one year, 16% are working with open-source models, and 18% have applications in production.(9)  

With every model in an organization’s infrastructure contributing to the expansion of the attack surface, system safeguards must be as adaptable to forthcoming changes as the workforce must be. This is especially true when even industry experts don’t know exactly what the tech tool will look like, or what the next threat will look like. We are in the thick of an era of innovation and automation, continual change and understandable hesitancy. And because of this, adaptable system safeguards aren’t just a “nice-to-have” in an AI-driven world; they’re a necessity to secure the future you’re helping to shape.

Click here to request a demonstration. 

____________________

  1. “Global Cybersecurity Outlook 2023 Insight Report,” World Economic Forum (January 2023)  https://www3.weforum.org/docs/WEF_Global_Security_Outlook_Report_2023.pdf  
  2. Generative AI could raise global GDP by 7%,” Goldman Sachs (April 4, 2023) https://www.goldmansachs.com/intelligence/pages/generative-ai-could-raise-global-gdp-by-7-percent.html
  3.  Painted Saintly, https://www.paintedsaintly.com/ 
  4. “Large Language Model Plugins,” GPTBOTS.AI  (Oct 11, 2023)  https://www.gptbots.ai/blog/large-language-model-plugins 
  5. Michael Atleson, Chatbots, deepfakes, and voice clones: AI deception for sale, Federal Trade Commission Business Blog (March 20, 2023) https://www.ftc.gov/business-guidance/blog/2023/03/chatbots-deepfakes-voice-clones-ai-deception-sale 
  6.  “Final Report”, National Security Commission on Artificial Intelligence March 18, 2021 https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf 
  7. National Institute of Standards and Technology Risk Management Framework (December 13, 2023) https://csrc.nist.gov/projects/risk-management/about-rmf 
  8. Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (October 30, 2024), https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/   
  9. Mike Loukides, “Generative AI in the Enterprise,” O’Reilly (November 28, 2023)  https://www.oreilly.com/radar/generative-ai-in-the-enterprise/