
Bracing for 2025

In this series of three Enterprise Risk Snapshots, we present AI security risk scenarios and solutions to help your organization brace for 2025. 

References to the year 2025 have become a touchstone for forward-thinkers in the artificial intelligence (AI) ecosphere. Mentions of that year in connection with AI-related events and advances began showing up in articles just as ground-breaking generative models such as Stable Diffusion and ChatGPT emerged in late 2022. In January 2023, the World Economic Forum (WEF) Global Cybersecurity Outlook Report stated that 91% of the cyber and business leaders surveyed believe “a far-reaching, catastrophic cyber event” is likely to occur by 2025.(1) Less fearsome but still significant are predictions about negative socio-economic impacts, for instance that up to 300 million jobs could be affected by AI-driven advances.(2)

As adoption of generative AI (GenAI) models has grown steadily to the point of near-ubiquitous acceptance and utilization in many corners of the enterprise, the risks of their use have been overshadowed by the tremendous benefits they provide. However, as 2025 draws closer, organizations that want to stay ahead of the growing AI- and model-driven threats they face must do better and move faster. 

Acknowledge Your Organization’s AI Security Preparedness

Having a solid, fact-based understanding of where your organization is now on the AI security preparedness spectrum is the first step toward creating a secure AI system that will lead you safely into the future. That sounds fairly basic, but the truth is that board members, C-suite executives, and other senior leaders rarely have a complete picture of the AI deployed in their organization, including what it is being used for, who is responsible for maintaining it, and how it conforms to regulatory compliance requirements. Consider the following:

  • Only 65% of Chief Information Security Officers (CISOs) surveyed by Heidrick & Struggles in 2023 agreed that cybersecurity was included in their company’s business strategy and only 59% said they had adequate funding to build a security program suited to their enterprise.(3) 
  • 78% of CISOs and 71% of Application Security (AppSec) teams responding to an application security posture management (ASPM) survey described today’s attack surface as “unmanageable.”(4)
  • At the end of 2021, there were six regulations or industry standards that addressed data privacy issues in the U.S. At the end of 2023, there were an additional 17, and that number does not include foreign regulations, such as the EU AI Act or General Data Protection Regulation (GDPR), that apply to U.S. companies doing business overseas.(5)

In an ideal world, statistics like these would spur action at an organization’s highest levels: first, acknowledgement that a “perception gap”(6) exists among decision-makers; second, a shift in investment strategies to focus on strengthening day-to-day defenses(7) by integrating solutions that:

  • Are flexible, robust, reliable, scalable, trustworthy, and easily and rapidly deployed
  • Provide critical capabilities for identifying and thwarting vulnerabilities
  • Ensure the organization does not imperil itself by failing to comply with regulations and industry standards    

Understand the Risks 

A recent CalypsoAI Security Market Survey queried U.S.-based IT security leaders in organizations across different market verticals about their AI security posture. While 80% of respondents considered threat detection ‘important’ or ‘very important,’ only 37% were ‘extremely concerned’ about their models being compromised by external actors today. Only 26% said they’d consider deploying solutions in the short term, while 42% indicated they’d be significantly more inclined to do so 12 months from now, on the eve of 2025, despite the rapidly expanding attack surface, increasing threat vectors and bad actors, and the growing presence of large language models (LLMs) and other GenAI models in the AI landscape.

That growing presence, in particular, adds another layer of urgency and vulnerability to enterprise-level security considerations. New models, including open-source LLMs, retrieval-augmented generation (RAG) models, fine-tuned proprietary models, and Software as a Service (SaaS) applications incorporating LLMs, are being released with increasing rapidity, and that trend will persist. User interfaces continue to get simpler, and the knowledge bases the LLMs rely on continue to grow. This means organizations already face an exponentially expanded attack surface in several significant ways:

  • Amateurs with no bad intent, but little or no technical experience, can generate application code that is often rife with vulnerabilities, which go unidentified for lack of knowledgeable oversight but are shared nonetheless.
  • Nefarious actors who also have few or no technical skills, and who until now lacked the means to easily attack the cyber realm, can rely on models to generate malicious code for them.
  • The models themselves are not only generative, but also inherently unconstrained by the limits of the human imagination or conscience. They seek patterns without preconceived notions and devise original solutions,(8) which means threat actors can direct LLMs programmatically to generate random, original corruptions at speed, leaving AI security protocols unable to anticipate such attacks and struggling to address them when they occur.

Threat actors aren’t going to wait while companies add cybersecurity to an executive agenda that already includes discussions of supply chain issues, inflation, hiring freezes, layoffs, and budgets; they are going to act with fast, flagrant impunity and leave organizations reeling. 

But that scenario doesn’t have to be the one your organization faces. Companies that want continuing AI technological advances to work for them, rather than against them, can ensure their AI security programs keep pace with the current and impending threats in this still-emergent AI-dependent ecosystem. 

Build Resilience

In a perfectly cybersecure world, remediation and recovery plans would remain plans and never need to be deployed, much less deployed in a state of panic. But, whether a defensive capability failed to identify an attack surface or threat vector, existing security protocols grew stale, or known vulnerabilities went unremediated, the reason an attack happened isn’t the issue when your organization has been hit. The immediate focus is recovering from the hit and then preventing the next one. 

Complex problems, such as not knowing where, when, or how a threat actor will target your organization, often require complex solutions. But it would be a serious mistake to overlook simple “housekeeping” as a starting point. In this new era, in which AI-reliant and AI-adjacent technologies are being adopted in a quiet, hiding-in-plain-sight version of scope creep or “digital sprawl” in many organizations, the logical, most basic first step is to ensure every AI system or technology deployed across the organization is identified and cataloged, and that new ones are not added without being vetted and approved by security leadership (a minimal sketch of such a catalog appears below). In other words, organizations must prioritize establishing deep, all-encompassing observability across the security infrastructure, so the AI security team can see and monitor all activity in real time.
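
To make the cataloging step concrete, here is a minimal sketch in Python of what such an inventory might look like. All class, field, and method names here (AIAsset, AIInventory, pending_review) are hypothetical illustrations, not a reference to any specific tool:

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AIAsset:
    name: str                           # e.g., "support-chatbot"
    model: str                          # underlying model, e.g., a fine-tuned LLM
    owner: str                          # team accountable for maintenance
    purpose: str                        # approved business use
    approved: bool = False              # vetted by security leadership?
    last_reviewed: Optional[date] = None

class AIInventory:
    """Single catalog of every AI system deployed in the organization."""

    def __init__(self) -> None:
        self._assets: dict[str, AIAsset] = {}

    def register(self, asset: AIAsset) -> None:
        # New systems enter the catalog immediately, pending security review.
        self._assets[asset.name] = asset

    def pending_review(self) -> list[AIAsset]:
        # Deployed systems not yet vetted and approved by security leadership.
        return [a for a in self._assets.values() if not a.approved]

Even a simple registry like this gives security leadership one place to answer the basic questions above: what is deployed, who owns it, and whether it has been vetted.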

Achieving and maintaining such critical baseline goals using traditional security infrastructure tools would easily overstretch the human and financial resources of in-house cyber, IT, or AI security teams that have myriad other day-to-day responsibilities. But the biggest mistake a company’s leadership could make after identifying the surface area of their AI ecosystem would be to throw their hands in the air and say there’s no time/money/personnel to maintain or defend it. That would be like opening the cage door and listening to the canary sing “good-bye and good luck” as it flies out of the mine. But there is a solution that can change that tune. 

Prepare for the Future

CalypsoAI is the only GenAI security and enablement solution to provide enterprise-wide observability, as well as detailed user insights, to organizations deploying GenAI models. This easily integrated, model-agnostic, “weightless” trust layer includes customizable scanners that ensure proprietary and confidential data, operational secrets, source code, legal documents, and other sensitive information remain in-house and malicious code and other external attacks never make it into the system. Audit scanners help identify internal issues and threats, and retention of every user prompt and model response ensures full tracking and auditability. Policy-based access controls ensure segmented protections at the individual and group levels, with all results accessible via interactive dashboards.  
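
To illustrate the general pattern of such a trust layer, consider the sketch below. This is a hedged illustration of the scanning concept only, not CalypsoAI’s actual implementation; the rules and names are hypothetical stand-ins. It checks outbound prompts for sensitive material and retains every prompt and decision for auditability:

import re
from datetime import datetime, timezone

# Example outbound rules: block content matching obvious secret shapes
# before a prompt leaves the organization. Real scanners would apply far
# richer, customizable checks (source code, legal documents, PII, etc.).
OUTBOUND_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS-style access key
    re.compile(r"-----BEGIN (RSA )?PRIVATE KEY-----"),  # private key header
]

audit_log: list[dict] = []  # in practice, durable storage for auditability

def scan_prompt(user: str, prompt: str) -> bool:
    """Return True if the prompt may be forwarded to the model."""
    allowed = not any(p.search(prompt) for p in OUTBOUND_PATTERNS)
    # Retain every prompt and decision so usage can be tracked and audited.
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "allowed": allowed,
    })
    return allowed

The same structure works in reverse for model responses, and maintaining separate rule sets per user or group is one way to realize the segmented, policy-based controls described above.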

After incoming and outgoing content is secured and retained, and full observability and accountability are established, the risks facing the organization can begin to be understood, and the next set of tasks can begin: ensuring each LLM or GenAI model is being used appropriately by establishing acceptable use and related policies as part of a strong governance framework, and creating a review and maintenance cycle that ensures every model remains fit for purpose and does not outlive its usefulness or become stale.
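
That review cycle can be reduced to a simple policy check. The sketch below is again hypothetical; the 90-day interval is an assumed policy, not a standard, and it reuses the last_reviewed field from the earlier inventory sketch:

from datetime import date, timedelta
from typing import Optional

REVIEW_INTERVAL = timedelta(days=90)  # assumed policy interval

def overdue_for_review(last_reviewed: Optional[date], today: date) -> bool:
    # Models that have never been reviewed are overdue by definition.
    return last_reviewed is None or (today - last_reviewed) > REVIEW_INTERVAL

Run regularly against the inventory, a check like this turns “does not become stale” from an aspiration into a recurring, enforceable task.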

A strong offense is usually a great defense, and preparing your organization to face the known unknowns that 2025 will bring is a task that cannot be started too soon.

Click here to request a demonstration. 

__________________________________

  1. “Global Cybersecurity Outlook 2023 Insight Report,” World Economic Forum (January 2023)  https://www3.weforum.org/docs/WEF_Global_Security_Outlook_Report_2023.pdf  
  2. “Generative AI could raise global GDP by 7%,” Goldman Sachs (April 4, 2023) https://www.goldmansachs.com/intelligence/pages/generative-ai-could-raise-global-gdp-by-7-percent.html
  3. “2023 Global Chief Information Security Officer (CISO) Survey,” Heidrick & Struggles, https://www.heidrick.com/-/media/heidrickcom/publications-and-reports/2023-global-chief-information-security-officer-survey.pdf 
  4. “The State of ASPM 2024,” Cycode, https://cycode.com/thank-you-page/state-of-aspm-2024/ 
  5. Ibid. 
  6. “Global Cybersecurity Outlook 2022,” (panel discussion, World Economic Forum Annual Meeting, Davos, Switzerland, December 2022) https://www.youtube.com/watch?v=Q-mVYahIKzI 
  7. WEF Global Cybersecurity Outlook 2023 Insight Report 
  8. See “DeepMind, AlphaGo: The Challenge Match,” DeepMind (March 2016)  https://www.deepmind.com/research/highlighted-research/alphago/the-challenge-match