Skip to main content

Think you can outsmart AI? Announcing ‘Behind The Mask’ – Our all-new cybercrime role-playing game | Play Now

Bracing for 2025

In this series of three Enterprise Risk Snapshots, we present AI security risk scenarios and solutions to help your organization brace for 2025.

References to the year 2025 have become a sort of touchstone for forward-thinkers in the artificial intelligence (AI) ecosphere. References to that year in connection with AI and related technological events and advances started to appear in articles just as ground-breaking generative models such as Stable Diffusion and ChatGPT appeared in late 2022. In January of 2023, the World Economic Forum (WEF) Global Cybersecurity Outlook Report stated 91% of the cyber leaders and cyber business leaders surveyed believe “a far-reaching, catastrophic cyber event” is likely to occur by 2025.(1) Less fearsome but still significant are predictions about negative socio-economic impacts, for instance that up to 300 million jobs could be affected by AI-driven advances.(2)

As adoption of generative AI (GenAI) models has grown steadily to the point of near-ubiquitous acceptance and utilization in many corners of the enterprise, the risks of their use have been overshadowed by the tremendous benefits they provide. However, as 2025 draws closer, organizations that want to stay ahead of the growing AI- and model-driven threats they face must do better and move faster.

Understand the Threat Landscape

The field of artificial intelligence (AI) is expanding rapidly in every direction, bringing productivity enhancements, increased innovation, and many other benefits to enterprises in every market sector. Unfortunately, that includes criminal enterprises, which have turned out to be among the most imaginative in their ability to weaponize the technology. This means companies that could be targeted have no choice but to keep up with, if not get ahead of, the threats.

Systematic Threats

The rapid adoption of large language models (LLMs) and evolution of natural language-to-code applications, such as copilots, within the AI ecosystem means models and entire applications can be generated by machines, as well as humans with little to no coding experience and, therefore, no understanding of how to test or validate their work. These AI-powered applications could potentially drive functions as diverse as virtual assistants on smartphones and automated control systems in industrial plants.

At all levels, we increasingly rely on AI models and tools to support our decision-making and/or make decisions on our behalf, and that very reliance is, in itself, an easy target for attackers, who are nothing if not opportunistic. A recent paper released by the UK government posits that, by 2025, “generative AI (GenAI) is more likely to amplify existing risks than create wholly new ones, but it will increase sharply the speed and scale of some threats.” It goes on to state that GenAI’s availability and ease of use can render “potentially anyone” into a threat actor, whether deliberately or inadvertently.(3)

Human Threats

While it’s important to keep in mind that an attacker is a just person with access to a keyboard and a computer, the entity behind the attack matters. And criminal entities run the gamut from untrained script kiddies messing around with an LLM, who easily create malicious code and other attack vectors and are definitely a threat(4), to well-funded, state-sponsored cybercrime consortia employing highly skilled practitioners who produce sophisticated means, methods, and malefactions that attack rapidly and stealthily with impressive accuracy and precision.(5) But whoever gets their AI-powered malicious code into your system, however they do it, and whether it moves laterally through your networks or rests comfortably in place undetected until it triggers, your security perimeter has been compromised and your organization is in deep trouble.

Prepare to Face External Threats

Every type and size organization is a target for external threats: physical infrastructure facilities that keep factories and cities functioning, financial institutions laden with personal information, corporate and academic entities rich in data and intellectual property, social networks rife with behavioral insights, media entities, and government bodies, specifically election-related bodies, at every level.

Cybercrime has become a huge industry and yet, despite this landscape rich with both opportunities and opportunists, and the fact that our dependence on tech has become so high that digital technology itself can be considered infrastructure, many organizations do not maintain their defenses. A recent survey of corporate executives presents a perplexing result:

  • 96% agree that adopting GenAI makes it “likely” that their organization will face a security breach by 2025
  • Approximately 50% agree adopting GenAI tools will expand their attack surface, and bring new kinds of attacks
  • 94% agree that securing AI tools prior to deployment is important
  • 69% put AI innovation ahead of AI security(6)

Such a casual defense posture is both untenable and unsustainable. Near-term projections indicate that 50% of the world’s data will be stored in the cloud by 2025(8) and one estimate indicates the number of “cloud-conscious” threat actors has increased by 288%.(9) It’s no surprise experts recommend organizations “put their entire security teams on a war footing.”(10)

While new attacks often represent the “known unknowns” AI security teams face, they aren’t necessarily sophisticated or hidden. Researchers at Northwestern University recently tested more than 200 custom GPT models using “simple prompts that don’t require specialized knowledge in prompt engineering or red-teaming.” Their success rate for file leakage was 100% and 97% for system prompt extraction.(11)

Protect Against Internal Threats

Even in the most stringently monitored environments, internal threats can happen anywhere at any time and are typically human-related; for instance:

  • A developer includes source code in a prompt to a popular and well-regarded copilot model with a request to review and revise it.
  • An executive assistant enters highly confidential notes from the CEO’s meeting with the General Counsel into ChatGPT with a request to format the notes into meeting minutes.

Each action exposes the organization to risk without the user realizing they’ve sent proprietary information to a third party whose security protocols are unknown. But an internal threat doesn’t even have to be that involved; one thoughtless click on a link or attachment that appears legitimate at first glance and, therefore, doesn’t get a second glance, can put the enterprise at significant risk. It doesn’t matter whether that one insider is careless, clueless, compromised, or corrupt: They can cause as much or more damage to a company as an attack by an outside perpetrator.

Another unfortunately common, as shown above, but equally pernicious internal threat is “lack of interest in, resistance to, and, sometimes, blatant dismissal of, cyber security concerns” within an organization.(12) A strong set of AI use policies, supported by a governance program that includes educating employees about the risks faced by the organization, can prove to be a significant element of any security perimeter.

Take Security Seriously

With new Securities and Exchange Commission (SEC) rules in place that mandate timely reporting of security incidents and that publicly traded firms have cybersecurity expertise within their leadership,(13) organizations must have a crystal clear understanding of the intricacies of their AI systems. Full observability at this level includes knowing not just the fortifications in place, but the vulnerabilities, including those hiding in plain sight. For instance, it’s not uncommon for individual teams—from Accounting to DevOps to Manufacturing—to deploy, monitor, and maintain the models and AI-assisted applications they routinely use, often without consulting other teams. When the teams tasked with protecting the system don’t know about tools in use on the system—or the “digital sprawl” that has taken place—they cannot secure the system and all it takes to bring it down, as noted above, is one mistake.

If decentralized implementation decisions, in which each team or business unit makes its own technology decisions and polices itself, are an organizational norm, they must be balanced by adherence to governance policies requiring continually updated audit trails that enable the AI, IT, and cyber security teams to know what’s happening in every part of the network.

Organizations might also consider building resilience into the system early by adopting a “shift left” approach. Initiating security actions earlier in the process of onboarding new AI tools, for instance, well before deployment or even during review and acquisition processes can help ensure security is top of mind at all levels.

Our model-agnostic GenAI security and enablement solution resides outside and immediately adjacent to an organization’s security infrastructure, enveloping it in a weightless, invisible trust layer that provides security teams full visibility into the AI tools on the system and their usage. Its unique spectrum of tools, including policy-based access controls, customizable content scanners and blockers, auditing capabilities, and longitudinal usage analytics, allows business unit leaders to gain deep insights into user behavior, as well as model performance.

While procrastination and hesitancy are standard parts of the corporate praxis, it’s also well understood that post-attack due diligence will never be a viable substitute for preemptive security measures, and explaining to your employees and shareholders how the organization failed to protect them will never feel as good as announcing how the organization successfully thwarted an attack.

Click here to request a demonstration. 

________________________________

  1. “Global Cybersecurity Outlook 2023 Insight Report,” World Economic Forum (January 2023) https://www3.weforum.org/docs/WEF_Global_Security_Outlook_Report_2023.pdf
  2. “Generative AI could raise global GDP by 7%,” Goldman Sachs (April 4, 2023) https://www.goldmansachs.com/intelligence/pages/generative-ai-could-raise-global-gdp-by-7-percent.html
  3. Safety and security risks of generative artificial intelligence to 2025 (Annex B) (October 25, 2023) https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper/safety-and-security-risks-of-generative-artificial-intelligence-to-2025-annex-b#:~:text=The%20most%20significant%20risks%20that,phishing%20methods%20or%20replicating%20malware
  4. Eugenia Lostri, “How Cyber Criminals Can Exploit ChatGPT,” Lawfare Blog (February 13, 2023) https://www.lawfareblog.com/lawfare-podcast-how-cyber-criminals-can-exploit-chatgpt
  5. Ibid.
  6. Chris McCurdy, “C-suite weighs in on generative AI and security,” Security Intelligence (October 10, 2023) https://securityintelligence.com/posts/c-suite-weighs-generative-ai-security/
  7. Lostri
  8. Steve Morgan, “Top 10 Cybersecurity Predictions and Statistics for 2023, CyberCrime Magazine (December 10, 2022) https://cybersecurityventures.com/top-5-cybersecurity-facts-figures-predictions-and-statistics-for-2021-to-2025/
  9. “2023 Global Threat Report,” Crowdstrike, https://www.crowdstrike.com/global-threat-report/
  10. “2022 Cyber Attack Trends: Mid-Year Report,” Check Point Research, https://go.checkpoint.com/2022-mid-year-trends/?mkt_tok=NzUwLURRSC01MjgAAAGJZV_xxBKoK-6Az17TfnHz_Q3UrSjxWugGaTKprdoORbAe7M-C9vmihuZ2i4RCTHj4AtUDE89Z2JNHLAmPP1Smz1rIQrSExi8n24CfuqvmaL0GQALE
  11. Matt Burgess, “OpenAI’s Custom Chatbots Are Leaking Their Secrets,” Wired (November 29, 2023) https://www.wired.com/story/openai-custom-chatbots-gpts-prompt-injection-attacks/
  12. Shira Landau, “The biggest CISO challenge in a new role (it’s not what you think),” CyberTalk.org (December 14, 2023) https://www.cybertalk.org/2023/12/05/the-biggest-ciso-challenge-in-a-new-role-its-not-what-you-think/?utm_source=newsletter&utm_medium=email&utm_campaign=cm_eb_23q4_ww_corporate-newsletter-20231220&mkt_tok=NzUwLURRSC01MjgAAAGQJo2rvQY_BSpI6pKbbCasmb91l3Yepphb-BBKxXON6ZgxmUo5-bfo0XjkxHY9aVRIR_AxncJgC__EOWeOHZiA405ba-U0h_oCfCx_HPc2O6e5HzvZ
  13. Cybersecurity Disclosure, U.S. Securities and Exchange Commission (December 14, 2023) https://www.sec.gov/news/statement/gerding-cybersecurity-disclosure-20231214#:~:text=To%20help%20investors%20evaluate%20this,cybersecurity%20risk%20management%2C%20strategy%2C%20and