
Reprinted from Cyber Defense Magazine

By Neil Serebryany, CEO and Founder of CalypsoAI

Generative artificial intelligence (GenAI) models, including large language models (LLMs), have been the focal point of the business world’s attention since ChatGPT made its debut just a year ago. They have revolutionized operational practices across sectors, from streamlining supply chains to enabling unique, detailed customer interactions. While not quite ubiquitous yet, this technology is getting closer to that milestone every day, and its potential for innovation is boundless. It’s clear these models and their other GenAI cousins are poised to reshape the corporate landscape even further. Here are some ways I anticipate they will do so in the upcoming year.

The first large-scale breach of a foundation model provider, such as OpenAI, Microsoft, or Google, will happen in the upcoming year and will lead to a major security incident. 

The scope and scale of the attack itself will be on par with recent incidents, such as Microsoft’s “accidental” disclosure of 38 terabytes of private data and the Google Fi hack that exposed the data of millions of customers. With the amount of sensitive information that has been sent to LLMs like ChatGPT, the fallout would be profound and could easily exceed either of those in terms of reputational, operational, and financial damage. The damage inflicted by such a breach would not stop at the company’s boundaries, but would create a ripple effect across the AI ecosystem, as organizations that had relied on the model(s) would need to immediately go into damage-control mode. Abruptly ceasing to use the model(s) would affect the applications built on them, and security teams would have to investigate, reassess, and possibly recreate or replace elements of the organizational security infrastructure. Explaining their accountability to their own shareholders and customers would be a painful exercise for executives, and would come with its own set of consequences.

An enterprise embracing GenAI will suffer a permissioning breach due to multiple models in play and a lack of access controls. 

As a company layers in external base models, such as ChatGPT, as well as models embedded in SaaS applications and retrieval-augmented generation (RAG) models, the organizational attack surface expands, the security team’s ability to know what’s going on (observability) decreases, and the intense, perhaps even giddy, focus on increased productivity overshadows security concerns. Until, that is, a disgruntled project manager is given access to the new proprietary accounting model that a payroll manager with a similar name requested. Depending on the level of disgruntlement and the personality involved, company payroll information could be shared in the next sotto voce rant at the coffee machine, in an ill-considered all-hands email, or as breaking news on a business news website. Or nothing will be shared and no one will notice the error until the payroll manager makes a second request for access. Whatever the channel or audience, or lack thereof, the company has experienced a serious breach of private, confidential, and highly personal data, and must address it rapidly and thoroughly. The AI security team will spend days or weeks reviewing and likely overhauling the organization’s AI security infrastructure, at the very least, and the term “trust layer” will become a feature of their vocabulary.
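To make the permissioning problem concrete, here is a minimal sketch of per-model access control. All names are hypothetical; the point is simply that every prompt is checked against an explicit allow-list before it reaches a model, and every attempt, granted or denied, is recorded so a mistaken grant like the one described above is at least auditable.

```python
# Hypothetical per-model allow-list: which users may query which model.
MODEL_ACL = {
    "gpt-4": {"alice", "bob"},
    "payroll-finetune": {"payroll_mgr"},  # proprietary accounting model
}

audit_log = []  # every access attempt lands here, allowed or not


def route_prompt(user: str, model: str, prompt: str) -> str:
    """Route a prompt to a model only if the user is on that model's ACL."""
    allowed = user in MODEL_ACL.get(model, set())
    audit_log.append((user, model, allowed))  # record the attempt
    if not allowed:
        raise PermissionError(f"{user} may not query {model}")
    return f"[{model}] response to: {prompt}"
```

A real deployment would pull identities from the company directory rather than a hard-coded dictionary, but even this shape makes the misgrant detectable: the audit log shows exactly who was given access to which model and when.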

Data science will become increasingly democratized thanks to foundation models and LLM usage.

The speed and power with which LLMs can analyze and extract important insights from huge amounts of data, simplify complex, time-consuming processes, and develop scenarios to predict future trends have already begun to bring big-data analytics into the workflow of teams and departments in all business functions. That will continue to scale up dramatically. Across an organization, teams will increasingly be able to rapidly generate data streams tailored to their specific needs, which will streamline productivity and expand the institutional knowledge base. Humans will not be out of the loop, however, as I do not foresee models’ propensity to make stuff up being resolved any time soon, although fine-tuning is showing some benefits in that area.
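A hedged sketch of how a non-specialist team might put an LLM in this loop while keeping humans, and deterministic computation, in charge of the numbers. `ask_llm` is a hypothetical stand-in for any provider’s completion API, stubbed here so the example is self-contained; the design point is that the facts are computed locally and the model only narrates them, which limits the damage a hallucination can do.

```python
import statistics


def ask_llm(prompt: str) -> str:
    # Stub standing in for a hosted model's completion endpoint.
    return "Commentary based on: " + prompt


def summarize_metric(name: str, values: list[float]) -> str:
    """Compute trustworthy statistics locally, then ask the model to narrate."""
    facts = (f"{name}: mean={statistics.mean(values):.1f}, "
             f"min={min(values)}, max={max(values)}")
    return f"{facts}. LLM commentary: {ask_llm(facts)}"
```

Because the mean, min, and max are computed in code rather than by the model, a reviewer can verify every figure in the summary even if the narrative wording is imperfect.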

New and novel cyberattacks created with offensively fine-tuned LLMs such as WormGPT and FraudGPT will occur with increasing frequency. 

The ability to fine-tune specialized models quickly and with relative ease has been a boon to developers, including the criminal variety. Just as models can be trained on a specific collection of financial data, for instance, they can also be trained on a corpus of malware-focused data and built with no guardrails, ethical boundaries, or limitations on criminal activity or intent. As natural language processing (NLP) models, these tools function as ChatGPT’s evil cousins, possessing the same capabilities for generating malicious code, as well as sophisticated content that easily passes for human-generated communication, powering phishing emails, social engineering attacks, and prompt injection or “jailbreak” attacks.
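One illustrative defensive countermeasure against the prompt injection attacks just described is a screening step in the kind of “trust layer” mentioned earlier. The sketch below is purely hypothetical and deliberately simplistic: it flags phrases commonly seen in jailbreak attempts before a prompt reaches a model. Production systems rely on much richer signals (classifiers, context analysis, output scanning), and the phrase list here is an assumption for illustration only.

```python
# Hypothetical phrase list; real screens use trained classifiers, not keywords.
SUSPECT_PHRASES = (
    "ignore previous instructions",
    "disregard your guidelines",
    "reveal the system prompt",
)


def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks safe, False if it warrants review."""
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in SUSPECT_PHRASES)
```

Even a crude filter like this shows the shape of the problem: the attack arrives as ordinary-looking natural language, so defenses have to inspect content, not just credentials.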

LLMs are nothing short of revolutionary tools with diverse applications and unlimited utility across industry sectors. As their adoption becomes more widespread, they stand to eclipse currently held notions of innovation and efficiency, and push the boundaries of the business ecosystem. The upcoming year could be just as interesting as this year has been.

About the Author

Neil Serebryany is the CEO and Founder of CalypsoAI. He has led industry-defining innovations throughout his career. Before founding CalypsoAI, Neil was one of the world’s youngest venture capital investors at Jump Investors. He has started and successfully managed several previous ventures and conducted reinforcement learning research at the University of Southern California. Neil has been awarded multiple patents in adversarial machine learning. Neil can be reached online at https://www.linkedin.com/in/neil-serebryany/ and at the company website https://calypsoai.com/.