Reprinted from VMBlog
By Neil Serebryany, Founder and CEO of CalypsoAI
Large language models (LLMs) have taken center stage since the introduction of ChatGPT one year ago. They have reconfigured the business landscape and introduced operational efficiencies across industries, from optimizing supply chains to providing personalized customer experiences. These models are quickly becoming ubiquitous, and their potential seems close to infinite. They will continue to reshape the enterprise; here are a few ways they will do so in the coming year.
- Data science will become increasingly democratized thanks to foundation models and the LLMs built on them. Advanced degrees and finely honed research and analytical skills have already become less of a prerequisite in the workplace as LLMs have entered the workspaces of data-driven departments and business units. Across organizations, teams are using LLMs to analyze and manipulate large volumes of data to develop novel scenarios and solutions, to simplify complicated, onerous tasks, and to gather insights that would otherwise be unavailable to them. Productivity, innovation, and results are enhanced while the users themselves learn new skills that help propel the business forward. Analytical skills will remain critical, however. As users become comfortable, and even complacent, about relying on model-generated content, the need for human review and verification of that content will grow rather than diminish.
- The typical enterprise will deal with more than 50 models on a routine basis (and some companies will have hundreds of models in use across their enterprise). The industry’s recent hat trick, the near-simultaneous appearance in the AI ecosystem of models as SaaS plugins, retrieval-augmented generation (RAG) models, and fine-tuned internal models, has supercharged the adoption of LLMs across the business landscape. The time, talent, and tokens needed to create LLMs have decreased dramatically, enabling companies of any size to develop their own proprietary models with relative ease or deploy commercially developed small language models (SLMs), such as Microsoft’s Orca 2 or Google’s BERT Mini, which are proliferating across the marketplace. This explosive expansion of model use will bring with it an expanded attack surface, which will lead to heavy demand for trust-layer solutions.
- We’ll see more and more enterprise use cases for LLMs. One game-changer will be a much-diminished need for manual data labeling in a world in which LLMs can label data themselves. One recent estimate suggests LLMs can label data 100x faster than humans, and even though subject matter experts must remain in the loop to provide oversight, the downstream cost savings are set to be extraordinary (a minimal sketch of such a labeling workflow appears after this list). The retail and financial services industries have been leaders in the field, continually developing use cases that enable them to expand their AI-driven customer engagement capabilities in the front of the house and deploy powerful data-crunching models in the back of the house. Pharma is developing novel use cases for LLMs across all of its sectors and niches, from diagnostic tools to epidemiological forecasting to predictive design at the cellular and molecular levels to patient monitoring and interactions. And when industries not typically considered high-tech, such as urban planning, construction, or agriculture, are added to the list, the opportunities truly are endless.
- We’ll see a model with a one-million-token context window, which will enable the next generation of extremely high-context tasks to be executed and operationalized via LLM. Today’s most advanced models, such as GPT-4 Turbo, offer context windows of roughly 128,000 tokens. In practical terms, the expansion is the difference between a model that can hold a couple hundred single-spaced pages of text in memory for one interaction, or conversation, and one that can hold roughly 1,500 pages (see the back-of-the-envelope arithmetic after this list). This quantum leap will affect everything, from enabling ever-more personalized digital avatars to be created and implemented to enabling models to perform increasingly sophisticated tasks that have typically required a highly skilled worker with deep context to accomplish.
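To make the data-labeling point concrete, here is a minimal sketch of how an LLM-assisted labeling loop might look. It assumes access to a hosted chat-completions API via the `openai` Python package (v1+) and an API key; the model name, label set, and sample tickets are illustrative assumptions, not a prescription.

```python
# Minimal sketch: using an LLM to pre-label text data, with humans spot-checking.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

LABELS = ["billing", "technical_issue", "feature_request", "other"]  # illustrative label set

def label_with_llm(text: str) -> str:
    """Ask the model to pick exactly one label from LABELS for the given text."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # illustrative model name
        messages=[
            {"role": "system",
             "content": f"Classify the support ticket into exactly one of: {', '.join(LABELS)}. "
                        "Reply with the label only."},
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    label = response.choices[0].message.content.strip()
    return label if label in LABELS else "other"  # fall back on unexpected output

if __name__ == "__main__":
    tickets = [
        "I was charged twice for my subscription this month.",
        "The dashboard throws a 500 error when I export a report.",
    ]
    for ticket in tickets:
        print(f"{label_with_llm(ticket):<16} <- {ticket}")
    # In practice, subject matter experts would review a sample of these labels
    # before they are used to train or evaluate downstream models.
```

The speed-up comes from the model doing the first pass at scale; the human-in-the-loop review is what keeps the resulting labels trustworthy.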
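And for the context-window comparison, the token-to-page conversion is back-of-the-envelope arithmetic. The ratios below are assumptions (about 0.75 words per token and about 500 words per single-spaced page), not exact figures.

```python
# Back-of-the-envelope: how many single-spaced pages fit in a given context window?
# Assumed ratios: ~0.75 words per token, ~500 words per single-spaced page.
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 500

def pages_for(context_tokens: int) -> float:
    """Approximate number of single-spaced pages a context window can hold."""
    return context_tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE

for tokens in (128_000, 1_000_000):
    print(f"{tokens:>9,} tokens ~= {pages_for(tokens):,.0f} single-spaced pages")
# ~192 pages for a 128K window versus ~1,500 pages for a 1M window.
```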
LLMs have moved far beyond being technological curiosities or “tech flexes”; they are revolutionary tools with diverse applications and unlimited utility across industry sectors. As their adoption becomes more widespread, they stand to eclipse currently held notions of innovation and efficiency.
## ABOUT THE AUTHOR
Neil Serebryany is the founder and Chief Executive Officer of CalypsoAI. Neil has led industry-defining innovations throughout his career. Before founding CalypsoAI, Neil was one of the world’s youngest venture capital investors at Jump Investors. Neil has started and successfully managed several previous ventures and conducted reinforcement learning research at the University of Southern California. Neil has been awarded multiple patents in adversarial machine learning.