Reprinted from CIO Influence Magazine
Emerging artificial intelligence (AI) and machine learning (ML) tools are gaining swift adoption among organizations and employees because they are increasingly shown to improve productivity, streamline labor-intensive processes, and boost innovation and creativity. A recent S&P Global survey of 1,500 decision-makers at large companies found that 69% have at least one AI/ML project in production, with 28% having reached enterprise scale, where the project is “widely implemented and driving significant business value”; the other 31% have projects in pilot or proof-of-concept stages. The McKinsey Global Institute predicts that generative AI (GenAI) tools will produce $4.4 trillion in annual economic value globally.
It’s safe to say AI is a big topic in the boardroom.
And here on the ground, we see evidence that these numbers are spot on, with companies approaching GenAI usage from one of three distinct mindsets:
- Not now, maybe never: Blocking it until they can learn more
- Yes now, but slowly: Engaging proactively by establishing deployment plans supported by policies, use cases, funding, etc.
- I guess we’re using AI now: Stumbling into it with accidental, bottom-up employee usage driving wider adoption

Especially in that last scenario, ungoverned LLM usage exposes an organization to serious risks (a minimal mitigation sketch follows the list):
- Poor access controls, with no way to grant team-specific or user-specific access to an LLM
- Poor governance and non-compliance with data privacy regulations, such as GDPR, CCPA, and HIPAA, as well as company policies
- No tracking, auditing, monitoring, or oversight capabilities for admins: prompt/response history is visible only to the individual user, and there are no metrics showing cost, usage, or whether the model is returning accurate, factual information
- Data leakage when confidential information is included in prompts
- Malicious content in LLM responses, such as malware, viruses, phishing attempts, or spyware
- No transparency/explainability, as model operations are “black boxes”
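To make a few of these risks concrete, here is a minimal sketch of a governed LLM gateway in Python. It is illustrative only: the `call_llm` placeholder, the `ALLOWED_USERS` table, and the naive redaction patterns are hypothetical stand-ins for a real model API, an identity provider, and a proper data-loss-prevention scanner. The shape, however (an access check, prompt redaction, then an auditable record of every exchange), maps directly to the access-control, data-leakage, and oversight gaps listed above.

```python
import json
import logging
import re
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("llm_audit")

# Hypothetical allowlist; in practice this comes from your identity provider / RBAC system.
ALLOWED_USERS = {"alice@example.com": "finance", "bob@example.com": "engineering"}

# Deliberately naive patterns for data that should never leave the company in a prompt.
REDACTION_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),      # US-SSN-shaped numbers
    (re.compile(r"\b\d{13,16}\b"), "[REDACTED-CARD]"),             # card-number-shaped digits
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED-EMAIL]"),  # email addresses
]


def redact(text: str) -> str:
    """Strip obviously confidential substrings before the prompt leaves the network."""
    for pattern, replacement in REDACTION_PATTERNS:
        text = pattern.sub(replacement, text)
    return text


def call_llm(prompt: str) -> str:
    """Placeholder for the real model call (a vendor API or a self-hosted model)."""
    return f"(model response to: {prompt})"


def governed_completion(user: str, prompt: str) -> str:
    # Access control: only known users and teams may reach the model at all.
    if user not in ALLOWED_USERS:
        raise PermissionError(f"{user} is not authorized to use the LLM")

    # Data-leakage mitigation: redact confidential patterns from the outbound prompt.
    safe_prompt = redact(prompt)
    response = call_llm(safe_prompt)

    # Oversight: every exchange leaves an auditable record that admins can query.
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "team": ALLOWED_USERS[user],
        "prompt": safe_prompt,
        "response_chars": len(response),
    }))
    return response


if __name__ == "__main__":
    print(governed_completion(
        "alice@example.com",
        "Summarize spend on card 4111111111111111 for alice@example.com",
    ))
```

In a real deployment, logic like this would live in a central gateway in front of every sanctioned model; that single choke point is what gives admins the cost and usage metrics and the prompt/response history that the list above says are missing today.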