
Since its inception at, arguably, the start of the Industrial Revolution, the technosphere has done one thing exceptionally well: evolve. Generative AI (GenAI) is the latest brainchild in this rolling evolutionary exercise, and it stands out as a beacon of both opportunity and challenge. It is transforming how businesses operate, making it possible to enhance productivity and innovation like never before. However, integrating such advanced technology into the fabric of our enterprises isn't without its risks. During a recent discussion about "The Art of Secure Innovation," AI security experts Gary McGraw, Jim Routh, and Neil Serebryany provided valuable and entertaining insights into the security transformation enterprises must undertake.

Early on, Neil described the emerging business landscape as a “world of many, many machine learning models, many, many AI agents” and stressed the importance of enabling this transformation in a secure manner, as this proliferation of AI tools is set to redefine the workforce and the nature of work itself. Understanding the inherent risks of this transformative technology is critical to effectively managing its adoption and deployment. 

GenAI introduces complex, novel risks, but, as Gary noted, those risks are not monolithic; they vary significantly across social, enterprise, and line-of-business contexts and must be addressed separately. They must also be addressed early in the adoption phase to ensure secure integration into business processes.

In addressing the societal impacts of GenAI, particularly its potential to disrupt current job markets, Jim drew parallels with past technological upheavals, such as the introduction of software to the enterprise several decades ago. He noted that while AI will inevitably replace some jobs, it also creates opportunities for new kinds of employment. The key, he argued, is adaptation and learning; we cannot simply "set policies against the use of AI," but must embrace and steer its capabilities.

The conversation focused strongly on the importance of governance in AI deployments, with Gary emphasizing that effective governance begins with observability: knowing what GenAI the company is using, and how, across all levels. Neil discussed how policy-based access controls that ensure only authorized use of AI resources are essential for managing enterprise risk. Jim added that governance must be agile to keep pace with GenAI's rapid development and the evolving regulatory landscape.
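The policy-based access controls Neil described can be illustrated with a minimal, deny-by-default sketch. This is a hypothetical illustration, not any panelist's or vendor's actual implementation: the role names, model names, and policy structure are all assumptions for the example.

```python
# Hypothetical sketch of policy-based access control for AI resources.
# Roles, model names, and actions are illustrative assumptions only.

from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    role: str                 # who the rule applies to
    model: str                # which AI resource it covers
    actions: frozenset        # operations the role may perform

# An explicit allow-list of policies; anything not listed is denied.
POLICIES = [
    Policy("analyst", "internal-llm", frozenset({"query"})),
    Policy("ml-engineer", "internal-llm", frozenset({"query", "fine-tune"})),
]

def is_authorized(role: str, model: str, action: str) -> bool:
    """Deny by default; allow only if an explicit policy grants the action."""
    return any(
        p.role == role and p.model == model and action in p.actions
        for p in POLICIES
    )

print(is_authorized("analyst", "internal-llm", "query"))      # True
print(is_authorized("analyst", "internal-llm", "fine-tune"))  # False
```

The deny-by-default design reflects the governance point above: unauthorized or unknown use of an AI resource is blocked unless a policy explicitly permits it, which also gives the organization an auditable record of who may use what.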

As the discussion wrapped up, the panelists shared their thoughts on preparing for the future. They agreed that, given the staggering pace of change in AI technology, it is vital for companies to establish robust frameworks for governance and risk management that enable end-to-end accountability and observability in AI systems.

This wide-ranging, insightful conversation makes it clear that while the path forward for GenAI is strewn with significant challenges, it also offers unparalleled opportunities for growth and innovation. For corporate decision-makers, the task ahead is to navigate this new landscape thoughtfully and securely, ensuring that AI technologies not only transform businesses, but do so in a way that aligns with organizational values and societal norms.

Watch the full conversation here
