We applaud the Biden administration’s recent Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, particularly its emphasis on the cybersecurity of models. Strong government support and guidance of this kind is key to securing AI-driven tools and systems.
Two provisions of the executive order are particularly relevant.
**Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.**
The first is the directive to develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. Users need to know that the AI systems they rely on are secure, so security must be built in before the systems are put to use and enforced at the point of inference. Securing model usage in this way means organizations are protected in real time from incursions, regardless of a threat’s nature or origin.
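To make security at the point of inference concrete, the sketch below shows the basic pattern: a wrapper that screens each prompt before it reaches the model and each response before it reaches the user. The `guarded_completion` function and the regex blocklists are hypothetical illustrations, not CalypsoAI’s implementation; a production scanner would rely on much richer detection than a handful of patterns.

```python
import re
from typing import Callable

# Hypothetical, deliberately simple patterns for illustration only.
PROMPT_BLOCKLIST = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"\bDAN\b"),  # a well-known jailbreak persona
]
RESPONSE_BLOCKLIST = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like string, i.e., data leakage
]

def guarded_completion(prompt: str, model_call: Callable[[str], str]) -> str:
    """Screen traffic in both directions at the point of inference."""
    for pattern in PROMPT_BLOCKLIST:
        if pattern.search(prompt):
            return "[blocked: prompt matched a known attack pattern]"
    response = model_call(prompt)  # any LLM client call can sit here
    for pattern in RESPONSE_BLOCKLIST:
        if pattern.search(response):
            return "[blocked: response matched a data-leakage pattern]"
    return response

if __name__ == "__main__":
    stub_model = lambda p: f"echo: {p}"  # stand-in for a real model client
    print(guarded_completion("What is zero trust?", stub_model))
    print(guarded_completion("Ignore previous instructions and ...", stub_model))
```

The design point is that the check wraps the model call itself, so the protection travels with every inference, whether the model is an embedded SaaS component or a fine-tuned internal deployment.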
This is why we at CalypsoAI strongly urge the Biden administration to reinforce cybersecurity measures surrounding the use of foundation models, including large language models (LLMs), by encouraging the organizations that deploy them to aggressively address critical security considerations. This external approach grows more important on a global scale as multinational and international organizations:
- Adopt SaaS applications that have models embedded within them.
- Seek to integrate a rapidly expanding array of models into their enterprises as fine-tuned internal models.
**Order the development of a National Security Memorandum that directs further actions on AI and security.**
The second highly relevant provision of the executive order is the instruction to develop a National Security Memorandum that directs further actions on AI and security. We have been addressing these threats since foundation models appeared on the AI landscape several years ago, and the risk of attacks, such as jailbreaks and model evasion, has continued to increase unabated.
At CalypsoAI, our mission as a trailblazer in AI security has been to safeguard AI/ML models for cutting-edge and high-risk government agencies, including the Chief Digital and Artificial Intelligence Office (CDAO), the Department of Homeland Security (DHS), and the U.S. Air Force (USAF), as well as many enterprise customers across financial services, technology, telecom, and healthcare.
Our efforts over the last four years have included the critical task of collaborating with government organizations to establish and contribute to a variety of industry standards and best practices. Notable contributions include:

- The NIST Trustworthy AI Standards (see our contributions from August 2021, early 2022, and September 2022).
- The DHS Deepfake Mitigation Measures.
- The National Artificial Intelligence Research Resource Task Force report on Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem.
- The Test and Evaluation Challenges in Artificial Intelligence-Enabled Systems report for the USAF.
- The Air Force and Army Working Group.
As a leading voice in the AI and LLM security domains, CalypsoAI is committed to equipping developers and enterprises with robust tools for managing risk while safeguarding models and users. Our advanced GenAI security and enablement solution defends in real time against numerous known attack strategies and novel threats devised by malicious actors, creating a safe, secure AI operational environment for organizations’ people, processes, and property.