
Last week, the National Institute of Standards and Technology (NIST) released the second draft of its Artificial Intelligence Risk Management Framework (RMF). The RMF provides guidance for addressing risks in the design, development, use, and evaluation of AI products, services, and systems.
CalypsoAI has provided regular feedback to NIST at each stage of the RMF's development. We first responded to the initial request for information in August 2021. When the first draft of the RMF was released in March 2022, CalypsoAI responded by encouraging the institute to promote rigorous, independent testing and validation of machine learning (ML) models in future drafts.
We are pleased to see that this new draft prominently features testing, evaluation, verification, and validation (TEVV) of AI systems as part of its guidelines. As the RMF states, “TEVV tasks are performed by AI actors who examine the AI system or its components, or detect and remediate problems throughout the AI lifecycle.”
TEVV tasks are called out at every stage of the AI lifecycle:

- Design and planning: validation of capabilities relative to the intended context of application
- Development: pre-deployment model validation and assessment
- Deployment: system validation, with recalibration based on internal and external factors
- Operations: ongoing monitoring and testing

This illustrates the positive impact a repeatable, trustworthy AI pipeline can have at every stage and for all stakeholders.
CalypsoAI will continue to review the RMF and provide feedback to NIST to support the creation of a robust set of guidelines for AI development. In the meantime, we are pleased with the emphasis NIST has placed on testing and validation of AI at this stage; combined with the DoD’s recent Responsible AI strategy, it makes clear that the U.S. government is making this critical aspect of AI development a priority.
NIST plans to officially publish the first version of the AI RMF in January 2023.