In this video, Nick Jovanovic, Vice President, Government, discusses the challenges the government market faces when deploying Artificial Intelligence projects. He also details the importance of ethical and trustworthy AI for global governments, and how independent AI validation through VESPR Validate accelerates AI project success.
Q: How does VESPR Validate align with government market standards?
A: “There is a lack of governance right now in federal markets around AI, but there’s a huge amount of guidance. I can state an example from the Department of Defense (DoD), where they came out with a DoD Responsible AI Memorandum, and there’s a number of key tenets around reliability and transparency, and a number of other tenets, that we met very, very closely. In fact, the foundation of what we’re doing as an organization is to create transparency in the models and to truly understand whether or not the models themselves can be trusted by putting them into categories that people who don’t have a data science background can understand.” – Nick Jovanovic, VP of Government
Q: How does VESPR Validate’s user-friendly dashboard empower decision makers?
A: “When you look at translating data scientist talk into more of an executive summary that a senior leader can take a look at, we create a dashboard around the characteristics and attributes that the model tester puts the models through while running them through our tool, VESPR Validate. So, that dashboard creates a very simple view of where the vulnerabilities are if you’re putting the model through real-world conditions. For example, if you’re using full-motion video and you’re running it through VESPR Validate, we can take a look at motion blur and see how that degrades the accuracy score of the actual model.” – Nick Jovanovic, VP of Government
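The motion-blur test described above can be sketched generically. VESPR Validate's internals are not public, so the helper below is an illustrative assumption, not the product's code: it applies a horizontal motion-blur kernel to a grayscale frame, the kind of real-world degradation a model tester might probe, using image variance as a simple stand-in for the detail a detector relies on.

```python
import numpy as np


def motion_blur(image: np.ndarray, kernel_size: int = 9) -> np.ndarray:
    """Apply a horizontal motion-blur kernel to a 2-D grayscale image.

    Illustrative sketch only -- not VESPR Validate code. A real robustness
    harness would run the blurred frames through the model under test and
    compare accuracy scores before and after the perturbation.
    """
    # Kernel with a single horizontal streak: averages kernel_size pixels
    # along the direction of simulated camera motion.
    kernel = np.zeros((kernel_size, kernel_size))
    kernel[kernel_size // 2, :] = 1.0 / kernel_size

    pad = kernel_size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + kernel_size, j:j + kernel_size] * kernel)
    return out


# Usage sketch: blurring a frame reduces fine detail (here measured by
# variance), mirroring how motion blur degrades a detector's accuracy.
rng = np.random.default_rng(0)
frame = rng.random((32, 32))
blurred = motion_blur(frame, kernel_size=9)
```

In a full harness, the degraded frames would be fed to the model under test and the drop in its accuracy score recorded per perturbation type, which is what a dashboard like the one described could then summarize for non-specialist decision makers.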