Aerial Imagery AI Testing

Around the world, governments, defense agencies, and intelligence communities use cutting-edge technology to track changes in the global deployment of weapons systems, as well as the movements of military units and security forces. These real-time efforts, known as indications and warning (I&W) analysis missions, collect and analyze up-to-the-minute data on adversary movements and activities.

Modern aerial imagery technology can offer spatial resolution as fine as 5 cm per pixel, but the quality of this data, and how it is interpreted, can be the difference between mission success and failure.

In the case of the U.S. Department of Defense (DoD) and intelligence community (IC), these missions are performed through a combination of commercially available and government-developed intelligence, surveillance, and reconnaissance (ISR) platforms, which gather geospatial data at a rate of hundreds of terabytes per day—and growing rapidly. Only machine learning (ML) models can analyze, process, and derive information and intelligence from this vast amount of data. No human—or group of human analysts—has this capacity.


Risks of deploying unvalidated AI and ML models

While satellites and unmanned aerial vehicles (UAVs) excel at capturing large amounts of imagery and other data, source imagery can vary widely for many reasons, including the type of sensor (electro-optical, infrared, radar, etc.). Imagery can also be degraded by weather anomalies, time of day (particularly lighting), and other unpredictable environmental conditions. Models that are not prepared for these variations are unlikely to succeed once deployed.

There is also the ever-increasing risk of adversarial attacks on AI models, including the manipulation of data at the point of collection. Such attacks can prevent a model from identifying significant activity in named areas of interest (NAIs), leaving military movements undetected and data findings incomplete. If stakeholders are left in the dark about these activities, I&W missions face dire consequences.

Adversarial activities against AI models will continue to proliferate and become more complex in the months and years to come. If these attacks go unnoticed for any period of time, governments and intelligence communities will be pushed further behind in the global AI race. According to the final report from the U.S. National Security Commission on Artificial Intelligence (NSCAI), since AI systems rely on large data sets and algorithms, even small manipulations can lead to significant system failures. Concerningly, only three of 28 surveyed organizations in the report have tools in place to secure their ML systems.
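
The NSCAI's point about small manipulations is easy to demonstrate. The sketch below uses the fast gradient sign method (FGSM), a well-known adversarial technique, to nudge every pixel of an input by a tiny amount in the direction that most increases the model's loss; even an imperceptible perturbation can be enough to flip a prediction. The PyTorch classifier, inputs, and epsilon value here are illustrative assumptions only; they do not describe any specific ISR system or VESPR Validate's internals.

```python
# Minimal FGSM sketch (illustrative assumptions: a trained PyTorch classifier,
# an input image tensor in [0, 1], an integer class label, and a small epsilon).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` perturbed to increase the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/- epsilon in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Usage (assumes `image` is currently classified correctly):
# adv = fgsm_perturb(model, image, label, epsilon=0.01)
# print(model(image).argmax(1), model(adv).argmax(1))  # the prediction may flip
```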

To ensure dollars are wisely spent and lives are not put at risk, decision-makers must invest in tools to secure their AI systems. Attacks are evolving, and an automated testing process can support model security before, during, and after deployment.


How CalypsoAI’s VESPR Validate builds trust and drives mission success

CalypsoAI’s VESPR Validate is the solution for trustworthy AI, bringing confidence, security, and speed to the AI deployment process. Pre-deployment testing through VESPR Validate provides a clear understanding of the bounds for AI system success, testing a model’s resilience against obstructions, corruptions, and attacks.

VESPR Validate supports aerial ISR missions through the following capabilities:

Full Motion Video (FMV) testing: Test a video capture against the ML model and receive results that identify the model’s weaknesses, such as misclassifications or missed details. 

Visual corruptions: Apply corruptions to an image to understand where the model performs well and where it will return erroneous results. This process corrupts or modifies the testing data, adding fog or blur, to identify the point at which the model starts to fail and to what degree (see the sketch after this list).

Security testing: These tests help determine whether your sensitive model is secure, making it more difficult for adversaries to manipulate it through an attack. The solution can also test a model’s vulnerability to image attacks that manipulate the visual data being interpreted.
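
To make the corruption-sweep idea concrete, the sketch below applies Gaussian blur to a set of test images at increasing severity and reports the first severity level at which accuracy falls below a chosen threshold. The model_predict callable, the (image, label) sample list, the blur radii, and the 80% threshold are illustrative assumptions, not VESPR Validate's actual interface.

```python
# Corruption-severity sweep sketch (illustrative assumptions: `model_predict`
# maps a PIL image to a label, `samples` is a list of (PIL.Image, label) pairs).
from PIL import ImageFilter

def accuracy_under_blur(model_predict, samples, blur_radius):
    """Measure accuracy after blurring every test image at the given radius."""
    correct, total = 0, 0
    for image, label in samples:
        blurred = image.filter(ImageFilter.GaussianBlur(radius=blur_radius))
        if model_predict(blurred) == label:
            correct += 1
        total += 1
    return correct / max(total, 1)

def find_failure_point(model_predict, samples, radii=(0, 1, 2, 4, 8), threshold=0.8):
    """Return the first blur radius at which accuracy drops below the threshold."""
    for radius in radii:
        acc = accuracy_under_blur(model_predict, samples, radius)
        print(f"blur radius {radius}: accuracy {acc:.2%}")
        if acc < threshold:
            return radius
    return None
```

The same sweep pattern extends naturally to the other corruptions mentioned above, such as fog, noise, or lighting shifts, mapping out the conditions under which a model remains reliable.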

Through these capabilities, model vulnerabilities are quickly identified, enabling stakeholders to make rapid decisions on deployment. VESPR Validate identifies where additional model training is needed and guides MLOps teams in their missions to efficiently develop and deploy models built to withstand obstructions, errors, and more.

AI trust means having confidence that deployed systems are performing as intended with security and speed. VESPR Validate is an essential tool in this mission to build trust in AI.

Learn more about VESPR Validate

Request a demo


1 “Aerial Imagery Explained: Top Sources and What You Need to Know,” Up42, https://up42.com/blog/tech/aerial-imagery-explained-top-sources-and-what-you-need-to-know.

2 The National Security Commission on Artificial Intelligence Final Report, https://reports.nscai.gov/final-report/table-of-contents/.