This blog post originally appeared in CalypsoAI’s 2020 State of the Union Report. To read the full report, click here.

Artificial Intelligence creators and consumers continue to face challenges presented by the brittle nature of today’s systems. “Machine learning is an incredibly powerful tool because models ingest massive quantities of information, and we no longer need to annotate every single edge case,” explains Victor Ardulov, CalypsoAI’s Founding Scientist. However, these systems are still rife with technical vulnerabilities. Therefore, “it’s important to understand what perturbs or breaks your model. Without that information, you’re unable to properly gauge the efficacy of your AI,” says Ardulov.

Five years ago, several automakers, including Nissan and Toyota, promised self-driving cars by 2020. Among the many roadblocks manufacturers have faced, naturally occurring conditions such as sunlight continue to cause the technical failures known broadly as “brittleness,” stalling those releases. In January 2020, for example, a petition filed with the US National Highway Traffic Safety Administration called for the recall of 500,000 Tesla vehicles, detailing 110 crashes and 52 injuries attributed to the semi-autonomous vehicles suddenly and unexpectedly accelerating when their sensors misinterpreted the world around them. The following month, the National Transportation Safety Board faulted Tesla for the fatal high-speed crash of a vehicle that steered itself into a highway median while operating on Autopilot. Against the backdrop of a growing number of semi-autonomous car crashes, the brittleness of self-driving vehicles indicates that more testing and evaluation (T&E) of AI/ML systems is still required. Notably, companies such as Waymo and Ford, which halted road testing due to COVID-19, released their testing data as open datasets and challenged developers worldwide to improve on their algorithms, underscoring the continuing need for better data to improve future semi-autonomous vehicles.

In addition to system sensitivity caused by the natural world, ML model deployment continues to be stalled by the technical challenges created by Adversarial Examples. Also called Evasion Attacks, Adversarial Examples are inputs that have been carefully perturbed so that, to the human eye, they look identical to the original image. Bad actors introduce these perturbations deliberately to trick the model, rendering the AI system unreliable in the real world. In the figure below, an adversarial input overlaid on a typical image causes a model to misclassify a cat as guacamole with 99% confidence.

The slightly perturbed image misleads a network into classifying an image of a cat as guacamole. From Proceedings of the 35th International Conference on Machine Learning, 2018.
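How such an input is constructed varies by attack, but the core idea fits in a few lines of code. The sketch below is a minimal, hypothetical illustration of the widely used Fast Gradient Sign Method (FGSM), not the specific technique behind the cat-to-guacamole example above; the model, tensors, and epsilon value are placeholders.

```python
# Minimal FGSM sketch (illustrative only): nudge each pixel a tiny amount
# in the direction that most increases the model's loss.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return an imperceptibly perturbed copy of `image` that pushes the model
    toward a misclassification. `image` is a batched tensor and `true_label`
    a tensor of class indices; both are assumed placeholders."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step every pixel by +/- epsilon along the sign of the loss gradient.
    adversarial = image + epsilon * image.grad.sign()
    # Keep pixel values in the valid [0, 1] range and detach from the graph.
    return adversarial.clamp(0.0, 1.0).detach()
```

Even with an epsilon small enough that the change is invisible to a person, a single gradient step of this kind is often enough to flip the predicted class of an undefended model.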

Adversarial attacks have proven dangerous. In February 2020, McAfee’s Advanced Threat Research team hacked multiple Tesla models, manipulating the vehicles’ Mobileye EyeQ3 camera system into misreading speed limit signs and causing the cars to accelerate to 50 mph over the posted limit. AI security researchers have similarly found that semi-autonomous vehicles using computer vision to navigate are often tricked by intentionally manipulated stop signs (see below), causing vehicles to accelerate through an intersection rather than stop. DARPA funded a Machine Vision Disruption program in 2019, and in 2020 researchers at the University of California, Riverside received a grant under that program; their work aims to give computer vision a broader lens, providing AI with additional context clues that enable better decision-making.

Example of an Adversarial Attack on a traffic sign recognition model. From the Machine Learning Center at Georgia Tech, 2018.

One way to compensate for the brittle nature of autonomous systems is to retain tight control over their operations. Until AI is trusted and verifiable, a human-in-the-loop is necessary. Further, acknowledging AI brittleness and understanding where a model breaks are foundational to continued success in the field of AI/ML and will enable the development of robust, fully autonomous systems in the near future.

While humans can, in theory, act as an independent fail-safe, effective human intervention hinges on the speed of operations. In the case of self-driving cars, a driver technically has the ability to grab the wheel at any time, but at highway speeds that control is largely an illusion.

“We still don’t understand what our brains have that neural networks lack with regard to making determinations about an image,” says Ardulov. “However, it’s important to understand where that point is within your model. Is removing three pixels from a cat’s face, but not two, the tipping point for the model to determine that it’s looking at guacamole?”
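The tipping point Ardulov describes can be probed empirically: apply progressively larger perturbations and record the smallest one that flips the model’s prediction. The sketch below is one hypothetical way to do that; the model, perturbation function, and magnitudes are placeholders, not part of CalypsoAI’s tooling.

```python
# Hypothetical "tipping point" probe: sweep perturbation sizes and report
# the smallest one at which the model's prediction breaks.
def find_tipping_point(model, image, true_label, perturb_fn, magnitudes):
    """Return the smallest magnitude at which the prediction flips away from
    `true_label` (an integer class index), or None if it survives every
    magnitude tested. `model` is assumed to be a PyTorch-style classifier."""
    model.eval()
    for magnitude in sorted(magnitudes):
        perturbed = perturb_fn(image, magnitude)        # e.g. mask out N pixels
        predicted = model(perturbed).argmax(dim=1).item()
        if predicted != true_label:
            return magnitude                            # the model "breaks" here
    return None
```

For the pixel-removal example in the quote, `perturb_fn` might zero out the N most salient pixels; sweeping N upward from zero would show whether removing two pixels or three is what turns “cat” into “guacamole.”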

By developing a common operating picture of where a model breaks, data scientists and mission owners alike will be empowered to make well-informed decisions about whether and how to deploy their AI, or whether a model requires additional training. Through this deeper understanding, teams can appropriately gauge their AI’s trustworthiness, enabling leaders to comprehensively address complex challenges with the support of ML-powered decision-making tools.

CalypsoAI enables AI creators and consumers to comprehensively address the challenges inherent to AI development. Our solution to AI/ML brittleness and system sensitivity is our Secure Machine Learning Lifecycle (SMLC) test and evaluation environment, called VESPR. Our SMLC process draws from the widely adopted Secure Software Development Life Cycle that organizations use to build secure applications. VESPR enables machine learning creators and consumers to manage threats, secure their models, and verify the validity, efficacy, and trustworthiness of AI by integrating best practices and testing throughout each stage of the life cycle. As the industry leader in building tools for Robust and Reportable AI, CalypsoAI has worked in tandem with government and private-sector partners to develop robust algorithm assessment toolkits designed to combat problems such as data bias, adversarial attacks, and model drift.

AI offers solutions to aid in cultivating a better, more ethical world. However, AI’s potential to positively impact society is limited absent a deep understanding of the data that drives our autonomous decision-making tools. CalypsoAI delivers solutions that stand at the forefront of emerging global AI standards. Our mission is to shape a better world that values trust and transparency in technology.


This excerpt originally appeared in CalypsoAI’s 2020 State of the Union Report. To read the full report, click here.