

What is AI Security?

Your AI is (probably) not safe.

As the world races ahead with AI, we have to ask:


Do these new digital systems bring risks?

You bet they do.

Artificial Intelligence (AI) is being rapidly integrated into business operations and everyday life. From in-home voice assistants to self-driving cars to business analytics and more, innovators are increasingly finding uses for algorithms that learn. In fact, according to IDC, “Worldwide spending on cognitive and artificial intelligence systems will reach $19.1 billion in 2018.”

This spending is predicted to continue to grow by 46.2% annually, reaching $52.2 billion by 2021. Many industry experts we’ve spoken with believe the true figures are more than twice as high once international academic and government spending is included.

But for all the spending on artificial intelligence research, there are still only a limited number of truly “intelligent” products out there. This is partly due to well-founded fears that AI introduces new cyber risks into any digital ecosystem it interacts with. Because many AI products operate in the real world — including self-driving cars and facial recognition tools — these new AI risks can have real-world consequences.

Attacks against AI aren't theoretical — they are happening in the real world.


Adversarial attacks are increasing in number and impact at a dramatic rate. As companies and organizations are often shy about revealing security flaws, a good proxy for the rapid growth in adversarial attacks can be found in academic literature. In 2012, only four academic papers discussed security flaws in AI systems. In 2018, 734 papers were written about the topic. At the same time, these flaws are being exploited by malicious actors in the real world. Even cutting-edge systems like a Tesla car have been shown to be susceptible.

One can interfere with a self-driving car by placing “45 MPH” stickers on stop signs, tricking the AI system into speeding through a controlled intersection. This is an important attack to understand because no computer code is needed to hack the AI decision system, although familiarity with the underlying algorithm helps. It illustrates just how different an adversarial attack on AI can be from what we think of as traditional cybersecurity and hacking.

Malicious actors can exploit personal digital assistants like Amazon’s Alexa. In one case, hackers were able to play “bird chirping noises” that were actually “whispered” commands that instructed Alexa to divulge its owner’s banking information.

One could condition an adversary’s surveillance satellites to identify tanks by an impermanent characteristic such as a geometric form painted on every tank. Then, during wartime, one could paint over those forms and severely compromise the satellite’s ability to track the movements of one’s armored divisions.

Let’s return to corporate and government spending on AI. Even a quick analysis of the technology’s maturation over the last few years explains the bullishness. For example…

Can you tell which face below is computer-generated?

Trick question. They all are.

In the three years between 2014 and 2017, the quality of AI-generated faces improved dramatically — and it has continued to improve in the two years since. AI has developed to the point where machines can fool not only humans, but each other.

Source: The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, Future of Humanity Institute. https://arxiv.org/pdf/1802.07228.pdf

Fundamentally different from traditional cybersecurity

As with any emerging technology, AI isn’t all upside. For each of its paradigm-shifting benefits, AI presents a new security challenge for corporations, governments, and other users to solve. AI’s new risks partially stem from the fact that AI systems are always in collection mode. AI-powered tools rely on steady streams of data as they learn and make decisions in the physical world.

Keeping data secure as it passes through a traditional (i.e. non-AI) tech infrastructure is largely a matter of ensuring discrete data packets are encrypted while they’re at rest and in transit. In contrast, machine learning algorithms need to be exposed to new inputs in the wild that shape and continually refine their future behavior. Instead of the closed systems and environments that are the ultimate goal of traditional cybersecurity, AI systems are left partially open by design.

The open pathway between a data source (say, consumers’ credit card purchasing activity) and a data “refinery” (say, an algorithm trained to flag potential cases of credit card fraud) is a prerequisite of AI effectiveness. Of course, by providing their AI solution with continuous, real-time access to gushing streams of data, a developer creates an open point of entry that, if discovered and manipulated by adversarial actors, could be exploited toward any number of nefarious ends.
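To make the contrast concrete, here is a minimal sketch in Python (using the publicly available cryptography and scikit-learn libraries; the data, labels, and variable names are illustrative stand-ins for the credit-card example above, not any particular vendor’s pipeline):

    # Traditional security: data is a discrete payload that can be
    # encrypted at rest and in transit, then decrypted only by its owner.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()
    vault = Fernet(key)
    record = b"customer purchase record"
    ciphertext = vault.encrypt(record)            # protected at rest and in transit
    assert vault.decrypt(ciphertext) == record

    # AI pipeline: the model must keep ingesting raw, readable inputs
    # to learn, so the pathway from data source to model stays open.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    model = SGDClassifier()
    for _ in range(10):                           # simulated live data stream
        X_batch = np.random.rand(32, 4)           # new transactions arrive
        y_batch = np.random.randint(0, 2, 32)     # fraud / not-fraud labels
        model.partial_fit(X_batch, y_batch, classes=[0, 1])

The encrypted record can sit safely in a vault, but the model is only useful because it keeps consuming fresh, readable inputs, and that open pathway is exactly what an adversary targets.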

Because AI systems are always vulnerable in this way, AI Security must be built into every single one. AI Security refers to an AI system’s ability to perform as required (Robustness) and to be explainable to humans (Explainability). These features are distinct from traditional cybersecurity, which focuses more broadly on data encryption and protection. These new risks require new solutions.

Robustness

The Robustness of an AI solution can be broken down into three components:

  • Bias
  • Model Performance
  • Resistance to Adversarial Attacks

Bias

This refers to the internal bias, either human-introduced or naturally occurring, of the data used to train an AI algorithm.

Algorithms are rarely, if ever, inherently biased. Problems arise when stakeholders train their algorithms using biased datasets or feed their algorithms balanced datasets in an inconsistent or biased manner. There have already been some notable failures:

Amazon’s hiring AI taught itself a bias against female candidates.

Microsoft’s Twitter bot, Tay, became very racist, very quickly.
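One basic defensive habit is to audit training data for skew before any model is trained. The sketch below (Python with pandas; the tiny hiring dataset and its column names are purely hypothetical) shows the idea:

    # Audit a hypothetical hiring dataset for label skew across a
    # sensitive attribute before training anything on it.
    import pandas as pd

    df = pd.DataFrame({
        "gender": ["F", "M", "M", "M", "F", "M", "M", "M"],
        "hired":  [0,    1,   1,   0,   0,   1,   1,   1],
    })

    # How is the positive label distributed across groups?
    rates = df.groupby("gender")["hired"].mean()
    print(rates)

    # A large gap in the training labels is a warning sign that the
    # model will learn, and likely amplify, that same gap.
    if rates.max() - rates.min() > 0.2:
        print("Warning: training labels are heavily skewed across groups")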

Resistance to Adversarial Attacks

Perhaps the most important component of Robustness is the ability to both withstand and defend against adversarial attacks. Malicious actors can work out which factors drive an AI model’s decisions and then exploit the ways in which the model makes them.

There are many different types of adversarial attacks on AI, but a few common types are:

Evasion attacks:

An adversary leverages their understanding of an AI model to craft an input containing “noise” that is imperceptible to humans but instructive to the model, steering it toward the wrong output.
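The classic illustration of this idea is the fast gradient sign method (FGSM). The sketch below assumes PyTorch and uses a toy stand-in for a trained image classifier; it is a minimal demonstration of the technique, not an attack recipe for any specific product:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Toy stand-in for a trained image classifier (illustrative only).
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

    def fgsm(model, x, label, eps=0.03):
        """Add a small, human-imperceptible perturbation that pushes the
        model's prediction away from the true label."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        x_adv = x + eps * x.grad.sign()   # step in the direction that raises the loss
        return x_adv.clamp(0, 1).detach()

    x = torch.rand(1, 3, 32, 32)          # a "clean" image in [0, 1]
    label = torch.tensor([3])             # its true class
    x_adv = fgsm(model, x, label)         # looks unchanged to a human
    print((x_adv - x).abs().max())        # perturbation is bounded by eps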

Poison attacks:

An adversary injects incorrectly labeled data — the “poison” — into an algorithm’s training data, corrupting the decision boundaries the algorithm learns during training.
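A minimal sketch of the simplest variant, label flipping, is below (Python with scikit-learn; the dataset is synthetic and the 20% poisoning rate is an arbitrary assumption):

    # An attacker flips the labels of a slice of the training data; the
    # model is then trained on the poisoned set as if nothing happened.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # The "poison": flip the labels on 20% of the training rows.
    rng = np.random.default_rng(0)
    idx = rng.choice(len(y_tr), size=len(y_tr) // 5, replace=False)
    y_poisoned = y_tr.copy()
    y_poisoned[idx] = 1 - y_poisoned[idx]
    poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

    print("clean model accuracy:   ", clean.score(X_te, y_te))
    print("poisoned model accuracy:", poisoned.score(X_te, y_te))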

Exploratory attacks:

Rather than compromising or manipulating an AI model directly, the adversary uses the model itself to gain insight into its owners’ sensitive data or proprietary algorithms, making this strain of attack a powerful intelligence-gathering mechanism.
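One common flavor is model extraction: the attacker never touches the training pipeline, only the prediction interface. The sketch below simulates that interface locally (Python with scikit-learn; the “victim” model and the query budget are illustrative assumptions):

    # The attacker sees only the victim's predictions, yet trains a
    # surrogate model that closely mimics its behavior.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=2000, random_state=1)
    victim = DecisionTreeClassifier(random_state=1).fit(X, y)   # deployed model

    # The attacker sends its own queries and records only the returned labels.
    rng = np.random.default_rng(1)
    queries = rng.normal(size=(5000, X.shape[1]))
    stolen_labels = victim.predict(queries)

    surrogate = DecisionTreeClassifier(random_state=1).fit(queries, stolen_labels)
    agreement = (surrogate.predict(X) == victim.predict(X)).mean()
    print(f"surrogate agrees with the victim on {agreement:.0%} of inputs")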

Model Performance

The ability of an AI to operate effectively in new and varied environments, whether that entails steering around cones as a self-driving car’s “navigational brain” or making a judgement call on behalf of an insurance claims provider. This includes how an AI performs in conditions that it wasn’t specifically tested for or wasn’t explicitly designed to tackle.

Machine learning models in particular tend to be very “fragile.” A highly fragile AI won’t work well in new scenarios, meaning you can “break it” very easily. AI models may also perform in unexpected ways, passing tests on their training data but failing to behave as humans intend once deployed.
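A simple way to quantify fragility is to stress-test a trained model on inputs that drift away from its training conditions. The sketch below (Python with scikit-learn; synthetic data and arbitrary noise levels) shows the pattern:

    # Compare accuracy on clean test data with accuracy on the same data
    # under increasingly strong random perturbations.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, random_state=2)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)
    model = RandomForestClassifier(random_state=2).fit(X_tr, y_tr)

    rng = np.random.default_rng(2)
    for sigma in (0.0, 0.1, 0.5, 1.0):        # increasingly unfamiliar conditions
        acc = model.score(X_te + rng.normal(0, sigma, X_te.shape), y_te)
        print(f"noise sigma={sigma:.1f}  accuracy={acc:.2f}")
    # A steep drop-off as sigma grows is one symptom of a fragile model.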

Explainability

A user’s ability to understand — or at least see — how and why an AI made the decision it made.

The second component of AI Security is Explainability. With many current AI solutions, users can’t trace back why an AI made the decision it made — the solution is a black box. But when you’re operating in regulated markets — insurance, financial services, transportation, medicine — you need to know why something happened, both for compliance purposes and to make sure you don’t repeat the same mistake over and over again. Some notable failures of AI security that have stemmed from insufficient explainability include...

BlackRock, the world’s largest asset manager, had to take its AI offline because its programmers couldn’t explain to the risk management team how the algorithm was making its decisions.

Additionally, there’s always the risk of “unknown unknowns” — if you’re clueless as to how your AI is operating, you’re also clueless about the extent of the solution’s exposure to risk. If you don’t know how your AI works, you have no way of proactively heading off adversarial attacks, or of knowing whether you’re exposing your business to unknown operational or legal risks.

There are several ways to go about testing and improving a solution’s explainability, including parameter analysis, impact testing, and characterization.

  • We can trace how a decision propagated through a neural network to understand why and how it was made.
  • In high-dimensional settings, such as tracking an object through time or analyzing healthcare data, tracing a decision becomes far more complicated. In fact, truly explainable AI remains an unresolved research challenge.
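As one concrete example of impact testing, the sketch below measures permutation importance: how much the model’s accuracy suffers when each input feature is scrambled (Python with scikit-learn; the data and model are synthetic placeholders):

    # Permutation importance: a model-agnostic way to see which features
    # are actually driving the model's decisions.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1500, n_features=6,
                               n_informative=3, random_state=3)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)
    model = GradientBoostingClassifier(random_state=3).fit(X_tr, y_tr)

    result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                    random_state=3)
    for i in np.argsort(result.importances_mean)[::-1]:
        print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
    # Near-zero importance means a feature isn't driving decisions; a
    # single dominant feature deserves a closer, human look.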

Moving forward, the effectiveness of AI deployments in nearly every context will hinge on the extent to which stakeholders are willing — and able — to attend to these critical requirements.

This is a societal challenge,
 not only a technological one.

The risks of insecure AI have far-reaching implications. The proliferation of AI tools will be stopped in its tracks — either by regulators or by companies declining to push AI into production — if the general public does not trust AI systems to withstand something as insignificant as a sticker or to make unbiased decisions about protected classes of people. Worse still, if insecure AI is released into the wild before its vulnerabilities are addressed, citizens, businesses, and even entire governments may be put at considerable risk.

Traditional cybersecurity companies were built to solve a specific set of problems — and, in many cases, they have done so quite effectively. But new technology demands a new class of expertise. Just as the cybersecurity industry enabled software to “eat the world,” a mature AI Security industry will enable society at large to reap the benefits of AI while minimizing its risks. And while the intelligence of these emerging tools may be artificial, their paradigm-shifting potential is entirely genuine.

AI Security is necessary to address the new risks of our digital age.

Calypso AI is the market leader in cybersecurity for AI systems.

Do you trust your AI?

Get in touch to learn how we can identify your systems’ vulnerabilities and keep them secure.