What is AI Security?

By Davey Gibian, June 5, 2019

As the world races ahead with artificial intelligence, we need to take a step back and consider the new cyber and physical risks these cutting-edge systems bring.

From smart voice assistants to self-driving cars to a range of business analytics platforms, algorithms that have the capacity to learn — i.e. machine learning — can be found at the heart of a growing number of both corporate- and consumer-facing innovations. “Interest and awareness of AI is at a fever pitch,” says IDC Research Director for Cognitive/Artificial Intelligence Systems David Schubmehl. “Every industry and every organization should be evaluating AI to see how it will affect their business processes and go-to-market efficiencies.”

By all accounts, they are. According to IDC’s Worldwide Semiannual Artificial Intelligence Systems Spending Guide, global spending on AI likely exceeded $19 billion last year, a 54 percent jump from 2017. What’s more, with AI set to be incorporated into 75 percent of enterprise applications within two years, IDC expects AI spending to skyrocket to over $52 billion in 2021.

The impetus for this rush of investment is no secret: AI has evolved from a curiosity with a great deal of potential into a bona fide technological powerhouse. For instance, at the beginning of the decade, even the best AI systems were only able to correctly categorize roughly 70 percent of the images they were shown. By 2018, this categorization accuracy had increased to 98 percent — surpassing even the average human’s accuracy (95 percent).

And yet, despite both this marked improvement and massive investment, precious few truly intelligent solutions have been brought to market. This is in part because AI introduces new cyber risks into every digital ecosystem into which it is deployed. As AI-enabled products and services become a central part of our everyday lives, their inherent risks will take on real-world consequences. With these risks in mind, it is incumbent upon AI developers to build systems with safeguards against worst-case scenarios.

However, delivering such safeguards is effectively impossible while operating within a traditional cybersecurity framework. That’s why, in recent years, a new field has begun to emerge: AI Security. In many respects, AI is different in kind from its technological predecessors, and those working in the AI Security space are striving to mitigate the new technology’s unique risks with equally new, equally powerful solutions.

Moving Beyond Traditional Cybersecurity

The primary difference between AI Security and traditional cybersecurity stems from the two approaches’ divergent treatments of data. In a traditional (i.e. non-AI) tech infrastructure, discrete data packets are typically encrypted both at rest and in transit to keep them secure. “Access control,” “domain,” “firewall,” “virtual private network” — there is a reason so many cybersecurity terms have such an isolationist bent. A closed system is a secure system, period.

By contrast, AI systems are open by design (at least partially). To make meaningful self-improvements — that is, to actually learn — machine learning algorithms need access to steady streams of data. These new and varied inputs do not appear ex nihilo, but are collected on a rolling basis.

An open pathway between a data source (say, a consumer’s credit card purchasing activity) and a data “refinery” (say, an algorithm trained to flag potential cases of credit card fraud) is a necessary condition of any effective AI system. But by providing their AI system with continuous, real-time access to multiple streams of data, a developer creates an open point of entry that, if discovered and manipulated, could be exploited for any number of nefarious purposes. This new attack vector can originate in either the digital or physical world, compromising everything from smart cameras to voice sensors.

The goal of AI Security is to counteract this persistent — and, to a degree, inbuilt — vulnerability. Doing so requires a pivot from implementing packet-level protections like encryption to attending to two algorithm-level considerations: Robustness and Explainability.

AI Security, Part I: Robustness

The Robustness of an AI system can be trifurcated into (1) its lack of unwanted bias, (2) its model performance, and (3) its resistance to adversarial attacks. Keeping these elements in mind when developing an AI system goes a long way toward minimizing the risks the system will introduce into real-world environments.

Eliminating Bias

Algorithms are rarely, if ever, inherently biased. Instead, biases arise when developers and/or end users train their algorithms with biased datasets or feed their algorithms unbiased datasets in a biased fashion. Unless training data is thoroughly vetted for unwanted bias before being funneled to an AI system, the system may end up magnifying any underlying collection biases.

For instance, in 2014, engineers at Amazon used data culled from a decade’s worth of job applications and hires to train a machine learning algorithm to rate new applicants on a scale of one to five. According to one of the engineers, “[Amazon] literally wanted it to be an engine where [we’re] going to give [it] 100 résumés, it will spit out the top five, and we’ll hire those [people].”

Unfortunately, since the majority of applicants whose résumés were included in the training data were men, the algorithm “learned” that male candidates were preferable to female candidates, and even started penalizing résumés that featured the word “women.” The engineers attempted to excise this bias from the algorithm, but Amazon chose to abandon the project shortly thereafter.
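
One practical takeaway from cases like Amazon’s is that even a simple pre-training audit can surface suspicious patterns before they are baked into a model. The sketch below, which uses hypothetical column names and an arbitrary disparity threshold, illustrates the sort of check such vetting might start with; real-world bias auditing goes considerably further.

```python
# A minimal sketch of a pre-training bias check, assuming a historical hiring dataset.
# The column names ("gender", "hired") and the 0.2 threshold are illustrative, not prescriptive.
import pandas as pd

def outcome_rate_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes (e.g. hires) within each group of a sensitive attribute."""
    return df.groupby(group_col)[outcome_col].mean()

# Toy stand-in for a decade's worth of hiring records.
history = pd.DataFrame({
    "gender": ["M", "M", "M", "F", "F", "M", "F", "M"],
    "hired":  [1,   1,   0,   0,   0,   1,   1,   0],
})
rates = outcome_rate_by_group(history, "gender", "hired")
print(rates)
if rates.max() - rates.min() > 0.2:  # arbitrary illustrative threshold
    print("Warning: large outcome disparity across groups; the labels may encode historical bias.")
```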

Even if an algorithm is trained with unbiased data — or if unwanted bias has otherwise been mitigated — if it is exposed to less-than-desirable inputs once it has been deployed, things can go south with remarkable velocity. In 2016, Microsoft unveiled its conversational AI, Tay, on Twitter. Within a matter of hours, Tay devolved from Tweeting “can I just say that im stoked to meet u? Humans are super cool” to the unacceptable “I f**king hate feminists and they should all die and burn in hell” and “Hitler was right I hate the jews.”

Microsoft claimed Tay was simply mirroring the content it encountered in its engagements with Twitter users, and while this is certainly a damning indictment of the less savory elements of internet culture, it also demonstrates just how quickly AI can careen off the rails when it is unequipped to deal with environmental inputs.

Optimizing Model Performance

Ensuring the integrity of its algorithmic foundation is a large part of equipping an AI system to excel in an environment as fraught as Twitter. Assessing a system’s model performance enables developers to gauge how effectively the system will function in conditions in which it was not tested or explicitly designed to navigate. A system’s durability — that is, its capacity for dealing with new environments without breaking down — can inform anything from a self-driving car’s ability to steer around traffic cones to an automated actuary’s ability to make a judgment call about the risks of insuring a new client.

In 2017, the consequences of an exceedingly fragile model were cast into stark relief during a demonstration of Boston Dynamics’ humanoid robot, Atlas. For nearly its entire debut, Atlas performed remarkably — and, to some, terrifyingly — well, but as it attempted to excuse itself from the stage, it tripped over a curtain and took a tumble to the ground. It had successfully shuffled all around the stage, lifted and carried boxes, and more, but its training had not prepared it to contend with dastardly drapery.

Embarrassment was arguably the most significant repercussion of Atlas’ fall, but this failure underscored the importance of preparing AI to deal with the unexpected — in a less controlled setting, a similar failure could have far graver consequences.

Resisting Adversarial Attacks

As illustrated above, biased and/or fragile AI presents a range of inherent risks, and these are only heightened when bad actors arrive on the scene. When a hacker is able to gain access to or deduce key factors that bear upon an AI system’s model performance, they are able to exploit these factors in a variety of malicious ways — if a bad actor knows how their adversary “thinks,” they are already halfway to the perfect crime.

Several years ago, adversarial attacks on AI were purely theoretical. It was hypothetically possible to confuse an AI system or force it to make a wrong move, but only within the confines of a tightly controlled lab environment. However, the last year has seen attackers hack the AI systems of big-name companies including Tesla and Amazon.

The rise of these attacks in the real world parallels a rise in academic research on the topic. While, in 2012, there were only four academic papers published about adversarial attacks against AI, by 2016, there were 91. In 2017 and 2018, this tally jumped to 375 and 734, respectively. This massive spike in research interest has been precipitated by the lack of security controls built into AI systems. Today, the threat of adversarial attacks is the number one security concern for AI.

Many common types of adversarial attacks directed at AI fall into one of three categories: evasion attacks, poisoning attacks, or exploratory attacks.

In an evasion attack, an adversary leverages their understanding of an AI system’s model to craft an input composed of “noise data” that is imperceptible to humans, but manipulatively instructive to the system. This strain of attack does not require a hacker to compromise their target, merely to “game the system.”

For instance, last fall, researchers at the Ruhr-Universität in Bochum, Germany, revealed they had figured out a way to trick smart voice assistants like Amazon’s Alexa, Apple’s Siri, and Microsoft’s Cortana into divulging their owners’ highly sensitive information. Using “psychoacoustic hiding,” the researchers delivered manipulated audio waves to the assistants. These waves sounded like chirping birds to the human ear, but the assistants registered them as commands to transmit their owners’ private data. This experiment stands as Exhibit A of why, even if an AI system’s algorithm is not manifestly flawed, the system may still be vulnerable to attack.
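
The psychoacoustic attack targeted audio, but the same principle is easiest to illustrate on images. Below is a minimal sketch, in Python, of the Fast Gradient Sign Method (FGSM), one well-known way of crafting the kind of imperceptible “noise data” described above; the toy linear model, tensor shapes, and the fgsm_evasion helper are illustrative stand-ins, not the researchers’ actual technique.

```python
# A minimal FGSM sketch (assumes PyTorch is installed); the model and input are toy stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_evasion(model, x, label, epsilon=0.03):
    """Return a copy of x nudged by at most epsilon per pixel in the direction that raises the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    # The sign of the gradient tells us which way to push each pixel; the epsilon bound
    # keeps the perturbation too small for a human to notice.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Demo on a toy classifier with random weights (a stand-in for a real image model).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).eval()
x = torch.rand(1, 1, 28, 28)                  # a "clean" input with pixel values in [0, 1]
label = model(x).argmax(dim=1)                # the model's original prediction
x_adv = fgsm_evasion(model, x, label)
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))  # the prediction may flip
```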

A poisoning attack requires more forethought. In this strain of attack, an adversary injects incorrectly labeled data — the “poison” — into an algorithm’s training data. As the algorithm learns, the poison surreptitiously shapes the development of its classificatory architecture, training it to recognize something other than what was intended. Strictly speaking, an adversary need not even hack a system’s training data to execute an effective poisoning attack.

For instance, imagine an AI-powered military satellite that is designed to track the movements of an enemy’s tanks. If the enemy discovers the satellite is going to be trained with surveillance images captured during a forthcoming window of time, it might paint a distinct geometric form on top of each of its tanks. Because algorithms learn by homing in on common characteristics, the satellite might start identifying objects as tanks by virtue of their featuring this form. Once this connection is established, the enemy would be able to undermine the satellite’s effectiveness simply by giving its armored divisions a new paint job.
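
To make the mechanics concrete, the sketch below poisons a training set with a planted “trigger,” loosely mirroring the painted-tank scenario; the random dataset, the corner-patch trigger, and the poison_dataset helper are hypothetical stand-ins rather than an account of any real incident.

```python
# A minimal data-poisoning sketch using NumPy; all data here is randomly generated for illustration.
import numpy as np

def poison_dataset(images, labels, target_class, poison_fraction=0.05, seed=0):
    """Stamp a small bright square onto a fraction of images and relabel them as target_class.

    A model trained on this data tends to associate the square itself with the target class,
    so at inference time anything carrying the square gets misclassified.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(len(images) * poison_fraction), replace=False)
    images[idx, -4:, -4:] = 1.0   # the trigger: a 4x4 patch in one corner
    labels[idx] = target_class    # the "poison": deliberately wrong labels
    return images, labels

# Example with stand-in data: 1,000 grayscale 28x28 images across 10 classes.
X = np.random.rand(1000, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
X_poisoned, y_poisoned = poison_dataset(X, y, target_class=7)
```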

Finally, exploratory attacks provide hackers with a powerful intelligence-gathering mechanism. Unlike evasion and poisoning attacks, this strain of attack is calibrated less to compromise or manipulate an AI model than to use the model to gain insight into its owners’ sensitive data or proprietary algorithms.

In the former case — a model inversion attack — an adversary feeds a model a stream of inputs, making careful note of the outputs that are produced. This enables the adversary to deduce details about the system’s training data, data that in some circumstances may be immensely valuable. In the latter case — a model extraction attack — an adversary performs a similar input/output analysis in order to reverse engineer an AI system’s underlying model. In our age of intense cyber-competition, the value of co-opting an adversary’s proprietary tools in this manner cannot be overstated.
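
As a rough illustration of how little access an extraction attack requires, the sketch below plays both sides: a “victim” model answers queries, and a surrogate is trained purely on those answers. The victim, the probe data, and the agreement metric are stand-ins used only to show the input/output analysis described above.

```python
# A minimal model-extraction sketch using scikit-learn; the "victim" is a stand-in model
# that, in a real attack, the adversary could only query, never inspect directly.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Victim model trained on private data the attacker never sees.
X_private = rng.normal(size=(2000, 5))
y_private = (X_private[:, 0] + 2 * X_private[:, 1] > 0).astype(int)
victim = LogisticRegression().fit(X_private, y_private)

# The attacker sends probe inputs and records only the victim's predictions.
X_probe = rng.normal(size=(5000, 5))
y_stolen = victim.predict(X_probe)

# A surrogate trained on (probe, prediction) pairs approximates the victim's decision boundary.
surrogate = DecisionTreeClassifier(max_depth=5).fit(X_probe, y_stolen)
agreement = (surrogate.predict(X_probe) == y_stolen).mean()
print(f"Surrogate agrees with the victim on {agreement:.1%} of probe inputs")
```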

It is similarly difficult to overstate the scope of risk presented by adversarial attacks against AI systems. Unlike traditional cyberattacks — which generally exploit software vulnerabilities in digital systems — attacks against AI systems can be executed using entirely digital mechanisms (input/output analyses), entirely physical mechanisms (strategic paint jobs), or a mix of both (faux bird chirps).

AI Security, Part II: Explainability

In addition to its Robustness, an AI system’s security also depends on its Explainability. Of the few truly intelligent solutions on the market, the majority do not afford users the ability to explore how the AI arrived at its decisions — in other words, they are black boxes.

Such opacity is problematic across the board — among other things, it creates conditions wherein an AI system can make the same mistakes over and over again without its users catching on — but it is disqualifying in highly regulated industries like insurance, financial services, medicine, and transportation. In these spaces, process auditing is at the core of compliance, and if an AI system’s users are unable to furnish an explanation of why it behaved like it did, regulators’ retaliation will be swift.

In fact, in November, the specter of compliance violations prompted the world’s largest asset manager, BlackRock, to sideline two AI models — one designed to forecast market volume, the other to predict redemption risk. Despite their demonstrated promise, these models were deemed too opaque because, as BlackRock Head of Liquidity Research Stefano Pasquali explains, “The senior people want[ed] to see something they [could] understand.”

Beyond compliance considerations, insufficient Explainability can both undercut an AI system’s operational efficacy and render it more vulnerable to adversarial attacks. If a system is virtually unexplainable, its users not only face the known risks outlined above, but, to borrow some Rumsfeldian phraseology, also face “unknown unknowns.” And while these unknowns may be unknown to the system’s users, they may very well be front and center on any number of adversaries’ radars. Needless to say, if an organization is unaware of the details of its exposure, it will be ill-prepared to proactively head off attacks aimed at exploiting its AI vulnerabilities.

However, perhaps the most striking aspect of this conversation is that completely transparent, completely explainable AI remains an unresolved research challenge. From parameter analysis and impact testing to tracing neural networks’ decision pathways, there are ways to go about improving an AI system’s Explainability, but these methods have yet to coalesce into a foolproof approach.
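
As one concrete example of those partial methods, the sketch below applies permutation importance: shuffle each input feature in turn and measure how much the model’s accuracy drops. The random-forest model and synthetic data are stand-ins; a check like this probes a model’s behavior but falls well short of fully explaining it.

```python
# A minimal permutation-importance sketch using scikit-learn; the data and model are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)   # by construction, only features 0 and 2 matter

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffling an important feature should noticeably hurt accuracy; shuffling an irrelevant one should not.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```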

Looking Forward

Ultimately, the magnitude of the returns on the ongoing rush of investment in AI will hinge on stakeholders’ ability to address the risks presented by the technology’s insecurities. The still nascent rollout of AI solutions will be stopped dead in its tracks if the general public does not trust AI to make unbiased decisions about protected classes of people or to overcome the confusion caused by “chirped” commands. Worse still, if insecure AI is released into the wild before its vulnerabilities are addressed, citizens, businesses, and even entire governments may be put at considerable risk.

Traditional cybersecurity companies were built to solve a specific set of problems, and in many cases, they have done so quite adeptly. But new technology demands a new class of expertise. Moving forward, it will take a mature Cybersecurity for AI industry to enable society at large to reap the rewards of AI.

A new world is at our fingertips, but as with any mode of exploration, it is important to tread lightly as we forge ahead. For us at Calypso AI, that means building Cybersecurity for AI that mitigates the increasing risks associated with adversarial attacks while simultaneously helping our clients navigate the other aspects of their AI systems’ potential security risks.

In short, we are working towards a world where AI is trusted and secure.

Do you trust your AI?

Get in touch to learn how we can identify your systems’ vulnerabilities and keep them secure.