By Ilja Moisejevs, July 23, 2019
This post originally appeared in Towards Data Science, a Medium publication.
We live in crazy times. I remember watching Star Wars as a kid and wondering how long it would take until we’d actually have speaking robots in our kitchen. Not very long — as it turned out. Fewer than 10 years in fact.
AI, and more specifically machine learning, is really bringing science fiction to reality; there is just no other way to put it. Every time I flip through Tech Review or TechCrunch, I'm blown away by the kinds of things we can now "casually" do.
Seeing through walls? Easy. Guessing materials' physical properties from video? Done. Estimating keystrokes from keyboard sound? Piece of cake. And how about generating realistic-looking faces, bodies, or poems? Or teaching machines to draw? Or to play StarCraft?
Oh, and have you seen these things roaming around?
Now if you actually go talk to people working in AI/ML — you’ll probably get one of two responses. Either they are beyond excited about what AI/ML can do and are working on the next big vision / NLP / Reinforcement Learning problem — or — they are absolutely terrified about what we stupid humans are building and believe that soon Artificial General Intelligence will convert humanity into a useless mass of paperclips. That feels to me like the general split of the community today — 50% of people think AI is our future, 50% think it’s our demise.
I want to offer a third — perhaps a more mundane — perspective on what AI and machine learning are: a new attack surface for adversaries.
Whenever a new invention comes out, most people tend to think about the new and amazing capabilities it enables. But where there is light, there must be shadow, and so new capabilities inevitably come packaged with new vulnerabilities for hackers to exploit. And exploit them they do.
Let's take a history lesson and revisit the PC market. The first PC, the Altair 8800, was released in 1975, followed by a decade of innovation culminating in 1984 with the now-familiar, mouse-equipped Apple Macintosh. What followed was an explosive adoption wave that continued through the 1990s and well into the 2000s:
Source: retrocomputing + wiki.
Unbeknownst to most users, however, a similar explosion was taking place in the market for malicious software, or "malware".
In 1988, Robert Morris experimented with flaws in Unix sendmail and built a self-replicating worm, which he released onto the internet. What started as a mere experiment ended up becoming effectively the first DoS attack, causing damages estimated between $100,000 and $10,000,000 and slowing the entire internet to a crawl for several days (unthinkable now, of course).
This was followed by the first ransomware attack in 1989, the first Linux virus (“Staog”) in 1996, and the first AOL Trojan in 1998.
Malware stats source: av-test (through wiki).
Later, the same thing happened in mobile: the 2007 iPhone moment was followed by explosive growth in smartphone adoption:
…followed by explosive growth in mobile malware:
Malware stats source: av-test.
Now — what about machine learning?
Despite all the buzz, the productization of machine learning is still nascent. A lot of the truly cutting-edge work remains confined to research labs and universities, but even looking at research alone, we can start to see some of the same trends emerge.
Machine learning research paper count by year and by area:
…vs “adversarial machine learning” (ML’s version of malware) research paper count:
So things are coming. Time to panic?
Not so fast. The good news is that as PCs took over our everyday lives and hackers got hacking, yet another market developed in parallel: that of security solutions.
The first anti-virus product was developed in 1987 by Andreas Lüning and Kai Figge for the Atari ST platform. That same year, McAfee, NOD, Flu Shot and Anti4us all came to life, and so did many more in the following two decades:
Companies source: wiki + news + crunchbase.
Soon VCs realized how big cybersecurity was going to be, and capital started to flow in:
…followed by multi-million dollar acquisitions:
Mobile, following rapid growth in mobile malware, saw a similar explosion in security players:
Companies source: wiki + news + crunchbase.
…and, eventually, security acquisitions:
And so what about machine learning?
I used to run anti-fraud and anti-money-laundering for one of the biggest British fintechs, GoCardless. My team oversaw more than $10bn in transaction volume a year, and we were constantly battling to keep crooks out of GC's circulatory system. Naturally, at some point we succumbed to the hype and decided to try machine learning.
To my surprise at the time, it worked. In fact, it worked great. Moving away from traditional heuristics, we managed to reduce money lost to fraud by 80% and improved the detection rate for accounts suspected of money laundering by as much as 20x.
There was just one problem.
We were deploying machine learning in what I consider a "critical" capacity. We handed the algorithm a job at which it was not allowed to fail, because if it did, we'd either lose a ton of money or get our financial license revoked. As the Product Manager directly responsible for GC's security, I found neither prospect particularly exciting.
So I needed to know how and when ML could fail. How could our model be exploited? Where did its inherent vulnerabilities lie? How do I know if GoCardless is under attack?
After too many late nights spent reading ML papers and sniffing around the dark web, I finally found what I was looking for. I learnt about poisoning attacks, where an attacker influences what the model learns by injecting corrupted data into its training set. I discovered adversarial examples, and how easily models can be misled at test time by carefully perturbed inputs. Finally, I learnt about privacy attacks, and that neither the training data nor the model itself is really that private.
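Poisoning is perhaps the easiest of the three to picture. Here is a minimal sketch in plain Python, using a made-up nearest-centroid "fraud detector" with invented scores (none of this reflects GoCardless's actual models):

```python
# Toy data-poisoning demo: a 1-D nearest-centroid "fraud detector".
# Training scores near 0 are legitimate, scores near 1 are fraud.
legit = [0.1, 0.2, 0.15, 0.05]
fraud = [0.9, 0.85, 0.95, 0.8]

def centroid(xs):
    return sum(xs) / len(xs)

def classify(x, c_legit, c_fraud):
    # Assign the label of the nearest class centroid.
    return "fraud" if abs(x - c_fraud) < abs(x - c_legit) else "legit"

# Before poisoning: a suspicious mid-high score gets flagged.
x = 0.6
c_legit, c_fraud = centroid(legit), centroid(fraud)   # 0.125 and 0.875
before = classify(x, c_legit, c_fraud)                # "fraud"

# Poisoning: the attacker slips a few high-scoring transactions into the
# "legitimate" training data, dragging that centroid toward fraud territory.
poisoned_legit = legit + [0.7, 0.7, 0.7, 0.7]
c_poisoned = centroid(poisoned_legit)                 # 0.4125
after = classify(x, c_poisoned, c_fraud)              # "legit"
```

Four planted training points are enough to let a score of 0.6 slip through. Real poisoning attacks are subtler, but the mechanism is the same: corrupt the training data, shift the decision boundary.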
Then, I found this…
Source: MIT tech review.
…and I was terrified.
By the end of 2019, a third of all enterprises will have deployed machine learning. That's a third of all the products that you, me, and our friends and loved ones use on a daily basis, completely naked before any attacker who knows the slightest thing about how ML works.
Yes, machine learning needs security.
ML security is a very nascent field, basically non-existent as of today. If there was one thing I learnt from my research above, it was that anyone without a math PhD would have a hard time figuring out how to make their ML secure: there are practically no solutions today, just math-heavy research papers.
Given how much of our lives we're about to entrust to algorithms, I think it's our responsibility, yours, mine, and the entire ML community's, to make sure security is not left behind as an afterthought. There's a ton we can do today to build more robust ML models, as I explain in my posts on evasion, poisoning, and privacy attacks. But more importantly, we need to go through a mindset shift: away from "accuracy at all costs" and toward a more balanced accuracy-versus-robustness approach:
Source. C1 and C2 are two models. C1 starts off as the less accurate of the two, but as the attack strength increases, it does a better job of withstanding the attack. Would you pick C1 or C2 as your ML model?
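A curve like the one above is produced by measuring accuracy on worst-case perturbed inputs at each attack strength. A minimal sketch, with an invented linear model and invented data points: for a linear model, a worst-case L-infinity perturbation of size eps shifts the score by eps times the L1 norm of the weights, so robust accuracy can be computed exactly.

```python
# Robust-accuracy curve for a toy linear classifier (all numbers invented).
# Predict class 1 if w.x + b > 0, class 0 otherwise.
w, b = [2.0, -1.0], 0.0

data = [  # (input, true label)
    ([1.0, 0.5], 1), ([0.8, -0.2], 1), ([0.2, 0.1], 1),
    ([-1.0, 0.3], 0), ([-0.5, -0.9], 0), ([-0.4, -0.3], 0),
]

def robust_accuracy(eps):
    """Accuracy when every input is perturbed adversarially by up to eps
    per feature (an L-infinity attack of strength eps)."""
    l1 = sum(abs(wi) for wi in w)
    correct = 0
    for x, y in data:
        score = sum(wi * xi for wi, xi in zip(w, x)) + b
        # The worst-case adversary pushes the score toward the wrong class
        # by eps * ||w||_1, the maximum achievable shift.
        worst = score - l1 * eps if y == 1 else score + l1 * eps
        pred = 1 if worst > 0 else 0
        correct += (pred == y)
    return correct / len(data)

for eps in [0.0, 0.1, 0.2, 0.4]:
    print(f"attack strength {eps}: accuracy {robust_accuracy(eps):.2f}")
```

Accuracy is perfect with no attack and falls as eps grows, exactly the x-axis of the chart above. For deep networks there is no closed form, so the curve has to be estimated by running actual attacks against the model.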
This post and the ones above are my attempt at taking the first baby steps towards a more robust ML future. If you found it insightful — be sure to share the insight forward.
Stay safe & secure everyone.
Get in touch to learn how we can identify your systems’ vulnerabilities and keep them secure.