Primers on Adversarial Machine Learning

Don’t trust the robots. They are not secure. Read Ilja’s primers on adversarial machine learning.

To shed light on the world of adversarial machine learning, CalypsoAI staff member Ilja Moisejevs has prepared a series of articles / primers on Towards Data Science to inform readers about the cutting edge of the science and the risks it brings.

What Everyone Forgets about Machine Learning – Provides an overview of machine learning security threats and how these threats parallel those in traditional cybersecurity.

Will my Machine Learning System be Attacked – Here, CalypsoAI details its threat model for machine learning systems and provides a blueprint for understanding attacks.

Poisoning Attacks on Machine Learning – This is a primer on poisoning attacks on machine learning, including information on how an attacker can poison a data lake to install a backdoor.

Evasion Attacks on Machine Learning (or “Adversarial Examples”) – The most common form of attack on machine learning systems, evasion attacks are something all machine learning users must be aware of and defend against.
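To make the idea of an evasion attack concrete, here is a minimal sketch of a gradient-sign perturbation (in the spirit of the fast gradient sign method) against a toy logistic classifier. The weights, input, and epsilon are all hypothetical values chosen for illustration, not drawn from any of the primers above.

```python
import math

# Hypothetical linear classifier: predicts class 1 when sigmoid(w.x + b) > 0.5.
# Weights and bias are illustrative values, not from a real model.
w = [2.0, -3.0]
b = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    """Probability that input x belongs to class 1."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# A clean input the model correctly assigns to class 1.
x = [1.0, 0.2]
y = 1  # true label

# Gradient-sign step: nudge each feature in the direction that
# increases the logistic loss. d(loss)/dx_i = (p - y) * w_i.
p = predict(x)
grad = [(p - y) * wi for wi in w]
eps = 0.6  # perturbation budget (illustrative)
x_adv = [xi + eps * (1 if g > 0 else -1) for xi, g in zip(x, grad)]

# The perturbed input now falls on the other side of the boundary:
# predict(x) > 0.5 but predict(x_adv) < 0.5, so the model's label flips.
```

The point of the sketch is that a small, targeted change to the input, not to the model, is enough to flip the prediction; real evasion attacks apply the same idea to deep networks and images.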

Privacy Attacks on Machine Learning – Models and data can be stolen. Here CalypsoAI discusses the state of the art.

Learn more about Adversarial ML in this Use Case.


Request a demo with CalypsoAI and learn how we can work for you.
