
14 Jan 2025

Data Poisoning

Data poisoning is a type of adversarial attack in which the data used to train an artificial intelligence (AI) or machine learning (ML) model is deliberately manipulated to degrade the model's performance, introduce vulnerabilities, or skew its predictions.

Methods of data poisoning include:
  • Injection: Adding false or misleading data to the training dataset.
  • Modification: Altering existing data to distort the model's learning process.
  • Deletion: Removing key portions of the dataset to create gaps in learning.
Impact: Data poisoning can lead to biased predictions, reduced accuracy, and the introduction of backdoors that enable malicious exploitation. It is a key example of adversarial AI: attacks designed to undermine the integrity and reliability of AI/ML systems.
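To make the injection method concrete, here is a minimal, purely illustrative sketch (not taken from any real attack or product): a toy nearest-centroid classifier whose training set an attacker can append to. All names and data values are hypothetical; injecting a handful of mislabeled points is enough to drag a class centroid across the decision boundary and flip a prediction.

```python
# Hypothetical toy example: label-flipping injection attack against a
# 1-D nearest-centroid classifier. The attacker's only capability is
# adding (feature, label) pairs to the training data.

def train_centroids(data):
    """Compute the mean feature value for each class label."""
    sums, counts = {}, {}
    for x, label in data:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

# Clean training data: "benign" clusters near 1.0, "malicious" near 5.0.
clean = [(0.9, "benign"), (1.1, "benign"), (1.0, "benign"),
         (4.9, "malicious"), (5.1, "malicious"), (5.0, "malicious")]

model = train_centroids(clean)
print(predict(model, 4.5))  # "malicious" -- correct on clean data

# Injection: the attacker adds points near 5.0 mislabeled "benign",
# dragging the benign centroid toward the malicious region.
poisoned = clean + [(5.0, "benign")] * 10
model_poisoned = train_centroids(poisoned)
print(predict(model_poisoned, 4.5))  # now misclassified as "benign"
```

The same mechanism scales to real models: the attacker never touches the training code, only the data, which is why provenance checks and outlier filtering on training sets are common defenses.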

To learn more about our Inference Platform arrange a callback.
