
Data poisoning is a type of adversarial attack where the data used to train an artificial intelligence (AI) or machine learning (ML) model is deliberately manipulated to compromise the model’s performance, introduce vulnerabilities, or skew its predictions.
Methods of Data Poisoning Include (see the sketch after this list):
- Injection: Adding false or misleading data to the training dataset.
- Modification: Altering existing data to distort the model’s learning process.
- Deletion: Removing key portions of the dataset to create gaps in learning.
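As a concrete illustration, the sketch below applies each of these manipulations to a toy NumPy dataset. The arrays, labels, and poisoned indices are hypothetical values chosen for demonstration, not drawn from any real attack.

```python
import numpy as np

# Toy training set: six samples with two features each, binary labels.
X = np.array([[0.10, 0.20], [0.90, 0.80], [0.20, 0.10],
              [0.80, 0.90], [0.15, 0.25], [0.85, 0.75]])
y = np.array([0, 1, 0, 1, 0, 1])

# Injection: append fabricated samples carrying misleading labels.
X_injected = np.vstack([X, [[0.90, 0.90], [0.95, 0.85]]])
y_injected = np.append(y, [0, 0])  # points in the class-1 region labeled 0

# Modification: flip the labels of existing samples (label flipping).
y_modified = y.copy()
y_modified[[1, 3]] = 0  # two class-1 samples relabeled as class 0

# Deletion: drop every class-1 sample, leaving a gap in coverage.
keep = y == 0
X_deleted, y_deleted = X[keep], y[keep]
```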
Impact:
Data poisoning can lead to biased predictions, reduced accuracy, and the introduction of backdoors that enable malicious exploitation. It is a key example of adversarial AI: attacks designed to undermine the integrity and reliability of AI/ML systems.
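A backdoor is the most targeted of these outcomes, so a minimal sketch may help. In the hypothetical example below, a few poisoned samples carry a trigger (an extreme value in the last feature) together with a flipped label; a model trained on the poisoned set behaves normally on clean inputs but misclassifies any input containing the trigger. The dataset, trigger value, and choice of scikit-learn's LogisticRegression are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean two-class data: class 0 clustered near the origin, class 1 offset.
X0 = rng.normal(0.0, 0.3, size=(100, 4))
X1 = rng.normal(2.0, 0.3, size=(100, 4))
X_clean = np.vstack([X0, X1])
y_clean = np.array([0] * 100 + [1] * 100)

# Backdoor poisoning: class-0-looking samples get a trigger (an extreme
# value in the last feature) and a flipped label, teaching the model to
# associate the trigger with class 1.
n_poison = 10
X_poison = rng.normal(0.0, 0.3, size=(n_poison, 4))
X_poison[:, -1] = 10.0                 # the hypothetical trigger pattern
y_poison = np.ones(n_poison, dtype=int)

model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_clean, X_poison]), np.append(y_clean, y_poison))

# Clean behavior is preserved, but the trigger flips the prediction.
clean_input = np.zeros((1, 4))
trigger_input = clean_input.copy()
trigger_input[0, -1] = 10.0
print(model.predict(clean_input))    # [0]: clean input classified correctly
print(model.predict(trigger_input))  # [1]: the backdoor fires
```

In real attacks the trigger is usually far subtler, such as a small pixel patch in an image or a rare token in text, which is what makes poisoned backdoors hard to detect by inspection.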