« Back to Glossary Index
A defensive security testing and assessment process in which a group of people (the “blue team”) responds to a simulated adversarial attack intended to penetrate or corrupt a target, such as an artificial intelligence system or model. Blue teaming is used to identify security risks and vulnerabilities and to test the strength of defenses, spanning both human and technological measures.
See Red Teaming and Purple Teaming