Skip to main content

Think you can outsmart AI? Announcing ‘Behind The Mask’ – Our all-new cybercrime role-playing game | Play Now

Red Teaming

« Back to Glossary Index

An offensive security testing/assessment process that involves having a group of people (the “red team”) simulate an adversarial attack to penetrate or corrupt the target, which could be artificial intelligence systems or models, policies, or assumptions; used to identify vulnerabilities, demonstrate the effect of a potential attack, or test the strength of defenses

See Blue Teaming and Purple Teaming

« Back to Glossary Index