| Author/Contributor(s): | Dursey, Philip |
| Publisher: | No Starch Press |
| Date: | 7/28/2026 |
| Binding: | Paperback |
| Condition: | NEW |
Written for security professionals, researchers, and AI practitioners, this field manual goes beyond theory. You’ll learn how to map the new AI attack surface, anticipate adversarial moves, and simulate real-world threats to uncover hidden vulnerabilities.
You’ll Learn How To:
- Think in graphs, not checklists: trace attack paths through interconnected AI components, data pipelines, and human interactions
- Poison the well: explore how adversaries corrupt training data to implant backdoors and erode model integrity
- Fool the oracle: craft evasion attacks that manipulate AI perception at decision time
- Hijack conversations: execute prompt injection attacks that turn Large Language Models into insider threats
- Steal the brain: probe for model extraction and privacy attacks that compromise valuable IP
- Conduct full-spectrum campaigns: use the STRATEGEMS framework and the AI Kill Graph to plan, execute, and report professional-grade red team engagements
Traditional security methods can’t keep up with adversarial AI. From manipulated financial agents to compromised autonomous vehicles, real-world failures have already caused billions in losses and threatened lives. Red Teaming AI equips you to meet this challenge with practical techniques grounded in real attack scenarios and cutting-edge research.