Hiring criteria, methodology selection, tooling, reporting cadence, and integration with the broader security org.
Adversarial AI is a different muscle than appsec or pentest. Mandate = continuous offensive testing of LLM-based products + advisory on secure design from sprint-zero.
Successful AI red teamers come from prompt engineering, applied research, social engineering, or appsec — not certs. Look for portfolios of original adversarial work.
Without a taxonomy you're playing whack-a-mole. AATMF gives 15 tactics + 240+ techniques + AATMF-R scoring. NIST AI RMF + MITRE ATLAS are complements.
AATMF Toolkit + LLM Red Teamer's Playbook + Burp MCP toolkit + your own custom adapters. Avoid one-off scripts.
Findings → severity-based remediation SLAs. Quarterly AATMF-coverage reports. Annual external red team (rotate vendors).
AI red team findings flow into the same vulnerability-management pipeline as appsec. ML platform team owns model-side fixes; product team owns prompt/agent fixes.