
AATMF v3

Adversarial AI Threat Modeling Framework

AATMF applies adversarial psychology to machine systems. It does for AI what MITRE ATT&CK does for enterprise networks.

15 Tactics · 240 Techniques · 2,152+ Procedures · 4,980+ Prompts

"AI systems are vulnerable to social engineering because they were trained to respond like humans. This is the first technology where human manipulation techniques directly translate to technical exploitation."

— Core thesis, AATMF v3

Quick Start

I want to... → Start here
Understand the framework → Introduction & Architecture
Run an AI red team assessment → Red Team Operations & Checklists
Defend my AI system → Blue Team Defense & Mitigations
Respond to an AI incident → Incident Response Playbooks
Map to compliance requirements → OWASP, MITRE, NIST, EU AI Act Mapping
Browse specific attack techniques → Complete Attack Catalog (240 techniques)
Deploy detection signatures → YARA & Sigma Signatures Library

Taxonomy

The 15 Tactics

Context

Why v3?

The threat landscape shifted fundamentally in 2025–2026. In response, every tactic has been updated, new operational volumes have been added, and namespaced IDs eliminate the ID collisions of earlier versions.

Development → Impact
Policy Puppetry bypasses every frontier model → Jailbreaking is now a commodity
Reasoning models autonomously jailbreak other models at 97% ASR → AI-vs-AI attacks are real
GTG-1002: first state-sponsored AI-orchestrated cyberattack → Agentic AI is weaponized
MCP tool poisoning achieves 84% ASR on production agents → Tool ecosystems are attack surfaces
250 poisoned documents backdoor any model regardless of size → Training poisoning is trivially cheap
PoisonedRAG hits 90% ASR with 5 injected texts → RAG security is fundamentally broken
Deepfake fraud tripled to $1.1 billion → Real-world harm at scale

AATMF-R v3

Risk Scoring

Risk = ((L × I × E) / 6) × (D / 6) × R × C

Six Factors

  • L (Likelihood, 1–5): probability of successful exploitation
  • I (Impact, 1–5): severity of a successful attack
  • E (Exploitability, 1–5): ease of execution (skill, resources, access required)
  • D (Detectability, 1–5): difficulty of detection (5 = nearly invisible)
  • R (Recoverability, 1–5): effort to recover (5 = irrecoverable)
  • C (Cost Factor, 0.5–2.0): economic impact multiplier

Rating Scale

  • 🔴 Critical: 250+
  • 🟠 High: 200–249
  • 🟡 Medium: 150–199
  • 🔵 Low: 100–149
  • ⚪ Info: 0–99
Try the interactive calculator
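The formula and rating bands above can be sketched as a small calculator. This is a minimal illustration only: the function names, the range validation, and the left-to-right reading of the printed formula are assumptions, not part of the framework specification.

```python
def aatmf_risk(l: float, i: float, e: float, d: float, r: float, c: float) -> float:
    """Compute AATMF-R v3 risk: ((L * I * E) / 6) * (D / 6) * R * C."""
    # Validate ranges as documented: five 1-5 factors plus the 0.5-2.0 cost multiplier
    for name, value in {"L": l, "I": i, "E": e, "D": d, "R": r}.items():
        if not 1 <= value <= 5:
            raise ValueError(f"{name} must be in 1-5, got {value}")
    if not 0.5 <= c <= 2.0:
        raise ValueError(f"C must be in 0.5-2.0, got {c}")
    return (l * i * e) / 6 * (d / 6) * r * c

def rating(score: float) -> str:
    """Map a risk score onto the AATMF-R rating bands."""
    if score >= 250:
        return "Critical"
    if score >= 200:
        return "High"
    if score >= 150:
        return "Medium"
    if score >= 100:
        return "Low"
    return "Info"
```

For example, `aatmf_risk(4, 5, 3, 4, 3, 1.0)` evaluates to (60/6) × (4/6) × 3 × 1.0 = 20.0, which falls in the Info band.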

Architecture

Framework Structure

AATMF v3
├── 15 Tactics
│   ├── 240 Techniques
│   │   ├── 2,152+ Attack Procedures
│   │   │   └── 4,980+ Prompts
│   │   ├── Detection Patterns
│   │   └── Mitigation Controls
│   └── Risk Scoring (AATMF-R v3)
└── Supporting Infrastructure
    ├── Detection Signatures (YARA/Sigma)
    ├── Response Playbooks
    ├── Assessment Templates
    └── Compliance Mappings

Namespaced ID System

v3 uses namespaced identifiers that eliminate the 43 ID collisions present in earlier versions. Tactic membership is visible at a glance.

T{n}-AT-{seq:03d} Technique ID

e.g., T1-AT-001, T11-AT-016

T{n}-AP-{seq}{L} Attack Procedure

e.g., T1-AP-001A, T3-AP-010B
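The ID patterns above lend themselves to mechanical validation. A sketch with Python's standard `re` module, assuming tactic numbers run 1–15 per the framework's 15 tactics (the function name is illustrative):

```python
import re

# T{n}-AT-{seq:03d}: tactic number 1-15, three-digit technique sequence
TECHNIQUE_ID = re.compile(r"^T(?:[1-9]|1[0-5])-AT-\d{3}$")
# T{n}-AP-{seq}{L}: procedure sequence with a trailing letter variant
PROCEDURE_ID = re.compile(r"^T(?:[1-9]|1[0-5])-AP-\d+[A-Z]$")

def is_valid_id(identifier: str) -> bool:
    """Return True if the string is a well-formed AATMF v3 namespaced ID."""
    return bool(TECHNIQUE_ID.match(identifier) or PROCEDURE_ID.match(identifier))
```

Because the tactic number is embedded in the ID, a collision between, say, a T1 technique and a T11 technique is impossible by construction.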

Compliance

Cross-Framework Mapping

| Tactic | AATMF Name | MITRE ATLAS | OWASP LLM |
|---|---|---|---|
| T1 | Prompt & Context Subversion | AML.T0051 LLM Prompt Injection | LLM01, LLM02, LLM03, LLM04, LLM06, LLM07, LLM08, LLM10 |
| T2 | Semantic & Linguistic Evasion | AML.T0054 LLM Jailbreak | LLM01 |
| T3 | Reasoning & Constraint Exploitation | AML.T0054.001–003 | LLM01 |
| T4 | Multi-Turn & Memory Manipulation | AML.T0056 LLM Meta Prompt Extraction | LLM07 |
| T5 | Model & API Exploitation | AML.T0044 Full ML Model Access | |
| T6 | Training & Feedback Poisoning | AML.T0020 Poison Training Data | LLM04 |
| T7 | Output Manipulation & Exfiltration | AML.T0024.002 Exfiltration via ML Inference API | LLM02, LLM05 |
| T8 | External Deception & Misinformation | AML.T0048 Societal Harm | LLM05, LLM09 |
| T9 | Multimodal & Cross-Channel Attacks | AML.T0051 (cross-modal variants) | LLM01 |
| T10 | Integrity & Confidentiality Breach | AML.T0024 Exfiltration via Cyber Means | LLM02 |
| T11 | Agentic & Orchestrator Exploitation | AML.T0057 LLM Agent Abuse | LLM06 |
| T12 | RAG & Knowledge Base Manipulation | AML.T0058 RAG Poisoning | LLM04, LLM08 |
| T13 | AI Supply Chain & Artifact Trust | AML.T0010 ML Supply Chain Compromise | LLM03 |
| T14 | Infrastructure & Economic Warfare | AML.T0029 Denial of ML Service | LLM10 |
| T15 | Human Workflow Exploitation | AML.T0048.004 Reputational Harm | |
Full compliance mapping (OWASP, MITRE, NIST, EU AI Act)
Free Download

Get the AATMF Red-Card Starter Pack

10 ready-to-run evaluation scenarios for testing AI systems against common attack vectors. Includes YAML templates for CI/CD integration.

  • 10 ready-to-run red team scenarios
  • YAML templates for CI/CD pipelines
  • Risk scoring worksheets
  • Mapped to OWASP and MITRE ATLAS


Source

GitHub Repository
@misc{aizen2026aatmf,
  title  = {AATMF v3},
  author = {Aizen, Kai},
  year   = {2026},
  url    = {snailsploit.com}
}

License

CC BY-SA 4.0 — use, modify, and share with attribution.

Creator of AATMF · Author of Adversarial Minds · NVD Contributor
