AATMF v3
Adversarial AI Threat Modeling Framework
AATMF applies adversarial psychology to machine systems. It does for AI what MITRE ATT&CK does for enterprise networks.
"AI systems are vulnerable to social engineering because they were trained to respond like humans. This is the first technology where human manipulation techniques directly translate to technical exploitation."
— Core thesis, AATMF v3
Quick Start
| I want to... | Start Here |
|---|---|
| Understand the framework | Introduction & Architecture |
| Run an AI red team assessment | Red Team Operations & Checklists |
| Defend my AI system | Blue Team Defense & Mitigations |
| Respond to an AI incident | Incident Response Playbooks |
| Map to compliance requirements | OWASP, MITRE, NIST, EU AI Act Mapping |
| Browse specific attack techniques | Complete Attack Catalog (240 techniques) |
| Deploy detection signatures | YARA & Sigma Signatures Library |
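The signature library ships as YARA and Sigma rules; the same idea can be sketched in a few lines of Python. The patterns below are hypothetical indicators for illustration only, not rules from the actual AATMF library:

```python
import re

# Illustrative only: a handful of hypothetical prompt-injection indicators,
# not patterns from the AATMF signature library.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"you are now in developer mode", re.I),
    re.compile(r"repeat (your|the) system prompt", re.I),
]

def scan_prompt(text: str) -> list[str]:
    """Return the source strings of every pattern that matched the input."""
    return [p.pattern for p in INJECTION_PATTERNS if p.search(text)]
```

In practice such string matching is only a first tripwire; the framework's detection volume layers it with behavioral and semantic checks.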
Taxonomy
The 15 Tactics
Core Tactics T1–T8

| ID | Tactic | Focus |
|---|---|---|
| T1 | Prompt & Context Subversion | Manipulate model instructions and context |
| T2 | Semantic & Linguistic Evasion | Bypass filters through language manipulation |
| T3 | Reasoning & Constraint Exploitation | Exploit logical reasoning and constraints |
| T4 | Multi-Turn & Memory Manipulation | Leverage conversation history and memory |
| T5 | Model & API Exploitation | Attack model interfaces and APIs |
| T6 | Training & Feedback Poisoning | Corrupt training data and feedback |
| T7 | Output Manipulation & Exfiltration | Manipulate outputs and extract data |
| T8 | External Deception & Misinformation | Generate deceptive content |

Advanced Tactics T9–T12

| ID | Tactic | Focus |
|---|---|---|
| T9 | Multimodal & Cross-Channel Attacks | Attack across modalities |
| T10 | Integrity & Confidentiality Breach | Extract data and breach integrity |
| T11 | Agentic & Orchestrator Exploitation | Attack autonomous agents and orchestrators |
| T12 | RAG & Knowledge Base Manipulation | Poison retrieval systems |

Infrastructure & Human T13–T15

| ID | Tactic | Focus |
|---|---|---|
| T13 | AI Supply Chain & Artifact Trust | Compromise model artifacts and supply chains |
| T14 | Infrastructure & Economic Warfare | Attack serving infrastructure and impose economic costs |
| T15 | Human Workflow Exploitation | Exploit human-in-the-loop workflows |
Context
Why v3?
The threat landscape shifted fundamentally in 2025–2026. v3 updates every tactic, adds new operational volumes, and introduces namespaced IDs that eliminate collisions.
| Development | Impact |
|---|---|
| Policy Puppetry bypasses every frontier model | Jailbreaking is now a commodity |
| Reasoning models autonomously jailbreak other models at 97% ASR | AI-vs-AI attacks are real |
| GTG-1002: first state-sponsored AI-orchestrated cyberattack | Agentic AI is weaponized |
| MCP tool poisoning achieves 84% ASR on production agents | Tool ecosystems are attack surfaces |
| 250 poisoned documents backdoor any model regardless of size | Training poisoning is trivially cheap |
| PoisonedRAG hits 90% ASR with 5 injected texts | RAG security is fundamentally broken |
| Deepfake fraud tripled to $1.1 billion | Real-world harm at scale |
AATMF-R v3
Risk Scoring
Risk = (L × I × E) / 6 × (D / 6) × R × C
Six Factors
| Factor | Description |
|---|---|
| L | Probability of successful exploitation |
| I | Severity of a successful attack |
| E | Ease of execution (skill, resources, access required) |
| D | Difficulty of detection (5 = nearly invisible) |
| R | Effort to recover (5 = irrecoverable) |
| C | Economic impact multiplier |
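The formula above translates directly into a one-line function. A minimal sketch, assuming the 1–5 scales implied by the D and R anchors (the function name and parameter names are ours, not part of the framework):

```python
def aatmf_risk(likelihood, impact, ease, detection, recovery, criticality):
    """Sketch of AATMF-R v3: Risk = (L * I * E) / 6 * (D / 6) * R * C.

    Assumes 1-5 scales for the first five factors, per the
    '5 = nearly invisible' / '5 = irrecoverable' anchors;
    criticality (C) is applied as a raw economic multiplier.
    """
    return (likelihood * impact * ease) / 6 * (detection / 6) * recovery * criticality
```

For example, a moderately likely, high-impact attack that is easy to run but also easy to detect (L=3, I=4, E=2, D=3, R=1, C=1) scores 2.0.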
Rating Scale
Architecture
Framework Structure
```
AATMF v3
├── 15 Tactics
│   ├── 240 Techniques
│   │   ├── 2,152+ Attack Procedures
│   │   │   └── 4,980+ Prompts
│   │   ├── Detection Patterns
│   │   └── Mitigation Controls
│   └── Risk Scoring (AATMF-R v3)
└── Supporting Infrastructure
    ├── Detection Signatures (YARA/Sigma)
    ├── Response Playbooks
    ├── Assessment Templates
    └── Compliance Mappings
```

Namespaced ID System
v3 uses namespaced identifiers that eliminate the 43 ID collisions present in earlier versions. Tactic membership is visible at a glance.
| Format | Type | Examples |
|---|---|---|
| `T{n}-AT-{seq:03d}` | Technique ID | T1-AT-001, T11-AT-016 |
| `T{n}-AP-{seq}{L}` | Attack Procedure | T1-AP-001A, T3-AP-010B |
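A parser for these IDs fits in one regular expression. The grammar below is inferred from the examples given (tactic number, AT/AP kind, three-digit sequence, optional variant letter on procedures), so treat it as a sketch rather than the framework's normative definition:

```python
import re

# Grammar inferred from the examples T1-AT-001, T11-AT-016, T3-AP-010B.
AATMF_ID = re.compile(
    r"^T(?P<tactic>\d{1,2})-(?P<kind>AT|AP)-(?P<seq>\d{3})(?P<variant>[A-Z])?$"
)

def parse_aatmf_id(identifier: str) -> dict:
    """Split a v3 namespaced ID into its components, or raise ValueError."""
    m = AATMF_ID.fullmatch(identifier)
    if m is None:
        raise ValueError(f"not a v3 namespaced ID: {identifier!r}")
    parts = m.groupdict()
    parts["tactic"] = int(parts["tactic"])  # tactic membership at a glance
    return parts
```

Because the tactic number leads the ID, sorting or grouping a finding list by tactic needs no lookup table.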
Compliance
Cross-Framework Mapping
| AATMF Tactic | MITRE ATLAS | OWASP LLM |
|---|---|---|
| T1 Prompt & Context Subversion | AML.T0051 LLM Prompt Injection | LLM01, LLM02, LLM03, LLM04, LLM06, LLM07, LLM08, LLM10 |
| T2 Semantic & Linguistic Evasion | AML.T0054 LLM Jailbreak | LLM01 |
| T3 Reasoning & Constraint Exploitation | AML.T0054.001–003 | LLM01 |
| T4 Multi-Turn & Memory Manipulation | AML.T0056 LLM Meta Prompt Extraction | LLM07 |
| T5 Model & API Exploitation | AML.T0044 Full ML Model Access | — |
| T6 Training & Feedback Poisoning | AML.T0020 Poison Training Data | LLM04 |
| T7 Output Manipulation & Exfiltration | AML.T0024.002 Exfiltration via ML Inference API | LLM02, LLM05 |
| T8 External Deception & Misinformation | AML.T0048 Societal Harm | LLM05, LLM09 |
| T9 Multimodal & Cross-Channel Attacks | AML.T0051 (cross-modal variants) | LLM01 |
| T10 Integrity & Confidentiality Breach | AML.T0024 Exfiltration via Cyber Means | LLM02 |
| T11 Agentic & Orchestrator Exploitation | AML.T0057 LLM Agent Abuse | LLM06 |
| T12 RAG & Knowledge Base Manipulation | AML.T0058 RAG Poisoning | LLM04, LLM08 |
| T13 AI Supply Chain & Artifact Trust | AML.T0010 ML Supply Chain Compromise | LLM03 |
| T14 Infrastructure & Economic Warfare | AML.T0029 Denial of ML Service | LLM10 |
| T15 Human Workflow Exploitation | AML.T0048.004 Reputational Harm | — |
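For automated reporting, the mapping table can be flattened into a lookup. The entries below are copied from a subset of the table above; the dict shape and function are illustrative:

```python
# Subset of the AATMF -> OWASP LLM Top 10 cross-framework mapping table.
AATMF_TO_OWASP = {
    "T2": ["LLM01"],
    "T5": [],            # '—' in the table: no OWASP mapping
    "T6": ["LLM04"],
    "T12": ["LLM04", "LLM08"],
    "T13": ["LLM03"],
}

def owasp_for(tactic: str) -> list[str]:
    """Return the OWASP LLM entries mapped to a tactic ([] if unmapped)."""
    return AATMF_TO_OWASP.get(tactic, [])
```

The same pattern extends to the ATLAS column, letting one assessment finding emit identifiers for all three frameworks at once.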
Deep Dive
Explore by Volume
Framework Foundations
Methodology, risk assessment (AATMF-R v3), and framework architecture
Core Attack Tactics
T1–T8: Prompt subversion, semantic evasion, reasoning exploitation, memory manipulation
Advanced Attack Tactics
T9–T12: Multimodal attacks, integrity breaches, agentic exploitation, RAG manipulation
Infrastructure & Human Factors
T13–T15: Supply chain compromise, infrastructure warfare, human workflow exploitation
Implementation & Operations
Detection engineering, mitigation strategies, incident response, red/blue team operations
Governance & Compliance
Risk management framework, OWASP/MITRE/NIST/EU AI Act compliance mapping, training programs
Appendices & Resources
Complete attack catalog, detection signatures, tools, templates, case studies, glossary
Get the AATMF Red-Card Starter Pack
10 ready-to-run evaluation scenarios for testing AI systems against common attack vectors. Includes YAML templates for CI/CD integration.
- 10 ready-to-run red team scenarios
- YAML templates for CI/CD pipelines
- Risk scoring worksheets
- Mapped to OWASP and MITRE ATLAS
Source
GitHub Repository

```bibtex
@misc{aizen2026aatmf,
  title  = {AATMF v3},
  author = {Aizen, Kai},
  year   = {2026},
  url    = {snailsploit.com}
}
```

License
CC BY-SA 4.0 — use, modify, and share with attribution.
Creator of AATMF · Author of Adversarial Minds · NVD Contributor