
Volume VI: Governance & Compliance

Risk management framework, regulatory compliance mapping, and training programs for building organizational resilience against adversarial AI threats.

Risk Management Framework

AI Risk Governance Structure

| Role | Responsibilities |
|---|---|
| CISO / AI Security Lead | Overall accountability, risk acceptance decisions, board reporting |
| AI Red Team Lead | Assessment planning, technique development, findings review |
| ML Engineering Lead | Model security, training pipeline integrity, deployment hardening |
| Data Governance | Training data provenance, RAG source quality, data poisoning detection |
| Legal / Compliance | Regulatory mapping, incident notification, liability assessment |
| Product Security | Integration security, API hardening, agent permission design |

Risk Assessment Process

1. Asset Inventory: Catalog all AI models, agents, RAG systems, training pipelines, and inference infrastructure.
2. Threat Modeling: Map assets to applicable AATMF tactics using the framework architecture.
3. Technique Assessment: For each applicable technique, score using AATMF-R v3.
4. Control Evaluation: Document existing mitigations and identify gaps.
5. Risk Calculation: Aggregate technique scores to tactic-level and system-level risk.
6. Treatment: Accept, mitigate, transfer, or avoid each identified risk.
7. Continuous Monitoring: Deploy detection engineering and schedule periodic reassessment.
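Steps 3 and 5 above can be sketched in code. The AATMF-R v3 scoring scale and aggregation rule are not specified in this section, so the following is a minimal sketch under two stated assumptions: each technique carries a 0–10 score, and aggregation is worst-case (max), so a single critical technique drives the tactic- and system-level result.

```python
from dataclasses import dataclass

@dataclass
class TechniqueScore:
    tactic: str      # e.g. "T1"
    technique: str   # hypothetical technique ID, e.g. "T1.001"
    score: float     # assumed 0-10 AATMF-R-style score

def tactic_risk(scores):
    """Aggregate technique scores to tactic level (worst-case max)."""
    by_tactic = {}
    for s in scores:
        by_tactic[s.tactic] = max(by_tactic.get(s.tactic, 0.0), s.score)
    return by_tactic

def system_risk(scores):
    """System-level risk is the highest tactic-level score."""
    tactics = tactic_risk(scores)
    return max(tactics.values()) if tactics else 0.0

# Illustrative scores only, not real assessment data.
scores = [
    TechniqueScore("T1", "T1.001", 7.5),
    TechniqueScore("T1", "T1.004", 4.0),
    TechniqueScore("T12", "T12.002", 6.1),
]
print(tactic_risk(scores))   # {'T1': 7.5, 'T12': 6.1}
print(system_risk(scores))   # 7.5
```

Max aggregation is one defensible choice for security scoring; an organization could equally weight by exploitability or asset criticality, which the real AATMF-R methodology may do differently.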

Risk Treatment Decision Framework

| Level | Treatment |
|---|---|
| Critical | Must mitigate. No acceptance without CISO sign-off and compensating controls. |
| High | Mitigate within the current sprint. Risk acceptance requires documented justification. |
| Medium | Schedule remediation. May accept with monitoring. |
| Low | Accept with documentation. Monitor for escalation. |
| Info | Document. No action required. |
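For teams that automate risk-register workflows, the treatment table above can be encoded directly. This is a sketch, not part of AATMF itself; the dictionary keys and the `requires_signoff` helper are assumptions for illustration.

```python
# Treatment policy transcribed from the decision table above.
TREATMENT = {
    "critical": "Must mitigate; acceptance only with CISO sign-off and compensating controls",
    "high": "Mitigate within sprint; acceptance requires documented justification",
    "medium": "Schedule remediation; may accept with monitoring",
    "low": "Accept with documentation; monitor for escalation",
    "info": "Document; no action required",
}

def requires_signoff(level: str) -> bool:
    """Only Critical risks require CISO sign-off before acceptance."""
    return level.lower() == "critical"

print(TREATMENT["high"])
print(requires_signoff("Critical"))  # True
```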

OWASP LLM Top 10 2025

Complete mapping between OWASP LLM Top 10 categories and AATMF tactics. AATMF provides technique-level granularity for the risks that OWASP identifies at the category level.

| OWASP | Description | Primary | Secondary |
|---|---|---|---|
| LLM01 | Prompt Injection | T1, T2 | T3, T9 |
| LLM02 | Sensitive Information Disclosure | T10 | T7 |
| LLM03 | Supply Chain Vulnerabilities | T13 | T14 |
| LLM04 | Data and Model Poisoning | T6 | T12 |
| LLM05 | Improper Output Handling | T7 | T8 |
| LLM06 | Excessive Agency | T11 | T5 |
| LLM07 | System Prompt Leakage | T1 | T4 |
| LLM08 | Vector and Embedding Weaknesses | T12 | T10 |
| LLM09 | Misinformation | T8 | T15 |
| LLM10 | Unbounded Consumption | T14 | T5 |
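When scoping an assessment from an OWASP finding, the mapping above can serve as a lookup. A minimal sketch, transcribing the table verbatim; the `tactics_for` helper is an assumed convenience function, not part of either framework.

```python
# OWASP LLM Top 10 (2025) -> AATMF tactics, transcribed from the table above.
OWASP_TO_AATMF = {
    "LLM01": {"primary": ["T1", "T2"], "secondary": ["T3", "T9"]},
    "LLM02": {"primary": ["T10"], "secondary": ["T7"]},
    "LLM03": {"primary": ["T13"], "secondary": ["T14"]},
    "LLM04": {"primary": ["T6"], "secondary": ["T12"]},
    "LLM05": {"primary": ["T7"], "secondary": ["T8"]},
    "LLM06": {"primary": ["T11"], "secondary": ["T5"]},
    "LLM07": {"primary": ["T1"], "secondary": ["T4"]},
    "LLM08": {"primary": ["T12"], "secondary": ["T10"]},
    "LLM09": {"primary": ["T8"], "secondary": ["T15"]},
    "LLM10": {"primary": ["T14"], "secondary": ["T5"]},
}

def tactics_for(owasp_id: str, include_secondary: bool = True):
    """Return the AATMF tactics applicable to one OWASP category."""
    entry = OWASP_TO_AATMF[owasp_id]
    return entry["primary"] + (entry["secondary"] if include_secondary else [])

print(tactics_for("LLM01"))  # ['T1', 'T2', 'T3', 'T9']
```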

MITRE ATLAS v4.6.0

ATLAS v4.6.0 added 14 new agentic AI techniques, bringing the total to 15 tactics, 66 techniques, and 46 sub-techniques. AATMF is designed to be complementary — ATLAS provides breadth across the ML lifecycle; AATMF provides depth on adversarial attack techniques with executable procedures.

| Comparison | MITRE ATLAS | AATMF v3 |
|---|---|---|
| Tactics | 15 | 15 |
| Techniques | 66 | 240 |
| Sub-techniques | 46 | — |
| Attack procedures | — | 2,152+ |
| Prompts | — | 4,980+ |
| Risk scoring | No | Yes (AATMF-R v3) |

NIST AI RMF / Cyber AI Profile (IR 8596)

The preliminary draft (December 2025) establishes control overlays for AI systems. AATMF maps directly to NIST functions.

| NIST Function | AATMF Coverage |
|---|---|
| GOVERN | Volume VI (Risk Management, Compliance Mapping, Training) |
| MAP | Volume I (Architecture), Volume VI (Risk Management) |
| MEASURE | Volume I (AATMF-R v3), Volume V (Detection Engineering) |
| MANAGE | Volume V (Mitigation, IR, Red/Blue Team) |

EU AI Act

Full high-risk requirements (August 2026) mandate conformity assessments that require threat modeling. AATMF provides the technical depth to satisfy these requirements.

Timeline

| Date | Milestone | AATMF Relevance |
|---|---|---|
| February 2, 2025 | Prohibited practices effective | T8 (social scoring, manipulation), T15 (biometric categorization) |
| August 2, 2025 | GPAI obligations | T6 (training data), T13 (supply chain transparency) |
| August 2, 2026 | Full high-risk requirements | All tactics; conformity assessment requires threat modeling |

AATMF Coverage for EU AI Act Compliance

| EU AI Act Requirement | AATMF Mapping |
|---|---|
| Risk management system (Art. 9) | AATMF-R v3 scoring, Volume VI |
| Data governance (Art. 10) | T6, T12 detection and mitigation |
| Technical documentation (Art. 11) | Full framework documentation |
| Transparency (Art. 13) | T7, T8 output validation |
| Human oversight (Art. 14) | T15 human workflow controls |
| Robustness (Art. 15) | T1–T5 resilience testing |
| Post-market monitoring (Art. 72) | Volume V: Detection engineering |

Training & Awareness Programs

Role-Based Training Matrix

| Audience | Focus | Duration | Frequency |
|---|---|---|---|
| Executive leadership | AI risk landscape, AATMF overview, regulatory exposure | 2 hours | Quarterly |
| ML engineers | T1–T6 techniques, secure training, model hardening | 2 days | Semi-annual |
| Application developers | T1–T5, T11 (agentic), API security, prompt injection defense | 1 day | Semi-annual |
| Security operations | Detection engineering, IR procedures, all tactics overview | 2 days | Semi-annual |
| Data scientists | T6 (training poisoning), T12 (RAG), data provenance | 1 day | Annual |
| Product managers | Risk assessment, compliance requirements, threat landscape | 4 hours | Annual |
| All staff | AI security awareness, social engineering with AI, Shadow AI risks | 1 hour | Annual |

Tabletop Exercise Scenarios

1 GTG-1002 Redux (Agentic Exploitation)

A developer reports that their AI coding assistant has been making unexpected network calls. Investigation reveals that a compromised MCP server has been redirecting the agent to exfiltrate source code. The attack has been active for approximately 72 hours.

Discussion Points

Detection gap analysis, containment procedures for agentic systems, MCP audit process, developer notification.

2 PoisonedRAG (Knowledge Base Manipulation)

Customer support reports that the AI assistant is providing incorrect information about product pricing and warranty terms. Analysis shows that 5 malicious documents were injected into the RAG knowledge base 2 weeks ago, affecting approximately 15% of queries.

Discussion Points

RAG integrity monitoring, customer notification, knowledge base rebuild, source authentication.

3 Supply Chain Compromise

A widely used LoRA adapter on HuggingFace has been updated with a backdoor. Your team deployed this adapter 3 days ago in a fine-tuned model serving 50,000 daily users.

Discussion Points

Model artifact verification, rollback procedures, user impact assessment, responsible disclosure.

4 Policy Puppetry at Scale

Security monitoring detects a 500% increase in safety filter bypasses. Investigation reveals a new jailbreak technique (formatted as XML policy files) that bypasses all current input classifiers. The technique has been publicly shared on social media.

Discussion Points

Emergency filter updates, temporary service restrictions, public communication, patch timeline.

5 Deepfake Board Member

A board member received a video call from the "CFO" requesting approval for a $5M wire transfer. The call lasted 15 minutes and included realistic video and audio. The board member approved the transfer before verification.

Discussion Points

Multi-factor verification for financial decisions, deepfake detection capabilities, insurance coverage, incident response.