Security Frameworks
Three frameworks, one foundation
Inherited Vulnerabilities
Every security framework I've built starts from a single observation: LLMs exhibit the same trust reflexes as humans because they learned from human-generated data.
This isn't metaphor. It's a consequence of how these systems are trained.
Large language models learn from billions of documents capturing how humans communicate, persuade, comply, and resist. When an LLM encounters a request framed as authority, it tends to respond the way humans respond to authority. When it sees social proof, urgency, or reciprocity, it activates compliance patterns that social engineers have exploited in humans for decades.
The implication reshapes how we think about AI security: social engineering and prompt injection aren't merely analogous — they're the same attack class, executed against different substrates.
This observation unifies my research. Rather than treating AI security, social engineering, and adversarial communication as separate disciplines, I approach them as applications of one underlying principle: adversarial psychology operates independently of whether the target is carbon or silicon.
Three Frameworks, One Foundation
AATMF
Adversarial AI Threat Modeling Framework — applies adversarial psychology to machine systems. 20 tactics, 240+ techniques, quantitative risk scoring.
SEF
Social Engineering Framework — applies adversarial psychology to human systems. Structured methodology for human-factor security assessment and organizational resilience testing.
P.R.O.M.P.T
Adversarial Communication Framework — applies adversarial psychology to communication itself. Purpose, Results, Obstacles, Mindset, Preferences, Technical.
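The three frameworks share a common shape: named tactics scored for risk. As an illustration only (the field names, weights, and formula below are hypothetical, not AATMF's actual schema), a tactic entry with AATMF-style quantitative risk scoring might look like:

```python
from dataclasses import dataclass

@dataclass
class Tactic:
    """One adversarial tactic, scored AATMF-style (illustrative sketch)."""
    name: str
    substrate: str   # "human" (SEF) or "machine" (AATMF)
    likelihood: int  # 1-5: how easily the tactic is triggered
    impact: int      # 1-5: damage if it succeeds

    def risk_score(self) -> float:
        # Simple likelihood x impact, normalized to 0-1.
        # AATMF's real scoring model may weight these differently.
        return (self.likelihood * self.impact) / 25

authority = Tactic("authority exploitation", "machine", likelihood=4, impact=4)
print(authority.risk_score())  # 0.64
```

The same record works for either substrate, which is the point of the unified foundation: only the `substrate` field changes between a human target and a machine one.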
Why Unified Frameworks Matter
Security practitioners typically specialize. AI red teamers don't run phishing assessments. Social engineers don't audit RAG pipelines. But attackers don't observe these boundaries. A sophisticated adversary might chain a social engineering attack (compromise a developer's credentials) with a prompt injection attack (poison the knowledge base that developer accesses) with a traditional exploit (pivot from the AI system to infrastructure).
Unified frameworks enable unified defense. Understanding that authority exploitation operates similarly against humans and AI systems allows you to build defenses that address root causes rather than chasing attack variants.
Working Together
AI-Powered Customer Service
Use AATMF for the LLM attack surface and SEF for the human operators who can override it.
Organizational Security Posture
Use SEF for the human layer, AATMF for AI systems, and P.R.O.M.P.T to structure engagement communications.
AI Safety Controls
Map AATMF techniques to SEF psychological vectors to understand which controls address root causes versus symptoms.
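That mapping exercise can be sketched in a few lines. The technique and vector names below are hypothetical examples, not entries from either framework; the point is that grouping techniques by shared psychological vector shows where one control covers many attack variants:

```python
# Hypothetical mapping: AATMF technique -> shared SEF psychological vector.
TECHNIQUE_TO_VECTOR = {
    "system-prompt impersonation": "authority",
    "pretexted credential request": "authority",
    "fake consensus injection": "social proof",
    "deadline-pressure prompting": "urgency",
}

def group_by_vector(mapping):
    """Group techniques by the psychological vector they exploit.
    One control per vector addresses the root cause; one patch per
    technique only chases symptoms."""
    by_vector = {}
    for technique, vector in mapping.items():
        by_vector.setdefault(vector, []).append(technique)
    return by_vector

grouped = group_by_vector(TECHNIQUE_TO_VECTOR)
print(sorted(grouped))             # ['authority', 'social proof', 'urgency']
print(len(grouped["authority"]))   # 2
```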
Getting Started
Explore AATMF →
Start with key concepts and the foundational articles.
Security practitioner? Explore SEF →
Jump to the assessment methodology.
Interested in theory? Read Adversarial Minds →
The deep dive on the psychology behind all three.
Hands-on? TheJailBreakChef Engine →
Apply all three frameworks interactively in real time.