Skip to main content
Menu
Adversarial Psychology

Security Frameworks

Three frameworks, one foundation

Core Principle

Inherited Vulnerabilities

Every security framework I've built starts from a single observation: LLMs exhibit the same trust reflexes as humans because they learned from human-generated data.

This isn't metaphor. It's a consequence of how these systems are trained.

Large language models learn from billions of documents capturing how humans communicate, persuade, comply, and resist. When an LLM encounters a request framed as authority, it tends to respond the way humans respond to authority. When it sees social proof, urgency, or reciprocity, it activates compliance patterns that social engineers have exploited in humans for decades.

The implication reshapes how we think about AI security: social engineering and prompt injection aren't merely analogous — they're the same attack class, executed against different substrates.

This observation unifies my research. Rather than treating AI security, social engineering, and adversarial communication as separate disciplines, I approach them as applications of one underlying principle: adversarial psychology operates independently of whether the target is carbon or silicon.

Integration

Why Unified Frameworks Matter

Security practitioners typically specialize. AI red teamers don't run phishing assessments. Social engineers don't audit RAG pipelines. But attackers don't observe these boundaries. A sophisticated adversary might chain a social engineering attack (compromise a developer's credentials) with a prompt injection attack (poison the knowledge base that developer accesses) with a traditional exploit (pivot from the AI system to infrastructure).

Unified frameworks enable unified defense. Understanding that authority exploitation operates similarly against humans and AI systems allows you to build defenses that address root causes rather than chasing attack variants.

Application

Working Together

AI-Powered Customer Service

Use AATMF for the LLM attack surface and SEF for the human operators who can override it.

Organizational Security Posture

Use SEF for the human layer, AATMF for AI systems, and P.R.O.M.P.T to structure engagement communications.

AI Safety Controls

Map AATMF techniques to SEF psychological vectors to understand which controls address root causes versus symptoms.