Kai Aizen
Creator of AATMF • Author of Adversarial Minds • NVD Contributor
About Me
I'm Kai Aizen—security researcher, framework creator, and the mind behind "The Jailbreak Chef."
I came up through the trenches of traditional offensive security. Cloud penetration testing. Web application assessments. And more WordPress security audits than I'd like to admit—digging through plugin code, hunting for SQLi, chasing authentication bypasses in codebases held together with duct tape and prayer. That work gave me six CVEs and taught me how systems actually break. Not in theory. In production.
But my foundation isn't purely technical. I studied political science and psychology—the mechanics of influence, persuasion, and manipulation. I wrote a book about it. For years I thought those fields had nothing to do with security. I was wrong.
When I pivoted to AI security, everything clicked. The same principles that make social engineering work against humans work against language models. Jailbreaking isn't just clever prompting—it's exploiting how systems process trust, context, and authority. Prompt injection isn't a bug class; it's a persuasion problem with technical consequences. The dynamics that let an attacker bypass a help desk can bypass a guardrail.
That insight became my edge.
Today I specialize in adversarial AI: LLM vulnerabilities, agentic AI attack surfaces, and systematic threat modeling. I created the AATMF framework—now on OWASP's 2026 GenAI Security roadmap—to bring structure to a field that desperately needs it. I build tools that automate what I've learned and publish research that bridges offense and defense.
I still break WordPress plugins for fun. Old habits.
Core Research Thesis
"LLMs exhibit the same trust reflexes as humans because they learned from human-generated data."
Large language models didn't just learn grammar — they absorbed the social dynamics encoded in how we communicate. Authority, reciprocity, social proof, urgency — the psychological levers that social engineers have exploited for decades — function similarly in AI because those patterns saturate the training data.
Social engineering and prompt injection aren't merely analogous. They're the same attack class, executed against different substrates. I call this "inherited vulnerabilities" — AI systems inherited human trust patterns along with human language.
AATMF
Adversarial psychology vs AI
SEF
Adversarial psychology vs humans
P.R.O.M.P.T
Adversarial psychology vs communication
Three applications of one underlying principle.
Credentials & Recognition
NVD Contributor
Discovered and responsibly disclosed 6 WordPress plugin vulnerabilities
Framework Creator
Developed AATMF (Adversarial AI Threat Modeling Framework) and P.R.O.M.P.T methodologies
Published Author
Author of "Adversarial Minds: The Anatomy of Social Engineering and the Psychology of Manipulation"
Magazine Contributor
Technical articles published in Hakin9 Magazine
Wordfence Researcher
Listed on Wordfence threat intelligence researcher registry
Research Focus
My research spans several key areas of AI and traditional security.
AI/LLM Security
Discovering novel jailbreak techniques including context manipulation and multi-turn attacks
Researching indirect prompt injection, custom instruction backdoors, and MCP vulnerabilities
Analyzing attack surfaces in RAG systems and AI agent architectures
Exploring context window poisoning and persistent AI attacks
Traditional Security
6 CVE disclosures in WordPress plugin ecosystem
Container escape techniques and runtime attestation
Cloud vulnerability exploitation and defense
Endpoint detection bypass techniques
My CVE Portfolio
6 vulnerabilities responsibly disclosed in WordPress plugins, all documented on NVD and MITRE.
CVE-2025-9776
SQL Injection in CatFolders
CVSS 6.4CVE-2025-12163
Stored XSS in OmniPress
CVSS 5.3CVE-2025-11171
Missing Auth in Chartify
CVSS 5.3CVE-2025-11174
Info Exposure in Document Library Lite
CVSS 4.3CVE-2025-12030
IDOR in ACF to REST API
CVSS 4.3CVE-2026-1208
CSRF in Friendly Functions for Welcart
Notable Research & Publications
Context Inheritance Exploit
Discovered that jailbroken states persist across GPT sessions
Custom Instruction Backdoor
Uncovered emergent prompt injection via ChatGPT settings
MCP Security Analysis
Threat analysis of Model Context Protocol attack surfaces
Memory Manipulation Attacks
Research on poisoning AI context windows
Security Frameworks
AATMF
Systematic methodology for threat modeling AI systems. 20 tactics, 240+ techniques, quantitative risk scoring.
Learn more → CommunicationP.R.O.M.P.T
Systematic approach to prompt engineering covering Purpose, Results, Obstacles, Mindset, Preferences, and Technical considerations.
Learn more → Human SecuritySEF
Structured methodology for social engineering assessments combining psychology principles with practical attack simulations.
Learn more →Publications & Profiles
Security Magazines
Weaponization in the Cloud: Unmasking the Threats and Tools
Design Your Penetration Testing Setup
Research Platforms
In-depth security research articles and threat analysis
Open-source security tools and research projects
Industry Recognition
Security Researcher Profile — CVE discovery history and vulnerability research registry
6 CVE disclosures in WordPress plugin ecosystem with full vulnerability analysis
Connect With Me
Interested in discussing AI security research, collaboration, or speaking engagements.