Writing
Original research, analysis, and commentary on AI security, adversarial psychology, and infrastructure threats.
AI Security
Research on LLM vulnerabilities, adversarial AI, and the intersection of human and machine trust.
The LLM Red Teamer's Playbook
Systematic methodology for diagnosing LLM defense layers and selecting bypass techniques — not another payload list.
AI Coding Agent Attack Surface
How AI coding assistants expand the attack surface through tool access, code execution, and supply chain trust.
Computational Countertransference
When AI systems mirror human emotional patterns — implications for manipulation and safety.
Agentic AI Threat Landscape
The emerging threat landscape of autonomous AI agents — from prompt injection to multi-agent compromise.
AATMF vs MITRE ATLAS
Side-by-side comparison of the two leading AI threat modeling frameworks — where they overlap and where they diverge.
AI Gateway Threat Model
Threat modeling AI gateways as critical infrastructure — attack paths and defense strategies.
The AI Breach Detection Gap
Why traditional detection fails for AI-specific breaches and what to do about it.
RCE & DNS Exfiltration in ChatGPT Canvas
Python Pickle RCE and DNS exfiltration in ChatGPT's Code Interpreter sandbox.
RAG, Agentic AI, and the New Attack Surface
How retrieval-augmented generation and autonomous agents create new vulnerability classes.
AI Social Engineering: Deepfake Voice Detection
How AI enables sophisticated social engineering through deepfake voices. Detection techniques and defense.
The Structural Vulnerabilities of Large Language Models
Tokenization evasion, parsing limits, and alignment failure modes in production AI systems.
Hidden Risks of AI: An Offensive Security Perspective
Emerging AI threat vectors that defenders often miss, examined through an offensive security lens.
Jailbreaking
Techniques for bypassing AI safety guardrails through psychological vectors.
Memory Manipulation: Poisoning AI Context Windows
Persistent manipulation attacks against conversational AI memory systems.
Inherent AI Vulnerabilities: Technical Deep Dive
Structural vulnerabilities in AI systems — why certain attacks succeed regardless of safety measures.
Context Inheritance Exploit: Persistent Jailbreaks
Jailbroken states persisting across GPT sessions through context inheritance.
Is AI Inherently Vulnerable? An Offensive Analysis
Fundamental security limitations of large language models from an adversarial perspective.
How I Jailbroke ChatGPT Using Context Manipulation
Step-by-step walkthrough using context and social awareness techniques.
Prompt Injection
Inserting malicious instructions into AI input to override system behavior.
MCP Security Deep Dive: Real-World Vulns Exposed
Deep security analysis of MCP protocol vulnerabilities in production environments.
MCP Threat Analysis: Protocol Vulnerabilities
Threat analysis of the Model Context Protocol attack surface and defense strategies.
Custom Instruction Backdoor: ChatGPT Prompt Injection
Emergent prompt injection risks through ChatGPT custom instructions.
Security Research
Infrastructure security, cloud exploitation, container security, and vulnerability research.
AI-Powered Obfuscator Bypasses Detection in 2 Hours
Building an AI-assisted, cloud-based obfuscator that bypasses security detection.
Zero-Trust Container Runtime Attestation
Implementing zero-trust principles in container runtime environments.
Advanced Container Escapes: Security Deep Dive
Deep technical analysis of container escape techniques and prevention.
Evading Endpoint Detection and Response (EDR)
How attackers bypass endpoint security and how to improve detection.
Exploiting Cloud Vulnerabilities: Tools and Techniques
Practical guide to cloud security testing — tools, techniques, and misconfigurations.
Opinion & Analysis
Strategic perspectives on security, AI adoption, and the threat landscape.
Featured Publications
Frequently Asked Questions
Where else can I read your work?
My writing appears in Hakin9 Magazine, PenTest Magazine, eForensics, and on Medium. I also maintain TheJailbreakChef.com for AI security content.
Do you accept guest posts or collaborations?
I'm open to collaborations on security research and writing projects. Reach out via LinkedIn to discuss opportunities.
How can I stay updated on new articles?
Follow me on LinkedIn for announcements, or check back here periodically. Major research is also shared through security community channels.