Security Research
Vulnerability discovery & methodology
Vulnerability Research Methodology
I find vulnerabilities by looking for trust assumptions.
Every system — software, human, AI — operates on assumptions about what inputs are valid, what users are authorized, and what data is trustworthy. Vulnerabilities exist where those assumptions don't hold.
This sounds straightforward, but it changes how you approach research. Instead of running automated scanners and triaging outputs, I ask: What does this system trust? Why? What happens when that trust is misplaced?
For WordPress plugins, that means examining how they handle user input, how they verify authorization, and where they assume database-sourced data is safe. For container security, it means questioning what the runtime trusts about the images it executes. For AI systems, it means testing what the model trusts about the prompts it receives.
The methodology stays consistent. The substrates vary.
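One of those assumptions, "database-sourced data is safe," is worth making concrete. The sketch below is illustrative Python, not any real plugin's code: input is sanitized on the way in but trusted on the way out, which is exactly the stored-XSS pattern.

```python
import html

# Illustrative sketch (not any real plugin's code): a store/render pipeline
# that trusts database-sourced data, and the fix of escaping at render time.

_db = {}  # stand-in for a database table


def store_folder_name(folder_id: int, name: str) -> None:
    # Input is stored as-is; the system assumes it will be safe later.
    _db[folder_id] = name


def render_unsafe(folder_id: int) -> str:
    # Trust assumption: "data coming from the database is safe to render."
    return f"<li>{_db[folder_id]}</li>"


def render_safe(folder_id: int) -> str:
    # Trust removed: escape at the point of output, regardless of source.
    return f"<li>{html.escape(_db[folder_id])}</li>"


store_folder_name(1, "<script>alert(1)</script>")
assert "<script>" in render_unsafe(1)    # stored XSS: payload reaches the page
assert "<script>" not in render_safe(1)  # escaping removes the misplaced trust
```

The vulnerability is not in the storage step; it is in the renderer's belief about what storage guarantees.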
CVE Portfolio
Responsibly disclosed vulnerabilities in WordPress plugins, documented on NVD and MITRE.
CVE-2025-9776
Insufficient input sanitization in CatFolders folder management queries.

CVE-2025-12163 — CVSS 6.4 — Stored XSS
User input rendered without escaping in OmniPress admin context.

CVE-2025-11171 — CVSS 5.3 — Missing Authorization
Administrative functions accessible without capability checks in Chartify.

CVE-2025-11174 — CVSS 5.3 — Information Exposure
Sensitive metadata leaked through Document Library Lite API endpoints.

CVE-2025-12030 — CVSS 4.3 — IDOR
Unauthorized access to restricted field data in ACF to REST API via predictable references.

CVE-2026-1208 — CVSS 4.3 — CSRF
Missing nonce verification in Friendly Functions for Welcart allows forged requests.
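The missing-nonce pattern behind the last entry generalizes beyond WordPress. Here is a minimal Python sketch of nonce-based CSRF protection; it is illustrative only (WordPress nonces are additionally tied to a time window, and this is not Welcart's code).

```python
import hashlib
import hmac
import secrets

# Illustrative sketch of nonce-based CSRF protection, not Welcart's code.
SECRET = secrets.token_bytes(32)  # server-side secret, never sent to clients


def issue_nonce(user_id: str, action: str) -> str:
    # Bind the nonce to the user and the specific action being authorized.
    msg = f"{user_id}:{action}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()


def handle_request(user_id: str, action: str, nonce: "str | None") -> bool:
    # Skipping this check is the CVE pattern: any third-party page can then
    # forge a state-changing request on behalf of a logged-in user.
    if nonce is None:
        return False
    expected = issue_nonce(user_id, action)
    return hmac.compare_digest(expected, nonce)


good = issue_nonce("user42", "delete_item")
assert handle_request("user42", "delete_item", good)
assert not handle_request("user42", "delete_item", None)      # forged request
assert not handle_request("user42", "delete_item", "0" * 64)  # guessed nonce
```

An attacker's page can make the victim's browser send the request, but it cannot read or compute a valid nonce, so the forged request fails the check.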
Security Research Focus Areas
WordPress Security
The WordPress ecosystem powers over 40% of the web. My research focuses on common vulnerability patterns in plugin architecture: authorization bypasses, injection flaws, and insecure data handling.
Explore WordPress Research →

Adversarial AI
AI Security
LLM jailbreaking, prompt injection, agentic AI vulnerabilities, and the psychology that makes AI systems exploitable. Testing machine trust reflexes.
Explore AI Security Research →

Isolation & Escape
Container Security
Containers promised isolation. Reality delivered attack surface. Runtime vulnerabilities, escape techniques, and the gap between security assumptions and actual isolation guarantees.
Cloud Security
Cloud environments introduce complexity that creates vulnerability. Exploitation patterns in AWS, Azure, and GCP — particularly where service integrations create unexpected trust relationships.
Explore Cloud Research →

Disclosure and Testing Methodology
Identify Trust
What does the system believe about its inputs, users, and environment?
Map Surface
Where can an attacker influence those trusted inputs?
Test Boundaries
What happens when trust is violated at the edges?
Verify Impact
Does violating the assumption yield concrete security impact: data exposure, privilege escalation, code execution?
Disclose
Responsible disclosure with complete technical detail.
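The steps above can be sketched as a loop. The Python below walks them against a stub endpoint with an IDOR-style flaw; the handler, IDs, and data are invented for the example.

```python
# Illustrative walk through the five steps against a stub API
# (the handler, record IDs, and data here are invented for the example).

RECORDS = {
    101: {"owner": "alice", "data": "a-secret"},
    102: {"owner": "bob", "data": "b-secret"},
}


def get_record(record_id: int, session_user: str):
    # Identify Trust: the endpoint trusts that users only request their
    # own record IDs -- note there is no ownership check on session_user.
    return RECORDS.get(record_id)


def probe(session_user: str):
    findings = []
    # Map Surface: record_id is attacker-influenced; enumerate nearby values.
    for record_id in range(100, 105):
        # Test Boundaries: request IDs this session should not own.
        record = get_record(record_id, session_user)
        # Verify Impact: did we actually read another user's data?
        if record and record["owner"] != session_user:
            findings.append((record_id, record["owner"]))
    return findings  # Disclose: each confirmed violation, with full detail


assert probe("alice") == [(102, "bob")]
```

Each confirmed finding in the list is a violated trust assumption with demonstrated impact, which is exactly what a disclosure report documents.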
This methodology applies whether auditing a WordPress plugin or probing an AI system. Find the trust, test the trust, document what breaks.