A systematic methodology for assessing organizational resilience against social engineering — structured threat modeling, quantitative scoring, and evidence-based remediation.
SEF applies adversarial psychology to human systems. The same trust reflexes that make AI vulnerable to prompt injection make humans vulnerable to social engineering. Same attack psychology — different substrate.
Failed phishing simulations, policy violations, unreported incidents.
People know not to reuse passwords, accept tailgaters, or plug in unknown USBs. They do it anyway. Knowledge alone doesn't change behavior.
When leadership bypasses controls, when security is treated as an obstacle, when there's no reporting culture — the gap widens regardless of policy.
Shadow IT, workarounds, undocumented procedures. The real process is never the documented process.
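The four gaps can be scored and ranked to decide where remediation effort goes first. A minimal sketch, assuming a 0–5 scale per gap and a 3.0 weakness threshold (both illustrative choices, not values defined by SEF):

```python
# Hypothetical sketch: score the four SEF gaps (Knowledge, Behavior,
# Culture, Process). The 0-5 scale and the 3.0 threshold are
# illustrative assumptions, not framework-defined values.

GAPS = ("knowledge", "behavior", "culture", "process")

def widest_gaps(scores: dict[str, float], threshold: float = 3.0) -> list[str]:
    """Return the gaps scoring below threshold, weakest first.
    A higher score means a stronger posture (a narrower gap)."""
    weak = [g for g in GAPS if scores[g] < threshold]
    return sorted(weak, key=lambda g: scores[g])

# Example: training is acing its tests but simulations keep failing --
# high knowledge, low behavior.
measured = {"knowledge": 4.2, "behavior": 2.1, "culture": 2.8, "process": 3.5}
print(widest_gaps(measured))  # behavior first, then culture
```

This makes the framework's point concrete: an organization can score high on knowledge while behavior and culture stay the widest, most exploitable gaps.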
Systematic identification of human-centric attack surfaces before assessment begins. Map personnel with critical access, identify likely adversaries, assess susceptibility, model attack chains.
Measure the four gaps (Knowledge, Behavior, Culture, Process) across the organization. Identify which gaps are widest and which are most exploitable by the threat actors identified in Phase 1.
Gather the intelligence an attacker would use: org charts, public profiles, technology stack, vendor relationships, physical layout, communication patterns. OSINT applied to the human layer.
Quantitative measurement across six dimensions (see below). Produces a baseline score and identifies the weakest dimensions.
Design the assessment or red team operation. Select techniques from the taxonomy, map them to psychological levers, define scope and rules of engagement, build pretexts.
Run the assessment. Two operational modes: Assessment Mode (controlled testing, low risk) or Operations Mode (full-scope red team, high risk).
Evidence-based remediation roadmap. Not 'do more training' — specific controls targeting the specific gaps and weaknesses identified in Phases 2–6.
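A remediation roadmap in this spirit is a lookup from measured weaknesses to specific controls, rather than a blanket training mandate. A sketch under assumptions: the control mapping below is illustrative, not the framework's canonical list.

```python
# Hypothetical sketch: map identified weak areas to concrete controls.
# The control names are illustrative examples, not SEF-defined items.

CONTROLS = {
    "behavior": ["just-in-time phishing coaching", "MFA on payment approvals"],
    "culture": ["no-blame reporting channel", "leadership security briefings"],
    "process": ["out-of-band verification for wire transfers"],
}

def roadmap(weak_areas: list[str]) -> list[str]:
    """Flatten the controls for each identified weak area, in priority order."""
    return [c for area in weak_areas for c in CONTROLS.get(area, [])]
```

Feeding in the weakest gaps from the scoring phase yields a prioritized, evidence-linked control list instead of "do more training."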
Organizational understanding of social engineering threats and recognition capability. How well can employees identify an attack in progress?
Formalization and enforcement of security procedures across the organization. Are there verification procedures? Are they followed?
Integration of security mindset into organizational values and daily operations. Do people report? Does leadership model security behavior?
Technology-based defenses that reduce the social-engineering attack surface. Email filtering, MFA, URL scanning, physical access controls.
Capability to detect, respond to, and recover from social-engineering attacks. When an employee clicks, what happens next?
Ability to maintain operations and recover from successful attacks. If an attacker gets in through a human, how far can they go?
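The six dimensions above feed the baseline score. A minimal sketch, assuming equal weighting and a 0–5 scale per dimension (both assumptions; the dimension keys are paraphrases of the descriptions above, not official identifiers):

```python
# Hypothetical sketch of a SESA-style baseline: average six dimension
# scores and surface the weakest. Equal weights and the 0-5 scale are
# assumptions; the key names paraphrase the six dimensions above.

DIMENSIONS = (
    "awareness", "policy", "culture",
    "technical_controls", "incident_response", "resilience",
)

def baseline(scores: dict[str, float]) -> tuple[float, list[str]]:
    """Return (overall score, two weakest dimensions, weakest first)."""
    overall = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    ranked = sorted(DIMENSIONS, key=lambda d: scores[d])
    return round(overall, 2), ranked[:2]

example = {
    "awareness": 3.0, "policy": 2.5, "culture": 2.0,
    "technical_controls": 4.0, "incident_response": 3.5, "resilience": 3.0,
}
overall, weakest = baseline(example)  # 3.0, weakest: culture, then policy
```

A weighted variant (e.g. weighting incident response higher for mature organizations) would be a natural extension, but the flat average is enough to identify the weakest dimensions.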
Creation and deployment of fabricated scenarios to manipulate targets into divulging information or performing actions. The foundation of all social engineering — without a believable pretext, no technique works.
Electronic communication-based attacks designed to harvest credentials, deploy malware, or manipulate behavior. Includes spear phishing, whaling, BEC, and smishing.
In-person social engineering targeting physical access controls and face-to-face interactions. Tailgating, impersonation, badge cloning, dumpster diving.
Telephone-based social engineering exploiting voice communication trust and real-time interaction pressure. Vishing, callback attacks, voice deepfakes.
Authority: impersonating executives, law enforcement, IT administrators. Counter: verification procedures · out-of-band confirmation.
Urgency: artificial deadlines and crisis scenarios. Counter: pause procedures · escalation protocols.
Familiarity: impersonating known contacts · leveraging vendor relationships. Counter: verification for sensitive requests.
Fear: threats of account suspension, job loss, or legal action. Counter: a reporting culture where escalation carries no penalty.
Helpfulness: requesting assistance with seemingly innocent tasks. Counter: awareness that helpfulness is a targeted vulnerability.
Reciprocity: small gifts or help before making the real request. Counter: awareness of reciprocity manipulation patterns.
Social proof: "everyone else does this" · "your colleagues approved". Counter: independent verification.
Scarcity: limited-time offers · exclusive-access framing. Counter: pause before acting on scarcity claims.
Low-sophistication actors using widely available tools and techniques.
Minimal resources · template-based · mass targeting.
Professional criminal organizations with dedicated SE capabilities.
Moderate resources · targeted campaigns · developed pretexts.
Sophisticated actors with long-term objectives and significant resources.
Custom tooling · dedicated personnel · extended recon · multi-vector.
State-sponsored actors with unlimited resources and strategic objectives.
Full intel capabilities · years-long ops · cyber-physical convergence · insider placement.
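Encoding the four actor tiers above as data lets an assessment select which sophistication level to simulate. A sketch under assumptions: the short tier labels and numeric capability levels are illustrative, not SEF-defined values.

```python
# Hypothetical sketch: the four actor tiers above as a data model.
# Tier labels and numeric capability levels are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class ThreatTier:
    name: str
    capability: int  # 1 (opportunistic) .. 4 (state-sponsored)
    techniques: tuple[str, ...]

TIERS = (
    ThreatTier("opportunistic", 1, ("template phishing", "mass smishing")),
    ThreatTier("organized crime", 2, ("spear phishing", "BEC", "vishing")),
    ThreatTier("advanced persistent", 3, ("multi-vector", "extended recon")),
    ThreatTier("state-sponsored", 4, ("insider placement", "cyber-physical")),
)

def tiers_in_scope(max_capability: int) -> list[str]:
    """Tiers to include when scoping a simulation up to a capability level."""
    return [t.name for t in TIERS if t.capability <= max_capability]
```

Scoping at capability 2, for instance, keeps the simulation at opportunistic and organized-crime realism without burning APT-grade pretexts.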
Controlled testing to measure susceptibility without causing harm.
Full-scope red team operations simulating real adversary behavior.
Map personnel with access to critical systems, data, or decisions.
Identify likely adversaries and their human-targeting capabilities.
Assess organizational susceptibility to social engineering.
Design likely attack chains targeting human vulnerabilities.
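The four Phase 1 steps above converge on an attack-chain map: likely adversaries crossed with high-access personnel and the psychological levers each would pull. A minimal sketch, with all role and actor names as illustrative placeholders:

```python
# Hypothetical sketch of the Phase 1 output: enumerate candidate
# (actor, target, lever) attack chains for later scoring. All names
# below are illustrative placeholders, not SEF-defined entities.
from itertools import product

def candidate_chains(actors, personnel, levers):
    """Enumerate attack chains as 'actor -> target via lever' strings."""
    return [
        f"{actor} -> {target} via {lever}"
        for actor, target, lever in product(actors, personnel, levers)
    ]

chains = candidate_chains(
    actors=["organized crime", "state-sponsored"],
    personnel=["finance controller", "IT helpdesk", "executive assistant"],
    levers=["authority", "urgency"],
)
print(len(chains))  # 2 actors x 3 targets x 2 levers = 12 candidates
```

The exhaustive cross-product is deliberately naive; in practice each chain would then be scored for plausibility and impact so only the credible ones shape the Phase 5 design.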
The authority bias that makes an employee comply with a fake CEO email is the same authority bias that makes an LLM comply with a prompt framed as a system instruction. The urgency that bypasses human critical thinking is the same urgency that bypasses AI safety training.
The full kit — threat matrices, SESA scoring worksheets, assessment checklists, and phase-by-phase implementation guide. Print-ready.