sef · v1.0
human security
sesa scoring · v1
cc by-sa 4.0
framework · v1

social
engineering.

A systematic methodology for assessing organizational resilience against social engineering — structured threat modeling, quantitative scoring, and evidence-based remediation.

SEF applies adversarial psychology to human systems. The same trust reflexes that make AI vulnerable to prompt injection make humans vulnerable to social engineering. Same attack psychology — different substrate.

at a glance
techniques · 20
phases · 7
dimensions · 6
levers · 8
"The trust reflexes that make AI vulnerable to prompt injection make humans vulnerable to social engineering. Same attack psychology, different substrate."
— core thesis · sef v1
01 · gap model
Social engineering exploits the gap between security policy and human behavior. SEF identifies four gaps that exist in every organization.

the four gaps.

01
Knowledge Gap
what employees should know → what they actually know

Failed phishing simulations, policy violations, unreported incidents.

02
Behavior Gap
what employees know → what they actually do

People know not to reuse passwords, accept tailgaters, or plug in unknown USBs. They do it anyway. Knowledge alone doesn't change behavior.

03
Culture Gap
stated security values → actual organizational culture

When leadership bypasses controls, when security is treated as an obstacle, when there's no reporting culture — the gap widens regardless of policy.

04
Process Gap
designed processes → actual workflows

Shadow IT, workarounds, undocumented procedures. The real process is never the documented process.

02 · methodology
Seven structured phases from initial threat modeling through remediation and continuous improvement.

seven assessment phases.

id · code · phase · description · deliverables
P1 · HLTM
Human Layer Threat Modeling

Systematic identification of human-centric attack surfaces before assessment begins. Map personnel with critical access, identify likely adversaries, assess susceptibility, model attack chains.

key personnel list · access matrix · threat actor profiles · attack trees w/ probability
P2 · GAP
Gap Analysis

Measure the four gaps (Knowledge, Behavior, Culture, Process) across the organization. Identify which gaps are widest and which are most exploitable by the threat actors identified in Phase 1.

gap matrix · exploitability heatmap
P3 · OSINT
Organizational Intelligence

Gather the intelligence an attacker would use: org charts, public profiles, technology stack, vendor relationships, physical layout, communication patterns. OSINT applied to the human layer.

adversary view dossier · public surface report
P4 · SCORE
SESA Scoring

Quantitative measurement across six dimensions (see below). Produces a baseline score and identifies the weakest dimensions.

dimensional scorecard · baseline & deltas
P5 · DESIGN
Operation Design

Design the assessment or red team operation. Select techniques from the taxonomy, map them to psychological levers, define scope and rules of engagement, build pretexts.

RoE document · pretext library · technique map
P6 · EXEC
Execution

Run the assessment. Two operational modes: Assessment Mode (controlled testing, low risk) or Operations Mode (full-scope red team, high risk).

evidence package · timeline & TTPs
P7 · FIX
Remediation

Evidence-based remediation roadmap. Not 'do more training' — specific controls targeting the specific gaps and weaknesses identified in Phases 2–6.

control roadmap · priority matrix · retest plan
03 · sesa scoring
Social Engineering Susceptibility Assessment. Six weighted dimensions produce a 1–10 baseline score and identify the weakest dimensions.

sesa: six dimensions.

dimension · weight · what it measures
Security Awareness
1.2×

Organizational understanding of social engineering threats and recognition capability. How well can employees identify an attack in progress?

Process Maturity
1.0×

Formalization and enforcement of security procedures across the organization. Are there verification procedures? Are they followed?

Security Culture
1.1×

Integration of security mindset into organizational values and daily operations. Do people report? Does leadership model security behavior?

Technical Controls
0.9×

Technology-based defenses that reduce the social-engineering attack surface. Email filtering, MFA, URL scanning, physical access controls.

Incident Response
1.0×

Capability to detect, respond to, and recover from social-engineering attacks. When an employee clicks, what happens next?

Organizational Resilience
0.8×

Ability to maintain operations and recover from successful attacks. If an attacker gets in through a human, how far can they go?

total · 6.0× · sum of weights
rating scale · 1–10
1.0–3.0 · Basic · Significant vulnerabilities. Immediate focus on awareness and process formalization.
3.1–5.0 · Developing · Some controls in place but gaps remain. Targeted improvement needed.
5.1–7.0 · Established · Solid foundation with room for improvement in specific dimensions.
7.1–9.0 · Advanced · Strong posture. Focus on maintaining and adapting to emerging threats.
9.1–10.0 · Optimized · Mature program with continuous improvement. Resilient to most attack types.
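The scoring mechanics above can be sketched in code. SEF specifies the six weighted dimensions, the 6.0× total weight, and the 1–10 rating bands, but not the exact aggregation formula — a weighted mean is assumed here, and the sample dimension scores are purely illustrative.

```python
# SESA baseline sketch: weighted mean of six dimension scores (each 1-10),
# then a lookup into the rating bands. Aggregation formula is an assumption.
WEIGHTS = {
    "security_awareness": 1.2,
    "process_maturity": 1.0,
    "security_culture": 1.1,
    "technical_controls": 0.9,
    "incident_response": 1.0,
    "organizational_resilience": 0.8,
}

RATINGS = [  # (upper bound, label) per the rating scale above
    (3.0, "Basic"),
    (5.0, "Developing"),
    (7.0, "Established"),
    (9.0, "Advanced"),
    (10.0, "Optimized"),
]

def sesa_score(scores: dict) -> float:
    """Weighted mean across the six dimensions (divides by the 6.0x total)."""
    total = sum(scores[dim] * w for dim, w in WEIGHTS.items())
    return round(total / sum(WEIGHTS.values()), 2)

def rating(score: float) -> str:
    """Map a 1-10 score onto the first band whose upper bound covers it."""
    return next(label for bound, label in RATINGS if score <= bound)

baseline = sesa_score({
    "security_awareness": 4.0,
    "process_maturity": 5.5,
    "security_culture": 3.5,
    "technical_controls": 7.0,
    "incident_response": 5.0,
    "organizational_resilience": 6.0,
})
print(baseline, rating(baseline))
```

Because weaker dimensions carry real weight (awareness at 1.2×), a single low score drags the baseline more than the raw average would suggest — which is the point of the weighting.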
04 · taxonomy
MITRE-aligned categorization. 20 techniques across 4 categories, each mapped to psychological levers, IOCs, and mitigations.

technique taxonomy.

SEF-1000 · 5 techniques
Pretexting

Creation and deployment of fabricated scenarios to manipulate targets into divulging information or performing actions. The foundation of all social engineering — without a believable pretext, no technique works.

SEF-2000 · 5 techniques
Phishing Operations

Electronic communication-based attacks designed to harvest credentials, deploy malware, or manipulate behavior. Includes spear phishing, whaling, BEC, and smishing.

SEF-3000 · 5 techniques
Physical Operations

In-person social engineering targeting physical access controls and face-to-face interactions. Tailgating, impersonation, badge cloning, dumpster diving.

SEF-4000 · 5 techniques
Voice Operations

Telephone-based social engineering exploiting voice communication trust and real-time interaction pressure. Vishing, callback attacks, voice deepfakes.
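The mapping described above — each technique carrying an ID, category, psychological levers, IOCs, and mitigations — can be sketched as a record. The field names and the sample BEC entry below are illustrative assumptions, not the official SEF schema.

```python
from dataclasses import dataclass

@dataclass
class Technique:
    """One taxonomy entry; shape assumed from the SEF description."""
    id: str                  # e.g. within the SEF-2000 phishing range (assumed ID)
    name: str
    category: str
    levers: list             # psychological levers (L1-L8) the technique exploits
    iocs: list               # indicators of compromise
    mitigations: list

# Hypothetical sample entry for Business Email Compromise.
bec = Technique(
    id="SEF-2004",
    name="Business Email Compromise",
    category="Phishing Operations",
    levers=["L1 Authority", "L2 Urgency"],
    iocs=["lookalike sender domain", "payment detail change request"],
    mitigations=["out-of-band verification", "dual approval for payments"],
)
print(bec.levers)
```

Structuring entries this way makes the lever mapping queryable — e.g. listing every technique that exploits L1 Authority when designing a Phase 5 operation.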

05 · psychology
Eight cognitive biases social engineers exploit. Every technique in the taxonomy maps to one or more of these levers.

eight psychological levers.

L# · lever · how it's exploited · defense
L1
Authority

Impersonating executives, law enforcement, IT administrators.

Verification procedures · out-of-band confirmation.

L2
Urgency

Artificial deadlines and crisis scenarios.

Pause procedures · escalation protocols.

L3
Trust

Impersonating known contacts · leveraging vendor relationships.

Verification for sensitive requests.

L4
Fear

Threats of account suspension, job loss, or legal action.

Reporting culture where escalation carries no penalty.

L5
Helpfulness

Requesting assistance with seemingly innocent tasks.

Awareness that helpfulness is a targeted vulnerability.

L6
Reciprocity

Small gifts or help before making the real request.

Awareness of reciprocity manipulation patterns.

L7
Social Proof

‘everyone else does this’ · ‘your colleagues approved’.

Independent verification.

L8
Scarcity

Limited time offers · exclusive access framing.

Pause before acting on scarcity claims.

06 · threat tiers
Understanding adversary capabilities calibrates defensive investments and assessment rigor.

four threat actor tiers.

T1
Opportunistic

Low-sophistication actors using widely available tools and techniques.

capability

Minimal resources · template-based · mass targeting.

T2
Organized Criminal

Professional criminal organizations with dedicated SE capabilities.

capability

Moderate resources · targeted campaigns · developed pretexts.

T3
Advanced Persistent

Sophisticated actors with long-term objectives and significant resources.

capability

Custom tooling · dedicated personnel · extended recon · multi-vector.

T4
Nation-State

State-sponsored actors with unlimited resources and strategic objectives.

capability

Full intel capabilities · years-long ops · cyber-physical convergence · insider placement.

07 · operations
SEF supports two operational modes based on objectives and risk tolerance.

two operational modes.

ASSESSMENT · low risk
Assessment Mode

Controlled testing to measure susceptibility without causing harm.

scope
  • + phishing simulations
  • + vishing assessments
  • + physical access testing
  • + OSINT analysis
deliverables
  • → SESA score
  • → gap analysis report
  • → remediation roadmap
  • → training recommendations
OPERATIONS · high risk
Operations Mode

Full-scope red team operations simulating real adversary behavior.

scope
  • + multi-vector campaigns
  • + physical intrusion
  • + objective achievement
  • + persistence testing
deliverables
  • → attack narrative
  • → compromise evidence
  • → detection gap analysis
  • → control effectiveness report
08 · hltm
Human Layer Threat Modeling — the foundation of SEF. Run before any assessment to identify human-centric attack surfaces.

human layer threat modeling.

S1
Asset Identification

Map personnel with access to critical systems, data, or decisions.

key personnel list · access matrix · value assessment
S2
Threat Mapping

Identify likely adversaries and their human-targeting capabilities.

threat actor profiles · capability assessments · historical TTPs
S3
Vulnerability Analysis

Assess organizational susceptibility to social engineering.

cultural factors · process gaps · training deficiencies
S4
Attack Path Modeling

Design likely attack chains targeting human vulnerabilities.

attack trees · kill chains · success probability
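The success-probability deliverable can be illustrated with a minimal attack-tree calculation. SEF does not prescribe a probability model; this sketch assumes the common AND/OR convention — a chain succeeds only if every step does, and an objective falls if any chain reaching it succeeds. All step probabilities here are made up for illustration.

```python
import math

def chain_p(steps):
    """AND node: every step in a single attack chain must succeed."""
    return math.prod(steps)

def objective_p(chains):
    """OR node: the objective is reached if any one chain succeeds."""
    return 1 - math.prod(1 - chain_p(c) for c in chains)

# Two hypothetical chains to the same objective.
vishing = [0.6, 0.5, 0.4]   # pretext accepted, reset granted, MFA bypassed
phishing = [0.3, 0.7]       # click-through, credential entry
print(round(objective_p([vishing, phishing]), 3))
```

Modeled this way, adding one hard verification step to a chain multiplies its probability down — which is why Phase 7 remediation targets specific chain links rather than generic awareness.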
09 · the ai connection
SEF was designed alongside AATMF because the attack psychology is the same. Three frameworks, one principle: inherited vulnerabilities.

same psychology, different substrate.

The authority bias that makes an employee comply with a fake CEO email is the same authority bias that makes an LLM comply with a prompt framed as a system instruction. The urgency that bypasses human critical thinking is the same urgency that bypasses AI safety training.

download the complete sef

tactical blueprint.
worksheets included.

The full kit — threat matrices, SESA scoring worksheets, assessment checklists, and phase-by-phase implementation guide. Print-ready.

  • + Complete SESA scoring worksheets (XLSX + PDF)
  • + Technique taxonomy with mitigations
  • + Phase-by-phase implementation guide
  • + Threat actor response matrix
sef · v1.0 · kit
no spam. unsubscribe anytime. kit is CC BY-SA 4.0.
11 · author
Original framework. Designed for practitioners — not a literature review.

about the author.

portrait
Kai Aizen

Offensive security researcher specializing in adversarial AI, social engineering, and human-layer security. Creator of the AATMF and SEF frameworks; author of Adversarial Minds.

nvd contributor · aatmf creator · sef creator
more frameworks · all frameworks →
  • AATMF → Adversarial AI threat modeling
  • P.R.O.M.P.T → Compositional grammar
  • Claude-Red → Skills library
  • Toolkit → LLM safety CLI
  • Playbook → Diagnostic methodology