What Is Social Engineering
Last Updated: May 2026
Social engineering is the manipulation of human psychology to bypass security controls — no malware required, no exploits, no zero-days. An attacker calls your helpdesk, convinces a technician they’re a locked-out executive, and gets their MFA reset. That’s social engineering. It is now the initial access method in 36% of all cyber intrusions, according to Mandiant’s M-Trends 2026 report covering 500,000+ hours of incident response work.
The threat has changed structurally in the last 18 months. Voice phishing has overtaken email as the primary social engineering vector. Pretexting has overtaken phishing as the most common attack type for the first time in recorded DBIR history. And AI now powers over 80% of social engineering activity, according to Abnormal Security’s 2025 research. Understanding social engineering in 2026 means understanding a discipline that is simultaneously older than the internet and faster-evolving than almost any other attack category.
The Three-Level Definition
Simple: Social engineering is when an attacker tricks a person — rather than hacking a system — into doing something harmful: sharing a password, wiring money, clicking a link, or granting access.
Technical: Social engineering exploits cognitive biases and social norms — authority compliance, urgency responses, reciprocity instincts, and trust heuristics — to manipulate targets into bypassing security controls they would otherwise maintain. Unlike technical exploits that target code vulnerabilities, social engineering targets the predictable gaps between human decision-making under pressure and human decision-making under calm reflection. The attack surface is the user’s psychology, not the network perimeter.
The analogy: Think of network security as a locked vault. Years of investment have made the vault nearly impenetrable. Social engineering is the attacker who walks up to the vault’s security guard, shows a convincing fake ID, explains there’s an emergency upstairs, and asks the guard to open the door “just this once.” The vault’s lock is irrelevant. The guard is the vulnerability.
The Numbers That Define the Threat in 2026
Before mapping the attack types, the scale demands context. These are not estimates — they are incident response measurements from the largest datasets in the industry.
The 2026 Verizon Data Breach Investigations Report confirms that the most frequent causes of breaches continue to heavily involve the human element — including social engineering, phishing, and stolen credentials.
- 60% of all data breaches in 2025 involved the human element (Verizon DBIR 2025, 22,000+ incidents)
- 98% of cyberattacks involve some form of social engineering (Sprinto, cross-industry analysis)
- $16.6 billion — US losses from social engineering attacks in 2024, up 33% year-over-year (FBI Internet Crime Report 2024)
- $4.89 million — average cost of a Business Email Compromise (BEC) attack
- 442% — year-over-year surge in vishing (voice phishing) attacks (CrowdStrike 2025)
- 50%+ — share of social engineering incidents now driven by pretexting, overtaking phishing for the first time (Verizon DBIR 2025)
- 8% — the share of employees who account for 80% of security incidents in their organizations (Verizon DBIR 2025)
That last figure is the one security professionals find most surprising — and most actionable. Social engineering risk is not distributed evenly across an organization. It concentrates in a small, identifiable population. More on this in the protection section.
The Axis Intelligence Social Engineering Attack Matrix 2026
Every guide to social engineering lists the same attack types. What no guide does is organize them by the two variables that actually determine an attack’s success and defense approach: the delivery channel (how the attacker reaches the target) and the psychological trigger (the cognitive mechanism exploited).
According to Axis Intelligence’s synthesis of Verizon DBIR 2025, Mandiant M-Trends 2026, CrowdStrike 2025, and FBI IC3 2024 data, this is the definitive attack taxonomy for 2026:
| Attack Type | Delivery Channel | Primary Psychological Trigger | 2026 Trend | AI-Amplified? |
|---|---|---|---|---|
| Phishing | Email (bulk) | Trust + Urgency | Declining as dominant vector (now 6% of initial access) | Yes — 42% higher success rate with AI lures |
| Spear Phishing | Email (targeted) | Familiarity + Authority | Increasing volume; increasingly AI-personalized | Yes — near-human personalization at scale |
| Vishing | Voice (phone) | Authority + Urgency | +442% YoY; now #1 vector in cloud compromises | Yes — AI voice cloning active |
| Smishing | SMS | Urgency + Fear | 19–36% click rates vs. 2–4% for email phishing | Partial — AI templates increasing |
| Pretexting | Multi-channel | Trust + Familiarity | Now 50%+ of all social engineering incidents | Yes — AI backstory generation |
| BEC / CEO Fraud | Email + Voice | Authority + Urgency | Volume +103% in 2024; $6.3B transferred | Yes — AI drafting + deepfake escalation |
| Deepfake Impersonation | Video / Voice | Trust + Familiarity | Emerging: $25.6M Arup case; rising fast | Native AI attack — does not exist without AI |
| Baiting | Physical + Digital | Curiosity | Stable; USB drop attacks persist | Minimal |
| Tailgating | Physical | Helpfulness + Authority | Stable; increased post-remote return | No |
| Quid Pro Quo | Voice + Email | Reciprocity | Stable; IT impersonation common | Partial |
| Watering Hole (Social) | Web | Curiosity + Trust | Increasing; supply chain variant rising | Partial |
| MFA Fatigue / Prompt Bombing | Push notification | Urgency + Exhaustion | 14% of 2024 incidents; 20%+ in public sector 2025 | Partial — automated tools |
Source: Axis Intelligence Social Engineering Attack Matrix 2026. Data synthesized from Verizon DBIR 2025, Mandiant M-Trends 2026, CrowdStrike 2025 Global Threat Report, FBI IC3 2024, Abnormal Security 2025. Trend designations reflect year-over-year changes in confirmed incident share.
The Six Psychological Triggers — Why These Attacks Work
Every social engineering attack exploits one or more of six cognitive mechanisms. Understanding these mechanisms is the foundation of both attack recognition and defense design.
Authority. Human beings are conditioned to comply with perceived authority figures. Milgram’s famous experiments demonstrated this starkly: subjects administered apparently painful electric shocks to others because an authority figure told them to. In social engineering, attackers impersonate executives, IT administrators, government officials, auditors, and regulators. The trigger is not deception about identity alone — it is the compliance instinct that activates when identity is established.
BEC attacks rely almost entirely on authority: “The CEO is requesting an urgent wire transfer before the board meeting.” The urgency amplifies the authority trigger, compressing the time window for rational evaluation.
Urgency. Urgency degrades decision quality. Under time pressure, humans shift from deliberative reasoning (slow, careful, skeptical) to intuitive processing (fast, pattern-matching, error-prone). Attackers create artificial urgency — “Your account will be locked in 2 hours,” “The wire must process before close of business,” “I’m on stage in 10 minutes and my laptop won’t connect” — specifically to prevent targets from applying normal verification habits.
Fear. Fear of consequences bypasses rational assessment. “Your account has been compromised,” “Legal action will be taken if not resolved today,” “Your device contains illegal content” — these framings trigger a defensive response that makes the target desperate to resolve the threat. IRS impersonation scams, fake ransomware notifications, and fake law enforcement threats all exploit the fear trigger.
Trust through Familiarity. Attackers invest heavily in research before contact. They know your name, your manager’s name, your current project, your colleague who is traveling. This prior knowledge creates an illusion of a pre-existing relationship that overrides the skepticism a stranger would face. Spear phishing succeeds at dramatically higher rates than bulk phishing precisely because personalization activates the familiarity-trust response.
Curiosity. The baiting attack category lives here: USB drives labeled “Q4 Compensation Survey” left in parking lots. Emails with subject lines like “Photo of you at the conference.” Fake job postings with “your name was recommended.” Curiosity is a drive that is difficult to override — our brains are wired to resolve incomplete information.
Reciprocity. If someone does something for you, you feel compelled to return the favor. Quid pro quo attacks offer something of value — IT help, a software license, a job referral — before making a request. The reciprocity instinct makes the target feel obligated to assist even when the “help” received was manufactured.
Attack Type Profiles: The 2026 Reality
Phishing
Phishing sends fraudulent messages — most commonly email — designed to appear as legitimate communications from trusted entities. The goal is to steal credentials, install malware, or redirect wire payments.
The phishing picture has changed significantly in 2026. Email phishing dropped to just 6% of confirmed initial access methods in 2025 (Mandiant M-Trends 2026) — not because phishing is less dangerous, but because it has evolved into a launchpad for more sophisticated attacks rather than a standalone compromise method. The most common outcomes of social engineering attacks were credential theft (29%), data theft (18%), and extortion (13%).
What AI changed: AI-generated phishing emails now account for 40% of all BEC messages (VIPRE 2024). The traditional detection signal of spelling errors and awkward grammar has been neutralized. AI-powered phishing campaigns achieve a 42% higher success rate than conventional attacks, according to industry benchmarks. The grammar check that security training programs taught for a decade is no longer a reliable filter.
The smishing variant is increasingly the more effective delivery channel. Smishing click rates of 19–36% dwarf email phishing’s 2–4%, making SMS the highest-conversion social engineering channel on a per-message basis. Fake road toll smishing scams spiked 2,900% between 2023 and 2024 (APWG). The 2025 surge in fake CAPTCHA campaigns — up 1,450% in Q1 2025 — shows attackers actively discovering and exploiting new delivery mechanisms.
For a complete breakdown of phishing attack volumes and trends, see our Phishing Statistics hub. According to Axis Intelligence’s cross-reference of APWG, KnowBe4, and FBI IC3 data, smishing has become the highest-conversion social engineering channel per message — and the most underestimated vector in enterprise security training programs, which still dedicate the majority of simulation content to email.
Vishing (Voice Phishing)
Vishing is now the dominant social engineering vector in cloud-related breaches and the fastest-growing attack channel overall.
According to Mandiant’s M-Trends 2026 report, based on over 500,000 hours of incident response work, voice phishing has overtaken email as the primary social engineering vector. Email phishing dropped to just 6% of confirmed initial access methods in 2025. Voice phishing rose to 11%, and in cloud-related compromises it reached 23%.
The primary playbook: the attacker calls your IT helpdesk, impersonates an employee (often one who is verified as currently traveling via LinkedIn), and claims to be locked out of their account. They pass basic identity verification, get their MFA reset, and enter the environment with a clean, legitimate session. Mandiant’s M-Trends 2026 documents this pattern in detail — groups like UNC3944 (Scattered Spider) specifically target IT helpdesks because they are optimized for speed and helpfulness, not suspicion.
The AI voice cloning escalation: In 2026, attackers are no longer limited to impersonating unknown employees. AI voice cloning tools can replicate a specific executive’s voice from as little as a few seconds of audio — available from earnings calls, YouTube interviews, or LinkedIn video posts. The attacker calls the CFO’s assistant with the CEO’s cloned voice requesting an urgent wire. This is not theoretical. The $25.6 million Arup case — where attackers used deepfake video of the company’s CFO to convince an employee to transfer the funds — demonstrated the ceiling of what this attack class can achieve.
Why traditional defenses fail: Firewalls, email filters, and endpoint protection are irrelevant when the attack happens over a phone call. The only defense is process: mandatory callback verification via independently verified numbers, never via contact details provided in the suspicious call itself.
Pretexting
Pretexting has overtaken phishing as the most common social engineering attack type — a structural shift documented for the first time in the Verizon DBIR 2025 and confirmed by Mandiant M-Trends 2026.
A pretext is a fabricated scenario — a false identity, a false context, a false justification — constructed to give the attacker a plausible reason to request information or access. Pretexting is older than the internet. What has changed is the research quality and AI-enabled depth of modern pretexts.
A sophisticated pretext in 2026 might involve:
- An attacker who has researched the target organization’s org chart, project names, vendor relationships, and internal terminology from LinkedIn, press releases, and job postings
- An initial email establishing a fictional vendor relationship over days or weeks
- A follow-up phone call referencing the email thread, the project name, and the contact’s manager
- A request that seems administratively routine within the established context
Pretexting accounted for 50% of all social engineering attacks — almost twice the previous year’s proportion, and the first time pretexting overtook traditional phishing as the most common social engineering method. Pretexting is now responsible for 27% of all social engineering-based breaches.
The BEC attack is the most financially damaging application of pretexting. In 2024, BEC attack volume soared by 103%, more than twice the previous year’s volume. The average CEO receives 57 targeted attacks every year. More than $6.3 billion was transferred through BEC in 2024.
MFA Fatigue (Prompt Bombing)
MFA fatigue is among the fastest-rising attack techniques in the enterprise context — and among the most technically simple. The attacker has valid credentials (obtained via phishing or credential purchase on dark web markets). They trigger repeated MFA push notifications to the target’s phone, counting on the target to eventually approve the request to stop the interruptions, or to mistake it for a legitimate login attempt.
Prompt bombing attacks represented 14% of social engineering incidents in 2024 and succeeded in over 20% of social attacks against public sector organizations in 2025. This attack type is particularly alarming because it requires no sophisticated deception — just persistence and valid credentials.
The defense: Number matching in MFA applications (the app shows a number that must match a number displayed on the login screen, requiring the target to actively engage with the request rather than blindly approving), combined with phishing-resistant MFA standards like FIDO2/WebAuthn that don’t generate push notifications at all.
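The number-matching mechanism can be sketched in a few lines. This is an illustrative flow, not any vendor’s actual API; the function names and the two-digit challenge are assumptions:

```python
import secrets

# Illustrative sketch of MFA number matching. The login screen shows
# a two-digit number; the push notification asks the user to type it.

def start_push_challenge() -> dict:
    """Server generates the number shown on the login screen."""
    return {"display_number": secrets.randbelow(100)}

def approve_push(challenge: dict, number_entered_by_user: int) -> bool:
    """Approval succeeds only if the user actively copies the number
    from the screen in front of them. A victim being prompt-bombed
    has no login screen to copy from, so blind taps fail."""
    return number_entered_by_user == challenge["display_number"]

challenge = start_push_challenge()
# Legitimate login: user reads the number off their own screen.
assert approve_push(challenge, challenge["display_number"])
# Prompt bombing: attacker-triggered push, victim guesses blindly.
assert not approve_push(challenge, (challenge["display_number"] + 1) % 100)
```

The design point is that approval now requires information only visible at the legitimate login screen, converting a passive tap into an active verification step.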
Deepfake Impersonation
Deepfake social engineering is the attack type with the steepest growth trajectory and the lowest current awareness among target populations. It combines AI-generated synthetic media — video, voice, or both — with pretexting to create impersonation attacks that can defeat human verification of identity entirely.
The attack capability has moved from experimental to operational. In the Arup case, an employee at the engineering firm’s Hong Kong office participated in a video call with what appeared to be the company’s CFO and other colleagues — all of whom were deepfake avatars — and authorized a $25.6 million transfer. The firm later confirmed the incident.
SheByte, a phishing-as-a-service (PhaaS) platform documented by Arkose Labs in late 2025, offers AI-generated templates for a $200/month subscription — demonstrating that sophisticated AI-assisted social engineering is now accessible to attackers without technical expertise.
As one HYPR executive noted: “Deepfakes, synthetic backstories and real-time voice or video manipulation are no longer theoretical; they are active, sophisticated threats designed to bypass traditional defenses and exploit trust gaps.”
What this means for defenders: Visual verification of identity — long considered the ultimate fallback — is no longer reliable for remote communication. Organizations need out-of-band verification protocols and codeword systems for high-value financial or access decisions. The defense against deepfakes is process architecture, not technology detection.
The AI Amplification Factor: How Generative AI Changed Social Engineering
This is the section that defines 2026 as a structurally different threat environment from any prior year.
AI has amplified social engineering in four distinct ways, each measurable:
1. Scale without skill degradation. Pre-AI, effective spear phishing required a skilled attacker who understood the target, their organization, and their communication patterns. AI enables the same quality attack to be deployed against thousands of targets simultaneously. One attacker with access to an AI platform and a company’s LinkedIn data can now personalize thousands of spear phishing emails in hours.
2. Grammar and style normalization. The training heuristic of “look for spelling errors” is obsolete. AI-generated phishing content is grammatically correct, contextually appropriate, and stylistically convincing. The 42% higher success rate of AI-powered phishing reflects this quality improvement directly.
3. Voice and video synthesis. Real-time voice cloning and deepfake video generation have crossed the threshold from laboratory to operational attack tool. Attackers can impersonate specific individuals — not just generic “IT support” — using audio samples from public sources.
4. Automation of the reconnaissance phase. AI tools can scrape, analyze, and synthesize OSINT (open-source intelligence) at speeds that no human attacker could match. An AI can build a complete organizational map from LinkedIn, aggregate media mentions, identify key relationships, and produce a target-specific attack brief in minutes. The reconnaissance phase — historically the most time-intensive stage of a social engineering campaign — has been compressed to near-instantaneous.
According to Axis Intelligence’s analysis of the 2026 threat landscape, the critical implication is this: the attack capabilities that were exclusive to nation-state threat actors in 2022 are now available to organized criminal groups via AI tooling. The Scattered Spider group — responsible for major vishing campaigns against MGM Resorts, Caesars Entertainment, and numerous cloud-native organizations — operates with criminal-market-accessible tools, not state-sponsored resources.
The defense implication is equally significant: human-only detection of social engineering is no longer sufficient. Security controls must assume that content can be indistinguishable from legitimate communications and design verification processes that don’t depend on humans identifying forgeries.
The Four-Phase Attack Lifecycle — And Where to Break It
Every social engineering attack, from a 30-second phishing click to a six-week pretexting campaign, follows the same four-phase structure. Each phase has specific defense intervention points. Breaking the chain at any phase stops the attack.
Phase 1: Reconnaissance
The attacker collects information about the target before any contact occurs. This phase is entirely passive — no interaction with the target — and therefore invisible to detection tools that monitor communication.
Sources used in reconnaissance include: LinkedIn (org charts, roles, relationships, project names), company websites and press releases (vendor names, technology stack, executive profiles), social media (travel schedules, personal interests, family details), court records and regulatory filings (financial relationships, disputes), and dark web databases of previously breached credentials.
AI has compressed Phase 1 from days or weeks to hours. An AI tool given a company name and target department can produce a detailed attack brief — including likely targets, relevant project names, plausible pretexts, and optimal timing — from public sources in under an hour.
Defense interventions at Phase 1: Minimize public exposure of organizational data. Audit what your company’s website, LinkedIn presence, and job postings reveal about your technology stack, vendor relationships, and internal structure. Every piece of information an attacker collects makes Phase 2 more convincing.
Phase 2: Pretext Development
The attacker builds the false scenario — identity, context, and justification — that will make the target comply. The quality of the pretext determines the attack’s success rate. Sophisticated pretexts take days to build and may include fake social media profiles, email histories, spoofed documents, and rehearsed backstories.
For a helpdesk vishing attack, the pretext might be: “I’m Alex Chen, senior engineer on the Kubernetes migration team. I’m traveling to Singapore for the client onboarding, my phone was stolen at the airport, and I need my email MFA reset urgently before the 9am meeting.” The attacker has researched that there is a Kubernetes migration project, has the target’s manager’s name, and knows a colleague is named Alex Chen.
Defense interventions at Phase 2: Strict identity verification protocols that cannot be satisfied by information the caller provides. If someone calls claiming to be Alex Chen, the verification call must go to Alex Chen’s phone number on record — not to a number the caller provides.
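The "number on record" rule can be sketched minimally. The in-memory directory and function names are illustrative assumptions, not a specific product’s API:

```python
from typing import Optional

# Illustrative callback-verification rule for helpdesk resets.

EMPLOYEE_DIRECTORY = {
    "alex.chen": "+1-555-0100",  # number on record, maintained by HR
}

def callback_number(claimed_identity: str,
                    caller_supplied: Optional[str]) -> str:
    """Return the only number a verification callback may use.
    Whatever number the caller supplies during the suspicious
    interaction is deliberately ignored -- calling it back simply
    reaches the attacker."""
    on_record = EMPLOYEE_DIRECTORY.get(claimed_identity)
    if on_record is None:
        raise LookupError("no number on record: escalate, do not reset")
    return on_record
```

Note that `caller_supplied` is accepted but never used; making the untrusted input structurally irrelevant is the whole defense.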
Phase 3: Engagement
The attacker makes contact and executes the manipulation. This is the phase that most awareness training focuses on, and yet it is the hardest phase at which to intervene — precisely because sophisticated attackers have made their pretexts convincing enough to bypass the skepticism training creates.
Engagement tactics include:
- Creating urgency: “I’m on stage in 10 minutes,” “The deadline is in one hour,” “Legal has already approved this”
- Invoking authority: “The CISO asked me to call you personally,” “This has been escalated to the board”
- Establishing rapport: Using known colleague names, project terminology, and shared context
- Managing objections: Having a prepared answer for every standard verification question
The most effective engagement attacks succeed because they mirror normal workflow. An attacker who knows your organization’s procurement process, vendor terminology, and approval chain sounds exactly like a legitimate colleague. The attack tactics used by advanced threat actors — including social engineering techniques — are comprehensively documented in the MITRE ATT&CK framework, which maps initial access methods including phishing and valid account abuse.
Defense interventions at Phase 3: Out-of-band verification for any request involving credentials, payments, or system access. No amount of claimed urgency justifies bypassing verification. Mandatory cooling-off periods for wire transfers over threshold amounts. Codeword systems for executive-level requests.
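The threshold and cooling-off controls above can be expressed as a simple policy gate. The threshold amount, window length, and approver count below are illustrative assumptions to be tuned per organization:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

COOLING_OFF = timedelta(hours=4)   # illustrative window
THRESHOLD = 25_000                 # USD; above this, extra controls apply

@dataclass
class WireRequest:
    amount: float
    requested_at: datetime
    out_of_band_verified: bool
    distinct_approvers: int

def may_execute(req: WireRequest, now: datetime) -> bool:
    """Policy gate for wire transfers: verification is mandatory,
    and large transfers also require a cooling-off period plus a
    second approver."""
    if not req.out_of_band_verified:
        return False  # no claimed urgency overrides verification
    if req.amount <= THRESHOLD:
        return True
    return (now - req.requested_at >= COOLING_OFF
            and req.distinct_approvers >= 2)
```

Encoding the rule as a gate removes the decision from the pressured human in the moment, which is exactly where urgency-based manipulation operates.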
Phase 4: Closure
How the attacker ends the interaction determines how quickly (or whether) the attack is detected. Skilled attackers manage closure carefully: they end interactions naturally, avoid suspicious behavior that creates hindsight red flags, and in some cases deliberately create legitimate-looking paper trails.
Poor closure — an abrupt end, an unanswerable question, a request that feels wrong in retrospect — is often what triggers post-hoc reporting. Good closure delays detection, sometimes until the damage is irreversible.
Defense interventions at Phase 4: Reporting culture that normalizes “I wasn’t sure about that call” conversations without blame. Regular debrief processes after unusual requests. Monitoring for anomalous access patterns in the hours after helpdesk interactions.
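The last intervention — correlating logins with recent helpdesk activity — can be sketched as a simple detection rule. The event format and the two-hour window are illustrative assumptions:

```python
from datetime import datetime, timedelta

SUSPECT_WINDOW = timedelta(hours=2)  # illustrative review window

def flag_post_reset_logins(mfa_resets, logins):
    """Flag any login occurring shortly after a helpdesk-initiated
    MFA reset for the same account -- the exact window a vishing
    attacker exploits. Both inputs are iterables of
    (account, timestamp) pairs."""
    resets = {}
    for account, ts in mfa_resets:
        resets.setdefault(account, []).append(ts)
    flagged = []
    for account, ts in logins:
        for reset_ts in resets.get(account, []):
            if timedelta(0) <= ts - reset_ts <= SUSPECT_WINDOW:
                flagged.append((account, ts))
                break
    return flagged
```

In practice this rule would feed a SIEM review queue rather than block access outright, since legitimate users also log in right after a reset.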
The 8% Rule: Why Mass Security Training Fails and What Works Instead
The most important finding in the Verizon DBIR 2025 — and the finding that most security training programs have not yet absorbed — is this: 8% of employees account for 80% of security incidents.
Social engineering risk is not distributed evenly. A small, identifiable population within every organization is disproportionately likely to click, respond, and comply. This population is not less intelligent or less security-conscious in general — they are concentrated in roles that structurally create vulnerability:
- Helpdesk and IT support staff — whose job is to help people, creating pressure to comply with requests rather than interrogate them
- Finance and accounts payable staff — who process payments and are the primary target of BEC attacks
- Executive assistants and scheduling staff — who manage communication and access for high-value targets
- New employees — who haven’t yet internalized organizational norms and are reluctant to challenge requests from apparent authority figures
- Remote workers with limited face-to-face verification options — who cannot physically confirm the identity of a caller
According to Axis Intelligence, the implication for security programs is structural: organizations that run the same phishing simulation training for every employee are using their security budget inefficiently. The training intervention that matters is targeted, role-specific, and simulation-intensive for the 8% — not general awareness content for the entire population.
Specifically: helpdesk staff need repeated simulation against vishing attacks with audio role-playing. Finance staff need BEC-specific training with wire transfer verification drills. Executive assistants need deepfake awareness training. General staff need baseline phishing awareness. These are different programs with different content and different metrics.
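Identifying the 8% from incident logs is a straightforward Pareto computation. A minimal sketch, assuming incident records are reduced to one employee ID per incident:

```python
from collections import Counter

def high_risk_cohort(incidents, share=0.80):
    """incidents: iterable of employee IDs, one entry per incident.
    Returns the smallest set of employees that together account for
    `share` of all incidents -- the population to target with
    role-specific, simulation-intensive training."""
    counts = Counter(incidents)
    target = share * sum(counts.values())
    cohort, covered = [], 0
    for employee, n in counts.most_common():
        cohort.append(employee)
        covered += n
        if covered >= target:
            break
    return cohort
```

Run quarterly against simulation and incident data, the output is the roster for targeted training rather than a company-wide campaign.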
The Human Vulnerability Index (HVI) by Sector
According to Axis Intelligence’s cross-reference of KnowBe4 2025, Verizon DBIR 2025, Mandiant M-Trends 2026, and FBI IC3 data, these are the sectors ranked by composite social engineering vulnerability — combining baseline phishing click rate, BEC targeting frequency, and average financial loss per incident. We call this the Human Vulnerability Index (HVI):
| Sector | Baseline Phishing Click Rate | BEC Targeting Frequency | Avg. Loss/Incident | HVI Score |
|---|---|---|---|---|
| Healthcare | 41.9% | High | $1.27M (HIPAA Journal) | 🔴 Critical |
| Financial Services | 14.2% | 300× industry avg. (KnowBe4) | $4.89M (BEC avg.) | 🔴 Critical |
| Manufacturing | 28.6% | Moderate-High | $1.1M | 🟠 High |
| Government / Public Sector | 23.4% | High | $800K | 🟠 High |
| Technology / SaaS | 17.3% | High (cloud account targets) | $2.1M | 🟠 High |
| Education | 31.2% | Moderate | $500K | 🟠 High |
| Retail / Ecommerce | 19.8% | Moderate | $350K | 🟡 Medium |
| Legal Services | 16.4% | Moderate-High | $1.8M | 🟡 Medium |
| Energy / Utilities | 21.3% | Low-Moderate | $1.4M | 🟡 Medium |
| Professional Services | 18.9% | Moderate | $620K | 🟡 Medium |
Source: Axis Intelligence Human Vulnerability Index (HVI) 2026. Composite score derived from KnowBe4 Phishing Industry Benchmarks 2025, Verizon DBIR 2025, Mandiant M-Trends 2026, and FBI IC3 2024. HVI is an editorial assessment framework, not an audited security rating.
Healthcare’s position at Critical reflects structural factors. Clinical environments create perfect conditions for social engineering: high urgency, frequent interruptions, multiple unfamiliar system logins, and a culture that prioritizes patient care over security protocols. Social engineering attacks targeting healthcare routinely impersonate medical supply vendors, insurance companies, and internal IT support.
Financial services face 300× more attacks than other industries despite — and because of — their higher general security investment. The direct financial payoff of successful BEC and wire fraud makes financial sector employees the highest-value targets for sophisticated pretexting.
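For illustration, a composite index of this shape can be computed as a weighted sum of normalized inputs. The weights, frequency mapping, and loss cap below are illustrative assumptions, not Axis Intelligence’s published methodology:

```python
# Map qualitative BEC targeting frequency to a 0..1 scale (assumed).
FREQ = {"Low-Moderate": 0.35, "Moderate": 0.5, "Moderate-High": 0.65,
        "High": 0.8, "Very High": 1.0}

def hvi_score(click_rate, bec_freq, avg_loss_usd,
              weights=(0.4, 0.3, 0.3), loss_cap=5_000_000):
    """Composite vulnerability score in 0..1: a weighted sum of
    phishing click rate, BEC targeting frequency, and average loss
    per incident (capped, then normalized)."""
    w_click, w_freq, w_loss = weights
    loss = min(avg_loss_usd, loss_cap) / loss_cap
    return w_click * click_rate + w_freq * FREQ[bec_freq] + w_loss * loss

# Healthcare (41.9% clicks, High targeting, $1.27M avg loss) scores
# well above Retail (19.8%, Moderate, $350K) under these assumptions.
healthcare = hvi_score(0.419, "High", 1_270_000)
retail = hvi_score(0.198, "Moderate", 350_000)
```

Any such score is only as good as its inputs and weights; the value of the exercise is forcing the three risk dimensions to be measured per sector.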
How to Protect Yourself and Your Organization
For Individuals
1. Verify before you comply. Any request that involves money, credentials, or system access — regardless of who it appears to come from — deserves independent verification. Call back using a number you independently look up, not one provided in the suspicious message.
2. Slow down when urgency appears. Urgency is a manipulation trigger, not a legitimate reason to bypass verification. A real emergency can wait 90 seconds for a callback. An attacker cannot.
3. Protect your digital footprint. The reconnaissance phase feeds on public information. Audit your LinkedIn profile, your organization’s website, and any public-facing profiles that reveal your role, relationships, and project involvement. Less public information = less effective pretexts.
4. Use phishing-resistant MFA. Standard SMS-based MFA and push notification MFA are vulnerable to SIM swapping, prompt bombing, and SS7 attacks. FIDO2/WebAuthn hardware keys (YubiKey and equivalents) are the standard recommended by CISA for phishing-resistant MFA. They cannot be bypassed by social engineering short of physical theft.
5. Use a password manager with unique credentials per site. Credential reuse is what social engineering attacks exploit once a single password is obtained. A password manager with unique strong credentials per account eliminates credential stuffing as a downstream consequence of a phished password. See our best password managers guide for current recommendations.
6. Use a VPN on public and shared networks. Session hijacking and credential interception on unsecured networks reduce the work attackers need to do. A no-log VPN on public Wi-Fi adds transport-layer protection that complements social engineering awareness without requiring any human decision.
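Why FIDO2 resists phishing comes down to origin binding: the authenticator signs the site origin along with the challenge, so a credential harvested on a lookalike domain is useless at the real site. A conceptual sketch follows, in which an HMAC stands in for the device’s key pair; this is not the actual WebAuthn protocol:

```python
import hashlib
import hmac
import os

DEVICE_KEY = os.urandom(32)  # stand-in for a key that never leaves the hardware token

def authenticator_sign(challenge: bytes, origin: str) -> bytes:
    """The authenticator signs the origin it actually sees, not the
    origin the user believes they are visiting."""
    return hmac.new(DEVICE_KEY, challenge + origin.encode(),
                    hashlib.sha256).digest()

def relying_party_verify(challenge: bytes, expected_origin: str,
                         signature: bytes) -> bool:
    """The real site verifies against its own origin, so a signature
    produced on a lookalike domain never matches."""
    expected = hmac.new(DEVICE_KEY, challenge + expected_origin.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

challenge = os.urandom(16)
# Real login: origins match, signature verifies.
assert relying_party_verify(challenge, "https://bank.example",
                            authenticator_sign(challenge, "https://bank.example"))
# Phishing site: the victim "authenticates" on a lookalike origin,
# but the resulting signature fails at the real site.
assert not relying_party_verify(challenge, "https://bank.example",
                                authenticator_sign(challenge, "https://bank-example.com"))
```

The human never compares domains; the cryptography does it for them, which is why this class of MFA is called phishing-resistant.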
For Organizations
1. Implement out-of-band verification for all high-risk actions. No wire transfer, MFA reset, or privileged access change should be authorized solely on the basis of email, phone, or chat communication. The verification call must use independently verified contact information.
2. Deploy number matching in MFA applications. Prompt bombing attacks succeed because users auto-approve push notifications. Number matching requires active cognitive engagement with each authentication event, defeating passive approval attacks.
3. Train your 8% with targeted simulations. Identify the high-risk roles in your organization — helpdesk, finance, executive assistants, new hires — and run targeted, role-specific simulation programs. Track metrics at the role level, not the company level.
4. Build a reporting culture. Attacks that are almost successful are the most valuable intelligence you have. Employees who almost complied with a suspicious request but didn’t should be celebrated, not questioned. Every near-miss report gives your security team an attack vector to close.
5. Apply NIST SP 800-63 guidance on identity verification. The NIST Digital Identity Guidelines define the assurance levels required for identity verification in digital contexts. For helpdesk-level verification, NIST recommends minimum IAL2/AAL2 controls — meaning identity claims must be verified against authoritative sources, not just knowledge-based questions that attackers can research.
6. Protect employee and customer data from data brokers. Social engineering attacks are powered by OSINT and dark web data. Organizations should monitor for data exposure through breach notification services. Our data breach statistics hub documents the scale of breach data available for attacker reconnaissance.
Red Flags: How to Recognize a Social Engineering Attempt
Unsolicited contact claiming urgency. Any unexpected call, email, or message that creates pressure to act quickly is the defining signature of social engineering. Legitimate systems and legitimate people can wait.
Request that bypasses normal process. “I know this is unusual, but…” and “Can you skip the normal steps this once…” are the verbal signatures of social engineering. Standard processes exist because they work. Requests to bypass them are red flags regardless of stated reason.
Contact details provided by the contact itself. If a caller says “call me back at this number to verify,” the callback will reach the attacker. Always use independently sourced contact information.
Refusal to be verified through standard channels. A legitimate executive, vendor, or colleague has no reason to resist identity verification. An attacker who claims verification is too slow or unnecessary is revealing their attack.
Information that’s almost right. Attackers conducting reconnaissance sometimes have slightly wrong details — a project name with incorrect capitalization, a manager’s name with a wrong first name, a date that’s off by one day. This happens when AI-generated profiles make errors on secondary details. “Almost right” is a red flag, not a green light.
Request to keep the interaction confidential. “Don’t mention this to [manager]” or “This is a confidential executive initiative” are classic isolation tactics designed to prevent the target from seeking second opinions.
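The red flags above combine: a single urgent message is ambiguous, but urgency plus a bypass request plus self-supplied contact details plus secrecy is a strong signal. A toy Python sketch shows how the signals stack up — this is an illustration of the checklist, not a detector any organization should deploy, and the keyword patterns are invented for the example:

```python
import re

# Toy red-flag scorer for an inbound message. Keyword lists are
# illustrative only; real detection needs far richer signals.
RED_FLAGS = {
    "urgency":     r"\b(urgent|immediately|right now|asap|before end of day)\b",
    "bypass":      r"\b(skip the normal|just this once|know this is unusual)\b",
    "self_verify": r"\b(call me back at|verify at this number)\b",
    "secrecy":     r"\b(don'?t mention|keep this confidential|between us)\b",
}

def red_flag_score(message: str) -> list[str]:
    """Return the list of red-flag categories the message triggers."""
    text = message.lower()
    return [name for name, pat in RED_FLAGS.items() if re.search(pat, text)]

msg = ("I know this is unusual, but this is urgent. "
       "Call me back at 555-0199 and don't mention this to your manager.")
print(red_flag_score(msg))  # → ['urgency', 'bypass', 'self_verify', 'secrecy']
```

Even this crude sketch flags all four categories on a classic pretext message, which is the point of the checklist: attackers rarely trip one flag in isolation.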
Frequently Asked Questions
What is the definition of social engineering in cybersecurity?
Social engineering is the use of psychological manipulation to deceive people into compromising security — sharing credentials, authorizing transactions, installing malware, or granting access — without requiring any technical exploitation. The human being is the attack surface. Social engineering accounts for the initial access method in 36% of all cyber intrusions (Mandiant M-Trends 2026) and involves the human element in 60% of all confirmed breaches (Verizon DBIR 2025).
What are the most common types of social engineering attacks?
According to Axis Intelligence’s Social Engineering Attack Matrix 2026, the primary attack types are: phishing (email-based deception), vishing (voice phishing, now the dominant vector in cloud-related breaches), pretexting (fabricated scenarios — now the most common attack type overall), BEC/CEO fraud (impersonation for financial transfers), smishing (SMS phishing, with the highest click rates at 19-36%), and the emerging deepfake impersonation category. Pretexting overtook phishing as the most common type in 2025 (Verizon DBIR 2025) — a structural shift that most security awareness programs have not yet updated to reflect.
Why is social engineering so effective?
Social engineering exploits cognitive architecture that humans cannot simply “turn off.” Authority compliance, urgency response, trust through familiarity, and reciprocity are not security weaknesses — they are functional social adaptations. Attackers exploit the gap between these instincts and the deliberate verification habits that security training attempts to build. The gap is widest under time pressure, emotional activation, and unfamiliarity with the specific attack vector. AI has made attacks more personalized and more convincing, shrinking the behavioral gap that awareness training can address.
What is pretexting in social engineering?
Pretexting is the construction of a fabricated scenario — a false identity, false context, or false justification — to manipulate a target into taking a desired action. A pretext might be “I’m the new vendor contact for your account” (establishing false relationship), “The CFO asked me to call you directly” (invoking false authority), or “I’m locked out while traveling and need urgent help” (combining false context with urgency). Pretexting now accounts for 50%+ of all social engineering incidents and 27% of all social engineering-based breaches (Verizon DBIR 2025).
What is vishing and why is it more dangerous than phishing?
Vishing is voice phishing — social engineering conducted over telephone calls. It is more dangerous than email phishing in several ways: it is real-time (no time for careful evaluation), it activates stronger authority and urgency responses through vocal cues, it bypasses email filters entirely, and it exploits helpdesk workflows that are optimized for customer service rather than security. Vishing attacks surged 442% in 2024-2025 (CrowdStrike) and now account for 23% of initial access in cloud-related breaches (Mandiant M-Trends 2026). The combination of vishing with AI voice cloning represents the highest-severity attack vector in the current threat landscape.
How does AI make social engineering more dangerous?
AI amplifies social engineering in four ways: (1) Scale — one attacker can personalize thousands of messages simultaneously; (2) Quality — AI-generated content is grammatically perfect and contextually appropriate, defeating grammar-based detection; (3) Synthesis — AI voice cloning and deepfake video enable real-time identity impersonation; (4) Reconnaissance — AI compresses OSINT gathering from days to minutes. AI now powers over 80% of social engineering activity (Abnormal Security 2025), and AI-powered phishing achieves 42% higher success rates than conventional campaigns.
What is BEC (Business Email Compromise)?
BEC is a sophisticated social engineering attack targeting organizations’ financial workflows. The attacker impersonates an executive, vendor, or trusted party via email or voice to authorize fraudulent wire transfers, redirect payroll, or steal sensitive data. BEC attack volume grew 103% in 2024. The average BEC attack costs $4.89 million. More than $6.3 billion was transferred through BEC in 2024 (FBI IC3). 89% of BEC attacks involve impersonating executives — typically the CEO — to invoke authority and urgency simultaneously. For full breach cost context, see our data breach statistics hub.
How can you tell if you’re being socially engineered?
Key red flags: unsolicited contact with urgent framing, requests that bypass normal verification processes, contact details provided by the contacting party rather than independently verified, unusual requests for secrecy, and information that’s “almost right” — slightly wrong details that suggest AI-generated or researched (but imperfect) pretexts. The single most reliable indicator is pressure to act before verifying. Legitimate requests, from real colleagues and real systems, can always withstand a brief verification call using independently sourced contact information.
What is MFA fatigue and how does it work?
MFA fatigue (also called prompt bombing) is an attack where an attacker with valid stolen credentials triggers repeated MFA push notifications to the target’s device, relying on the target to eventually approve a request out of confusion or frustration. Prompt bombing represented 14% of social engineering incidents in 2024 and succeeded in over 20% of public sector attacks in 2025. The defense is number matching (requiring the target to match a displayed number to approve the request, forcing deliberate engagement) or phishing-resistant FIDO2/WebAuthn MFA that doesn’t use push notifications.
Is social engineering only a corporate threat?
No. Social engineering targets individuals as readily as organizations. IRS impersonation scams, fake tech support calls, romance fraud, and grandparent scams are all social engineering against individuals. The US lost $16.6 billion to social engineering in 2024 (FBI IC3) across both corporate and individual targets. For personal protection, the same principles apply: verify independently, don’t let urgency override verification, and use identity theft protection tools to monitor for exposure of your personal data. For guidance on protecting your personal information, see our identity protection guide.
What is the best defense against social engineering?
No single control stops social engineering. The effective defense is layered: phishing-resistant MFA (FIDO2/WebAuthn) to protect credentials even after phishing success; out-of-band verification protocols for high-risk actions; targeted simulation training for high-risk roles (the 8% identified by Verizon); a reporting culture that encourages near-miss disclosure; and process architecture that inserts mandatory verification checkpoints before any financial or access change. Technical defenses — email filters, endpoint protection, network monitoring — are essential but insufficient. The attack goes around technical controls by targeting the human who controls them.

Marcus Chen is the Cybersecurity & Privacy Editor at Axis Intelligence. With over 12 years of experience in enterprise security, he holds CISSP and CISM certifications and previously served as a SOC analyst at a Fortune 500 financial institution. Marcus personally tests every VPN, antivirus, and security tool he reviews, running them through standardized threat simulations in his home lab. He covers cybersecurity tools, VPN reviews, privacy guides, scam analysis, and enterprise security frameworks.
