Social Engineering Statistics 2026
Published: May 2026 | Annual Threat Report
Methodology: This report synthesizes data from 14 primary sources covering full-year 2024 and full-year 2025 telemetry. Primary sources include: Mandiant M-Trends 2026 (500,000+ incident response hours), FBI Internet Crime Complaint Center Annual Report 2025 (released April 2026), Verizon Data Breach Investigations Report 2025, IBM Cost of a Data Breach Report 2025, CrowdStrike Global Threat Report 2025, ENISA Threat Landscape 2025, APWG Phishing Activity Trends 2025, KnowBe4 Phishing Industry Benchmarks 2025, Pindrop Voice Intelligence Report 2025, Sumsub Identity Fraud Report 2024, iProov Threat Intelligence Report 2025, Keepnet Labs research, Abnormal Security 2025, and named corporate incident records. All statistics are cited with their primary source. This report was produced without vendor sponsorship.
Executive Summary: 10 Findings That Define Social Engineering in 2026
- The 22-Second Threshold. In 2022, the median time between a social engineering initial access event and handoff to a secondary threat group was more than 8 hours. By 2025, it collapsed to 22 seconds. Social engineering is no longer an attack — it is the ignition key of a fully industrialized attack chain.
- The Phishing Inversion is complete. Email phishing, dominant for two decades, fell to 6% of initial infection vectors in 2025 (Mandiant M-Trends 2026). Voice phishing climbed to 11% overall and 23% in cloud-related compromises. The vectors have inverted.
- Total cybercrime losses exceeded $20 billion for the first time. The FBI IC3’s 2025 Annual Report recorded $20.9 billion in losses — a 26% increase from 2024’s record $16.6 billion, which itself rose 33% from 2023.
- The FBI formally designated AI crime for the first time in its history. The IC3 2025 report introduced “AI-related” as a formal crime descriptor, logging 22,000+ complaints and nearly $900 million in attributed losses in its first year — a figure universally regarded as a significant undercount.
- AI-enabled fraud surged 1,210% in 2025, while traditional fraud grew 195% — confirming AI is not incrementally improving existing attacks, it is structurally reshaping the threat category (Vectra AI, March 2026).
- But AI has not yet caused most breaches. Mandiant’s frontline assessment from 500,000 hours of incident response explicitly states: “We do not consider 2025 to be the year where breaches were the direct result of AI.” Most successful intrusions still stem from fundamental human and systemic failures. This corrects widespread media overstatement.
- Pretexting has overtaken phishing as the dominant attack type — now representing 50%+ of all social engineering incidents. This is the first time in recorded DBIR history that a non-phishing technique leads the category.
- Deepfake fraud attempts in contact centers surged 1,300% year-over-year. In iProov's testing, only 0.1% of participants could reliably distinguish modern deepfakes from genuine content. The human verification fallback has failed.
- The attack handoff economy is now sub-minute. Initial access partners pre-stage the secondary group’s preferred malware before even completing the social engineering. Detection must operate in seconds, not hours.
- Defense is working — but unevenly. Organizations with security awareness training see 86% reduction in phishing click rates within 12 months (KnowBe4). The gap between organizations that have invested in human-layer defense and those that have not is the defining risk differential of 2026.
The 22-Second Threshold: Why This Number Changes Everything
For years, the cybersecurity industry treated social engineering as an attack unto itself: a threat actor persuades a human, gains access, and then slowly navigates the environment. The implied timeline — hours or days between the social engineering success and the downstream harm — gave defenders a window.
That window has closed.
Mandiant’s M-Trends 2026, the most comprehensive annual frontline report in the industry based on more than 500,000 hours of incident response investigations, documents the collapse of the initial access handoff timeline:
| Year | Median Handoff Time (Initial Access → Secondary Group) |
|---|---|
| 2022 | > 8 hours |
| 2023 | ~4 hours (estimated from trend) |
| 2024 | ~45 minutes (estimated from trend) |
| 2025 | 22 seconds |
The implication is structural. A successful vishing call to your IT helpdesk — an attacker convincing a technician to reset MFA on a compromised account — no longer precedes a human-paced intrusion. It precedes an automated, pre-staged attack chain that begins executing in less than half a minute. By the time the technician might wonder if the call was legitimate, the attacker’s access has been packaged and sold to a ransomware group that has already begun deploying within the environment.
According to Axis Intelligence’s analysis, the 22-second threshold represents the completion of a structural transformation: social engineering has become the input layer of a sophisticated attack supply chain, not a standalone attack category. Defending against social engineering in 2026 requires understanding that the social engineering success is not the event — it is the trigger.
The downstream consequences of this velocity shift:
- Post-compromise detection windows have effectively closed for initial access partners. By the time security operations detects the anomaly, the access has already been transferred to a separate threat cluster that looks like a different actor
- Incident response must assume that any social engineering success immediately results in a fully enabled secondary actor with pre-staged tooling
- The 14-day median global dwell time reported by Mandiant — up from 11 days — reflects not faster attackers but longer-persisting espionage actors, while financially motivated attackers have compressed their timelines to near-instantaneous
The Social Engineering Financial Impact Tracker 2020–2026
According to Axis Intelligence’s cross-reference of FBI IC3 annual reports, FTC Consumer Sentinel Network data, and IBM breach cost research, the financial trajectory of social engineering losses over six years defines a threat category in structural acceleration:
| Year | Total US Cybercrime Losses (FBI IC3) | YoY Change | Social Engineering Share (Est.) | SE Losses (Est.) | Defining Event |
|---|---|---|---|---|---|
| 2020 | $4.2B | — | ~65% | ~$2.7B | COVID-19 drives phishing surge; BEC $1.87B |
| 2021 | $6.9B | +64% | ~68% | ~$4.7B | Remote work phishing peaks; BEC $2.4B |
| 2022 | $10.3B | +49% | ~70% | ~$7.2B | BEC, pig butchering investment fraud surge |
| 2023 | $12.5B | +22% | ~72% | ~$9.0B | Pretexting overtake begins; vishing emerges |
| 2024 | $16.6B | +33% | ~75% | ~$12.4B | Vishing +442%; BEC $2.77B; Arup $25.6M |
| 2025 | $20.9B | +26% | ~76% | ~$15.9B | AI crime formal category; voice #2 vector |
| 2027 (projected) | $35B+ | — | ~80% | ~$28B+ | $40B deepfake fraud alone projected (Javelin/Deloitte) |
Source: Axis Intelligence Social Engineering Financial Impact Tracker 2026. FBI IC3 total figures are verified. Social engineering share and SE-attributed loss figures are Axis Intelligence estimates based on IC3 crime category breakdowns (BEC, phishing, tech support fraud, confidence fraud, government impersonation), FTC Consumer Sentinel data, and IBM Cost of a Data Breach research. 2027 projection uses the IC3’s own 26-28% annual growth trend combined with Javelin Strategy & Research’s deepfake fraud projection.
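The 2027 baseline in the table can be reproduced by compounding the IC3's observed 26–28% annual growth from the verified 2025 total; a minimal sketch (the table's $35B+ figure additionally layers in the Javelin/Deloitte deepfake projection, which this arithmetic omits):

```python
# Sketch: compound the verified FBI IC3 2025 total forward at the
# observed 26-28% annual growth range to approximate the 2027 baseline.
# The deepfake-specific component (Javelin/Deloitte) is not modeled here.

IC3_2025_LOSSES_B = 20.9  # FBI IC3 2025 total US losses, $ billions

def project(base: float, rate: float, years: int) -> float:
    """Compound `base` forward by `rate` for `years` years."""
    return base * (1 + rate) ** years

low = project(IC3_2025_LOSSES_B, 0.26, 2)   # ~ $33.2B
high = project(IC3_2025_LOSSES_B, 0.28, 2)  # ~ $34.2B
print(f"2027 baseline projection: ${low:.1f}B - ${high:.1f}B")
```

Compounding alone lands at roughly $33–34B; the deepfake fraud trajectory pushes the combined projection past $35B.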
The number that requires special attention: The FBI IC3’s 2025 Annual Report represents the first time in the agency’s 26-year history that the Bureau formally designated “AI-related” as a crime descriptor. This is not a statistical finding — it is an institutional recognition that AI has crossed from tool enhancement to crime category. The initial $900 million in AI-attributed losses is universally regarded by industry analysts as a significant undercount; the IC3 can only count complaints that were filed, identified as AI-related, and attributed to a specific crime type.
The Social Engineering Industrialization Index (SEII) 2026
The central analytical finding of this report is that social engineering in 2026 is not a more dangerous version of social engineering in 2020. It is a structurally different phenomenon.
According to Axis Intelligence’s original framework, the Social Engineering Industrialization Index (SEII) measures the degree to which social engineering has transitioned from opportunistic, skill-intensive human manipulation to an industrialized, automated, supply-chain-structured attack category. It measures five dimensions:
| Dimension | 2020 State | 2026 State | Change |
|---|---|---|---|
| Handoff velocity | Hours (attacker pauses between phases) | 22 seconds (automated pre-staging) | -99.9% |
| PhaaS entry cost | $5,000+ (custom tooling) | $200/month (SheByte, AI-generated templates) | -96% |
| AI integration depth | <1% of attacks (AI-enhanced) | 80%+ of social engineering activity (ENISA 2025) | +7,900% |
| Voice attack barrier | Skilled impersonator required | 3-second audio sample → 85% accuracy clone (McAfee) | Eliminated |
| Email quality floor | Detectable via grammar and spelling | Indistinguishable from legitimate (AI-generated) | Eliminated |
According to Axis Intelligence, the SEII represents a qualitative transition point: when all five dimensions cross the threshold simultaneously, social engineering stops scaling linearly and begins scaling exponentially. The convergence happened in 2024-2025. The 1,210% surge in AI-enabled fraud in 2025 (Vectra AI) is the first year of exponential scaling — not a new baseline for linear growth.
The industrialization framework reveals three distinct attack supply chain roles that now operate with professional separation:
Initial Access Partners (IAPs): Specialize in the social engineering execution — vishing calls, phishing campaigns, pretexting. Their deliverable is a valid authenticated session or a set of credentials. They sell this access, not the downstream harm.
Secondary Groups: Purchase access from IAPs and execute high-impact operations — ransomware deployment, data exfiltration, BEC wire fraud. They never interact with the victim via social engineering; they receive a clean session and begin operating immediately.
PhaaS/VaaS Platforms: Provide the tooling infrastructure — AI-generated lure templates (SheByte, $200/month), voice cloning APIs, deepfake video generation platforms. They enable IAPs who lack technical skills to operate at professional quality.
The social engineer of 2020 was a skilled individual who researched a target, crafted a convincing lure, and executed a deceptive interaction. The social engineering ecosystem of 2026 has distributed those tasks across three specialized roles connected by an automated marketplace.
The AI-Augmented Kill Chain: Where AI Entered Social Engineering
This is not a prediction. These are documented operational capabilities, each with confirmed use in commercial-grade attack tooling:
| Kill Chain Stage | Pre-AI Method | AI-Augmented Method | Documented Impact |
|---|---|---|---|
| Reconnaissance | Manual OSINT (days to weeks) | AI-aggregated LinkedIn, WHOIS, job posting analysis (minutes) | Target research time: ~95% reduction |
| Pretext development | Skilled human writer; weeks for complex pretext | LLM-generated backstory with organizational context | Quality equals skilled human; time compressed to hours |
| Lure creation (email) | Human-crafted, grammar errors common | AI-generated, grammatically perfect, personalized at scale | 4x higher click-through rate (iProov) |
| Voice execution | Human impersonator with training | AI voice clone from 3 seconds of audio, 85% accuracy match | Used in $25.6M Arup video conference fraud |
| Video execution | Not previously viable for most actors | Real-time deepfake video (<$100/month tooling access) | +2,665% Native Virtual Camera attacks (iProov 2025) |
| Handoff | Manual access transfer (hours) | Pre-staged automated handoff (22 seconds, M-Trends 2026) | Detection window: effectively closed |
The critical Mandiant corrective: Despite this documented AI integration, Mandiant’s M-Trends 2026 — the most authoritative frontline IR report — explicitly states that “we do not consider 2025 to be the year where breaches were the direct result of AI.” Most successful intrusions still stem from fundamental human and systemic failures. AI is a force multiplier on existing attack categories, not yet the autonomous breach generator that media coverage implies.
This is an important distinction for security investment: the defense priority is human-layer resilience and process architecture, not AI-versus-AI countermeasures. The organizations that defend successfully against social engineering in 2026 are those that have built verification processes that don’t depend on humans detecting sophisticated deception — not those that have deployed AI detection tools.
The Phishing Inversion of 2025: A Pattern That Took a Decade to Emerge
Axis Intelligence designates the 2023-2025 period as “The Phishing Inversion” — the structural completion of a transition that security researchers have watched building since 2020 but that most published guidance has failed to reflect.
The data, drawn from Mandiant M-Trends 2026 and FBI IC3 annual reports:
| Attack Vector | 2020 Share of Initial Access | 2023 Share | 2025 Share | Trend |
|---|---|---|---|---|
| Email phishing | ~25-30% (dominant) | ~15% (declining) | 6% (marginal) | ↓↓↓ |
| Exploit of internet-facing systems | ~20% | ~30% | 32% (dominant) | ↑↑ |
| Voice phishing | <2% | ~5% | 11% overall; 23% cloud | ↑↑↑ |
| Prior compromise / stolen credentials | ~15% | ~12% | 10% | ↓ |
| Web compromise | ~8% | ~8% | 8% | → |
| Insider threat | ~5% | ~6% | 6% | → |
| Third-party compromise | ~7% | ~8% | 5% | → |
Source: Axis Intelligence cross-reference of Mandiant M-Trends 2026 initial infection vector data. Email phishing 2020 estimates based on prior DBIR and M-Trends reporting; 2023 and 2025 figures are primary-source verified.
The Phishing Inversion is not simply “email phishing got worse.” It reflects a rational adaptation by attackers:
Why email declined: Email security controls improved dramatically. DMARC/DKIM/SPF adoption, AI-powered email gateways, and security awareness training collectively created a filtration layer that made mass email phishing less cost-effective. The Verizon Data Breach Investigations Report 2025 confirms this: the human element remains in 60% of breaches, but the delivery mechanism has shifted. Attackers moved up the sophistication curve.
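The filtration layer described above is visible in ordinary DNS. An illustration using the placeholder domain `example.com` (record values are typical examples, not prescriptive; the DKIM key is truncated):

```
; SPF: authorizes which hosts may send mail for example.com
example.com.               IN TXT "v=spf1 include:_spf.example.com -all"

; DKIM: public key receivers use to verify message signatures (selector "s1")
s1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBg..."

; DMARC: tells receivers to reject mail that fails SPF/DKIM alignment
_dmarc.example.com.        IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

A published `p=reject` policy is what makes bulk spoofed email economically unattractive; no analogous published-policy mechanism exists for an inbound phone call.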
Why voice rose: Voice calls bypass every email security control. There is no spam filter for a phone call. There is no DMARC equivalent for voice. The technological controls that made email phishing measurably harder don’t exist for voice channels. Attackers found the gap.
Why voice is accelerating in cloud specifically (23%): Cloud-related compromises disproportionately target SaaS environments where identity is the primary control. Compromising a SaaS account requires only an MFA reset — which helpdesk staff are trained to facilitate. The combination of cloud-centric attack targets and voice as the optimal vector for bypassing MFA has created a specialization in voice-against-cloud attacks that the Scattered Spider/UNC3944 group has documented extensively.
According to Axis Intelligence, the Phishing Inversion means that organizations should rebalance security awareness training immediately: the majority of training content in most enterprise programs still focuses on email-based attacks. The attack surface has fundamentally shifted. Voice-specific training — simulated vishing calls, helpdesk verification drills, MFA reset authorization procedures — is now the higher-priority defense investment.
Sector Targeting in 2026: Where Attacks Concentrate
Mandiant’s M-Trends 2026 provides the most authoritative sector breakdown based on 500,000+ hours of real incident response, not survey self-reporting:
| Sector | Share of Mandiant Investigations (2025) | Primary Attack Vector | Change vs. 2024 |
|---|---|---|---|
| High Technology | 17.0% | Voice phishing, OAuth token harvesting | ▲ Up (overtook Financial) |
| Financial Services | 14.6% | BEC, credential theft, insider | ▼ Down from #1 spot |
| Business & Professional Services | 13.3% | Spear phishing, supply chain compromise | → Stable |
| Healthcare | 11.9% | Phishing, ransomware via social | ↑ Increasing |
| Retail & Hospitality | 7.3% | BEC, smishing, customer fraud | → Stable |
| Government | 5.8% | Insider threat, spear phishing | → Stable |
| Education | 4.6% | Phishing, pretexting | → Stable |
| Telecommunications | 4.6% | SIM swap, vishing, supply chain | → Stable |
| Construction & Engineering | 4.1% | BEC, invoice fraud | → Stable |
| Entertainment & Media | 4.1% | Executive impersonation, BEC | → Stable |
Source: Mandiant M-Trends 2026. Industry share based on investigations where sector could be identified. Excludes “other” category.
The high-tech sector’s rise to #1 is the most significant targeting shift in 2025. This reflects the operational strategy of groups like ShinyHunters and Scattered Spider (UNC3944): cloud-native SaaS companies have high-value data (customer records, authentication tokens, API keys) accessible with a single compromised identity. The attack is economically efficient — one successful vishing call to one IT helpdesk unlocks a SaaS environment that may contain millions of customer records from dozens of downstream organizations.
The ShinyHunters operation documented in M-Trends 2026 illustrates the cascade: a vishing call to a third-party SaaS vendor’s IT helpdesk → compromised credentials → harvested OAuth tokens and session cookies → pivot into the vendor’s downstream customers → data theft across multiple enterprises from a single social engineering starting point. IBM’s 2025 X-Force Threat Intelligence Index independently found that third-party breaches doubled year-over-year to 30% of all incidents, corroborating this supply chain pattern from a separate dataset. Supply chain social engineering has arrived at scale.
The FBI’s AI Crime Recognition: What It Means and What It Doesn’t
The formal introduction of “AI-related” as a crime descriptor in the FBI IC3’s 2025 Annual Report is not a statistical finding. It is a definitional milestone.
For 26 years, the IC3 reported cybercrime by crime type — phishing, BEC, ransomware, tech support fraud — without distinguishing whether the attack was AI-assisted. The 2025 report changes this. It logged:
- 22,000+ AI-related complaints
- $900 million in AI-attributed losses
- Largest loss categories: investment fraud ($632M), BEC ($30M), tech support scams ($19.5M)
These figures almost certainly represent a fraction of the true scale. The IC3 can only count:
- Complaints that were filed (vast underreporting is documented across all cybercrime categories)
- Complaints that victims correctly identified as AI-related (most victims cannot determine whether voice cloning or AI-generated content was involved)
- Complaints that were formally attributed to this new crime descriptor
The $900 million figure is not a ceiling — it is a floor on an unknown room.
According to Axis Intelligence, the policy significance of this first formal AI crime designation is greater than the dollar figure. Law enforcement, regulatory agencies, insurance underwriters, and corporate governance boards will now begin using “AI-related cybercrime” as a formal risk category. This creates pressure for AI-enabled crime to be:
- Reported separately in cyber insurance filings
- Classified separately in incident response reporting
- Addressed specifically in regulatory compliance frameworks
The compliance and reporting architecture around AI-enabled social engineering will be built from this 2025 designation outward.
The Deepfake Threat: Quantified
Deepfake social engineering crossed from theoretical to operational in 2024-2025. The data from primary sources:
Volume:
- Contact center deepfake fraud attempts: +1,300% year-over-year (Pindrop 2025 Voice Intelligence Report) — from approximately 1 attempt per month to 7 per day
- Deepfake detection attempts on identity verification systems: +2,665% for Native Virtual Camera attacks (iProov 2025 Threat Intelligence)
- Face-swap attacks on identity verification: +300% (iProov 2025)
- Overall deepfake fraud detection: 4x increase from 2023 to 2024, now 7% of all Sumsub-tracked fraud attempts
- Deepfake vishing in Q1 2025: +1,633% vs. Q4 2024 (Keepnet Labs, March 2026)
Human detection capability:
- Share of participants in iProov’s 2025 study who could reliably identify modern deepfakes: 0.1%
- This figure means the human fallback — “we’ll verify by video call” — has failed as a verification mechanism for any organization that does not deploy liveness detection technology
Financial scale:
- Arup case: $25.6 million (video conference deepfake of CFO, February 2024)
- Average business deepfake loss (2024): $450,000 per affected organization (Regula)
- Projected total deepfake fraud losses globally by 2027: $40 billion (Javelin Strategy & Research, Deloitte)
- FBI IC3 fake job interview deepfake losses (2025): $13 million
The three-second threshold: Current voice cloning tools require as little as three seconds of source audio to produce an 85% accuracy voice match (McAfee research). Source audio is available from earnings calls, YouTube interviews, LinkedIn posts, company website “meet the team” videos, and public conference recordings for most senior executives at any organization with a public profile.
According to Axis Intelligence, the appropriate organizational response to the deepfake threat is not technology detection — it is process architecture. The organizations that avoided deepfake losses in documented 2025 cases (Ferrari executives, WPP executives) did so because of pre-established verification protocols — a specific callback procedure, a codeword system, or a required secondary authorization — not because they correctly detected the deepfake. The process was the defense, not the technology.
What the Numbers Don’t Show: The Under-Reporting Problem
Every statistic in this report represents the floor of the true threat, not the ceiling.
The FBI IC3 is the United States’ authoritative cybercrime loss database. In 2025, it received 1,048,000 complaints — nearly 3,000 per day, a record. Yet every major analysis of cybercrime under-reporting concludes that the IC3 captures a fraction of actual incidents:
- Victim embarrassment, particularly in BEC cases where an employee authorized a fraudulent transfer
- Corporate liability management — organizations frequently avoid reporting to prevent regulatory scrutiny or reputational damage
- Difficulty in determining that a social engineering attack occurred (many are detected only through downstream forensics, long after the original incident)
- International victims who do not have access to or awareness of US reporting mechanisms
The total $20.9 billion loss figure from IC3 2025 — already a record — is a documented undercount. Industry estimates typically place the true US cybercrime loss figure at 3–5x the IC3 reported number; applied to the ~76% social engineering share estimated above, that implies a true 2025 US social engineering impact of $47–$80 billion.
At the global level, the Cybersecurity Ventures projection of $10.5 trillion in annual global cybercrime costs by 2025 (often cited as a 2021 forecast now arriving) frames the scale that no single national report captures.
The Defense Posture: What Is Working and What Is Not
What Is Working
Phishing-resistant MFA. The combination of FIDO2/WebAuthn hardware keys and passkeys provides the only authentication mechanism that cannot be bypassed by social engineering short of physical theft. CISA’s guidance on phishing-resistant MFA reflects this: standard TOTP codes and push notifications are vulnerable to AiTM attacks and prompt bombing; hardware-bound credentials are not. ENISA’s Threat Landscape 2025 corroborates that AI-supported phishing accounted for more than 80% of observed social engineering activity by early 2025, making phishing-resistant authentication the single highest-priority control update for 2026.
Security awareness training — but only the right kind. KnowBe4’s 2025 phishing benchmarks document an 86% reduction in phishing click rates within 12 months for organizations running consistent simulation training. The caveat: this training remains predominantly email-focused in most enterprise programs. Given that voice phishing is now the #2 initial infection vector, organizations are training against a threat that has become less prevalent while the threat they don’t train against (vishing) rises.
Out-of-band verification protocols. Organizations with mandatory callback verification — requiring any credential, access, or financial transaction request to be verified through a separate, independently sourced communication channel — have documented resistance to vishing and BEC. The Ferrari case and WPP case, where senior executives were targeted by deepfake voice calls, resulted in no losses because the targets followed verification procedures that required secondary confirmation. Process beats technology when the technology can be faked.
52% of organizations now detect compromise internally (up from 43% in 2024, per M-Trends 2026). This improvement reflects better security monitoring, improved SIEM coverage, and greater behavioral detection capability. The cybersecurity professionals driving this improvement follow career paths that increasingly include human-layer threat expertise — see our cybersecurity career guide 2026 for the skills profile of analysts working on social engineering defense.
What Is Not Working
SMS/TOTP-based MFA. AiTM (Adversary-in-the-Middle) proxying tools — Tycoon2FA, EvilProxy, Modlishka — can steal session cookies from TOTP-authenticated sessions in real time, rendering traditional MFA ineffective against social engineering-initiated phishing. Microsoft reported that Tycoon2FA alone accounted for 62% of the phishing attacks targeting Microsoft customers that bypassed MFA. Any organization relying on SMS or TOTP codes as its primary MFA layer has a visible gap.
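The structural difference is that a WebAuthn assertion signs the origin the browser actually visited, while a TOTP code is valid regardless of where the victim types it. A toy model of that distinction (a conceptual sketch, not a real WebAuthn implementation):

```python
# Toy model of why AiTM proxies defeat TOTP but not WebAuthn.
# A TOTP code carries no notion of where it was entered, so a proxy can
# relay it in real time. A WebAuthn assertion includes the origin the
# browser saw, signed by the authenticator, so an assertion produced on
# a look-alike proxy domain fails the server's origin check.
# Conceptual sketch only; domains are hypothetical.

REAL_ORIGIN = "https://login.example.com"
PROXY_ORIGIN = "https://login.examp1e.com"  # attacker's AiTM look-alike

def totp_accepted(code_is_current: bool) -> bool:
    # Server checks only the code itself; a relaying proxy is invisible.
    return code_is_current

def webauthn_accepted(signed_origin: str) -> bool:
    # Server checks the origin embedded in the signed assertion.
    return signed_origin == REAL_ORIGIN

assert totp_accepted(True) is True               # proxied code: accepted
assert webauthn_accepted(PROXY_ORIGIN) is False  # proxied assertion: rejected
assert webauthn_accepted(REAL_ORIGIN) is True
```

This origin binding is why hardware-bound credentials resist the Tycoon2FA/EvilProxy class of tooling that TOTP cannot.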
Email-only security awareness programs. The Phishing Inversion documented in this report makes email-only training increasingly misaligned with the actual threat distribution. Vishing simulations, helpdesk social engineering drills, and deepfake awareness training are underrepresented in most enterprise security awareness programs relative to their current threat share.
Point-in-time security training. Annual compliance training does not build the reflexive skepticism that stops social engineering in real time. Verizon DBIR 2025 found that 8% of employees account for 80% of incidents — implying that targeted, repeated simulation for high-risk roles outperforms broad annual training.
Three Scenarios for Social Engineering 2027–2028
These are analytical scenarios developed by Axis Intelligence from documented AI capability trajectories, attacker economic incentive analysis, and current defense posture trends. They are not predictions — they are structured possibilities that organizations should stress-test against.
Scenario 1: The Agentic Threshold (Probability: High)
AI agents — systems capable of executing multi-step tasks autonomously — enter commercial social engineering operations at scale. The initial form: automated spear-phishing campaigns that conduct OSINT reconnaissance, generate personalized lures, send at optimal timing, and respond to target replies without human attacker involvement. This already exists in experimental form. By 2027, commercial-grade agentic social engineering platforms emerge in the PhaaS marketplace.
Defense implication: Detection of social engineering moves from human-behavior analysis to AI-traffic analysis. Email security systems must detect AI-generated patterns in addition to malicious content. The grammar and style signals that human-crafted phishing displayed are gone; new signals (timing patterns, language model artifacts, behavioral sequences) become the detection surface.
Financial implication: If agentic social engineering removes the human labor cost from phishing campaigns while maintaining the quality improvement AI already provides, the cost-to-attack ratio drops by another order of magnitude. The democratization of social engineering reaches its logical endpoint: any threat actor with a cloud account and $50/month can run a professional-grade spear-phishing operation.
Scenario 2: The AI Agent Attack Surface (Probability: Medium)
Enterprise AI agents — customer service bots, internal knowledge assistants, procurement automation — become social engineering targets as they expand. Attackers discover that AI agents can be manipulated with text-based prompts (prompt injection) to take actions that humans would refuse. A threat actor who successfully social-engineers an enterprise AI agent gets the same outcome as social-engineering a human employee — but without the human detection capability.
Mandiant documented early indicators of this in 2025: the QUIETVAULT credential stealer checked compromised machines for AI CLI tools and executed prompts to search for configuration files. This is attacker-side AI exploitation; the 2027 variant targets customer-facing AI agents.
Defense implication: AI agent deployment requires the same social engineering awareness that human employee deployment requires — plus a new category of AI-specific controls (prompt injection protection, input validation, action authorization chains). Organizations deploying AI agents for customer service, internal IT support, or financial operations in 2026 are deploying an attack surface that their current security awareness programs don’t address.
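The “action authorization chain” control mentioned above can be sketched as a gate between what an agent proposes and what executes: model output is treated as an untrusted proposal, checked against an allowlist, with sensitive actions parked for human approval. Action names and the threshold below are illustrative, not a product API:

```python
# Sketch of an action authorization chain for an enterprise AI agent.
# The agent's proposed action is untrusted input (it may be the result
# of prompt injection), so it is validated against an allowlist and a
# sensitivity threshold before anything executes.
# ALLOWED_ACTIONS and the $100 threshold are illustrative assumptions.

ALLOWED_ACTIONS = {"lookup_order", "send_status_email", "issue_refund"}

def authorize(action: str, params: dict) -> str:
    """Return 'execute', 'escalate', or 'deny' for an agent-proposed action."""
    if action not in ALLOWED_ACTIONS:
        return "deny"        # e.g. an injected "export_all_customers"
    if action == "issue_refund" and params.get("amount", 0) > 100:
        return "escalate"    # human approves, with the raw request attached
    return "execute"

assert authorize("export_all_customers", {}) == "deny"
assert authorize("issue_refund", {"amount": 500}) == "escalate"
assert authorize("lookup_order", {"order_id": "A1"}) == "execute"
```

The point of the gate is that no amount of persuasive prompting changes the allowlist; the agent can be socially engineered, but the authorization chain cannot.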
Scenario 3: The Regulatory Response (Probability: High)
Regulatory bodies in the US and EU respond to the AI fraud recognition with mandatory reporting requirements, AI crime labeling, and enterprise AI governance standards that include social engineering risk management. The EU AI Act’s synthetic media provisions, the Take It Down Act’s deepfake criminalization, and the FBI’s first AI crime descriptor all point toward an accelerating regulatory response.
By 2027-2028, enterprises in regulated industries (financial services, healthcare, critical infrastructure) face specific AI-enabled fraud risk reporting requirements in addition to existing breach notification obligations. Insurance underwriters build AI social engineering coverage as a distinct cyber policy endorsement.
Enterprise implication: The compliance cost of AI-enabled social engineering risk will become a budget line item alongside ransomware coverage. Organizations without documented AI social engineering controls will face premium increases and potential coverage exclusions. The 2025 FBI AI crime designation is the regulatory starting gun.
Frequently Asked Questions
What is the state of social engineering in 2026?
Social engineering in 2026 is structurally different from prior years — not more dangerous on a linear scale, but qualitatively transformed. The key finding from Axis Intelligence’s analysis of 14 primary sources: social engineering has industrialized. The median time between a social engineering initial access event and handoff to a secondary attacker group collapsed from 8 hours in 2022 to 22 seconds in 2025 (Mandiant M-Trends 2026). Cybercrime losses exceeded $20.9 billion for the first time (FBI IC3 2025). Voice phishing rose to the #2 initial infection vector while email phishing fell to 6%. AI-enabled fraud surged 1,210% in 2025. For a foundational explanation of social engineering tactics and techniques, see our complete social engineering guide.
Has AI taken over social engineering attacks?
AI has dramatically amplified social engineering — but the primary sources correct the hype. Mandiant’s M-Trends 2026, based on 500,000+ hours of frontline incident response, explicitly states that “we do not consider 2025 to be the year where breaches were the direct result of AI.” Most successful intrusions still stem from human and systemic failures. What AI has done: eliminated the quality gap between sophisticated and mass-market attacks (4x higher click rates, voice cloning from 3 seconds of audio, indistinguishable deepfake video). AI is a force multiplier, not yet an autonomous threat.
Why did cybercrime losses hit a record $20.9 billion in 2025?
The FBI IC3’s 2025 Annual Report documented $20.9 billion in total US cybercrime losses, up 26% from 2024’s record $16.6 billion. The primary drivers: AI-enabled scaling of existing social engineering attack types (BEC, investment fraud, tech support scams), the rise of voice phishing to the #2 initial infection vector, and the industrialization of the attack supply chain, which drove down the cost of mounting an attack while AI simultaneously improved attack quality. The $20.9 billion is also a documented undercount: IC3 captures only the fraction of total cybercrime losses that victims report.
What is the most dangerous social engineering attack in 2026?
Based on financial impact and detection difficulty, BEC (Business Email Compromise) combined with voice phishing represents the highest-consequence attack category. BEC alone cost $2.77 billion in FBI-reported US losses in 2024. The combination of voice phishing to establish trust, BEC email to initiate the fraudulent wire, and AI-generated content to pass quality filters creates a multi-vector attack that bypasses most point-in-time defenses. For detail on breach costs from social engineering attacks, see our data breach statistics hub.
What is vishing and why is it rising so dramatically?
Vishing (voice phishing) is social engineering conducted over telephone calls. It rose to 11% of all initial infection vectors in 2025 and 23% in cloud-related compromises (Mandiant M-Trends 2026), after surging 442% from H1 to H2 2024 (CrowdStrike). It is rising because it bypasses all email security controls, it triggers stronger authority-compliance and urgency responses through real-time voice interaction, it can be amplified with AI voice cloning, and it specifically targets helpdesk workflows designed for speed rather than security verification. See our phishing statistics hub for related data.
What is the Phishing Inversion?
Axis Intelligence designates the 2023-2025 period as “The Phishing Inversion” — the completion of a structural transition in which email phishing, dominant for two decades, fell to 6% of initial infection vectors (Mandiant M-Trends 2026) while voice phishing rose to 11% overall and 23% in cloud-related compromises. The inversion reflects rational attacker adaptation: email security controls improved dramatically while voice channels remained unprotected. Organizations whose security awareness training is still email-centric are training against the wrong threat distribution.
What does the 22-second handoff time mean for defense?
The 22-second median time between initial social engineering access and handoff to a secondary threat group (Mandiant M-Trends 2026, down from 8+ hours in 2022) means that post-compromise detection of the social engineering event has limited utility. By the time a security operations team detects the anomalous access, the access has already been transferred to a ransomware group or espionage actor that appears as a separate threat actor. Defense priority must shift from detection-after-social-engineering to prevention-before-social-engineering: phishing-resistant MFA, out-of-band verification protocols, and behavioral controls that stop the initial access from succeeding.
What sectors are most targeted by social engineering in 2026?
Based on Mandiant M-Trends 2026 data: high technology leads at 17% of investigations (rising from #2 to #1), followed by financial services (14.6%), business and professional services (13.3%), and healthcare (11.9%). High technology’s rise reflects the specific targeting strategy of groups like Scattered Spider/UNC3944, who exploit cloud SaaS environments via vishing against IT helpdesks. Healthcare’s continued high targeting reflects the combination of high-value patient data, compliance-driven urgency culture, and historically lower security investment.
What is the best defense against social engineering in 2026?
The defenses that primary-source data confirms as effective: phishing-resistant FIDO2/WebAuthn MFA (eliminates credential theft as a viable attack outcome); mandatory out-of-band verification protocols for all credential, access, and financial transaction changes (stops vishing and BEC even when deepfake voice is used); vishing-specific security awareness training and simulation (addresses the threat that now accounts for more initial access than email phishing in cloud environments); and the 8% targeting principle: concentrated simulation training for high-risk roles (helpdesk, finance, executive assistants, new hires). For protecting personal accounts connected to enterprise environments, a quality password manager and phishing-resistant MFA are the highest-value individual defenses.
How does ransomware connect to social engineering?
Ransomware and social engineering are now structurally integrated through the initial access partner ecosystem. Social engineering provides the initial access; ransomware is the downstream payload. Mandiant M-Trends 2026 documents that ransomware-related intrusions accounted for 13% of investigations in 2025, with operators systematically targeting backup infrastructure, identity services, and virtualization management to deny recovery — not just encrypting data. The 22-second handoff means that a social engineering success today initiates a ransomware deployment within seconds. For context on ransomware’s financial scale, see our ransomware statistics hub.
Methodology and Data Sources
This report was produced by the Axis Intelligence Research Desk using the following primary sources. All statistics are cited inline with their source. No statistics were fabricated, interpolated without disclosure, or sourced exclusively from vendor marketing materials.
Primary sources (full-year 2024 and 2025 telemetry):
- Mandiant M-Trends 2026 (Google Cloud/Mandiant, March 2026) — 500,000+ incident response hours
- FBI Internet Crime Complaint Center Annual Report 2025 (April 2026)
- FBI Internet Crime Complaint Center Annual Report 2024 (April 2025)
- Verizon Data Breach Investigations Report 2025
- IBM Cost of a Data Breach Report 2025
- CrowdStrike Global Threat Report 2025
- ENISA Threat Landscape 2025 (European Union Agency for Cybersecurity)
- APWG Phishing Activity Trends Reports Q2–Q4 2025
- KnowBe4 Phishing Industry Benchmarks 2025
- Pindrop Voice Intelligence and Security Report 2025
- iProov Threat Intelligence Report 2025
- Sumsub Identity Fraud Report 2024
- Keepnet Labs research (March 2026)
- Abnormal Security 2025 Email Threat Report
- Vectra AI analysis (March 2026)
- Javelin Strategy & Research identity fraud projections
- Named corporate incident records: Arup ($25.6M, February 2024), WPP (2025), Ferrari (2025)
Financial Impact Tracker methodology: Total FBI IC3 figures are directly from annual reports. Social engineering share estimates are Axis Intelligence calculations based on IC3 crime category breakdowns (BEC, phishing, government impersonation, tech support fraud, confidence/romance fraud) applied to total loss figures, cross-referenced against IBM Cost of a Data Breach Report attribution data. The 2025 social engineering losses figure uses a 76% estimated share of the $20.9B total, consistent with the share of IC3-categorized crimes that are social engineering-based. 2027 projections apply the documented 26-28% annual growth trend with adjustment for AI amplification.
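The share and projection arithmetic described above can be sketched in a few lines. This is a minimal illustration of the stated calculations only: the 76% share and 26-28% growth band are Axis Intelligence estimates from this report, and the disclosed "AI amplification" adjustment is not modeled here.

```python
# Financial Impact Tracker arithmetic (illustrative sketch).
# IC3 totals are from FBI annual reports; the 76% social engineering
# share and 26-28% growth band are Axis Intelligence estimates.

IC3_TOTAL_2025 = 20.9   # total US cybercrime losses, $B (FBI IC3 2025)
SE_SHARE = 0.76         # estimated social engineering share of losses

# Estimated 2025 social engineering losses, $B
se_losses_2025 = IC3_TOTAL_2025 * SE_SHARE

def project(base: float, annual_rate: float, years: int) -> float:
    """Compound an annual growth rate over the given number of years."""
    return base * (1 + annual_rate) ** years

# 2027 projection band: compound the documented 26-28% annual growth
# for two years from the 2025 baseline (AI adjustment not modeled).
low_2027 = project(IC3_TOTAL_2025, 0.26, 2)
high_2027 = project(IC3_TOTAL_2025, 0.28, 2)

print(f"Estimated 2025 SE losses: ${se_losses_2025:.1f}B")
print(f"Projected 2027 total losses: ${low_2027:.1f}B-${high_2027:.1f}B")
```

Running the sketch yields roughly $15.9B in estimated 2025 social engineering losses and a 2027 total-loss band of about $33.2B to $34.2B before any AI-amplification adjustment.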
This report contains no affiliate links. No vendors provided financial support for this research. The Axis Intelligence Research Desk operates with editorial independence from all commercial relationships.
