How AI Phishing Targets US Banks, Fintechs and Insurers in 2026

Phishing is the leading cause of data breaches in US financial services — and AI has made it fundamentally harder to stop. 76% of US organisations experienced attempted or actual payments fraud in 2025, according to the 2026 AFP Payments Fraud and Control Survey published this week. Business email compromise affected 74% of organisations, a significant increase from both 2023 and 2024. Yet only 17% of those organisations are using AI to fight back.

The gap between how fast attackers are moving and how slowly defences are adapting is where US financial institutions are being compromised. This guide covers how AI-powered phishing works in 2026, what US banks and fintechs are facing, the regulatory expectations now in play, and the specific controls that reduce exposure.

For broader context on how cybercrime is targeting the US financial sector, read our guide to banking cybercrime.

What makes AI phishing different from traditional phishing

Traditional phishing relied on volume. Attackers sent millions of generic emails and waited for a small percentage to click. Security teams learned to filter them based on typos, suspicious domains, and unusual sender patterns. That playbook no longer works.

82.6% of phishing emails detected between September 2024 and February 2025 used AI — a 53.5% year-on-year increase. AI-generated phishing emails have a 60% higher click rate than traditionally crafted ones, according to a University of Oxford study. The “poorly written email equals phishing” filter is obsolete.

Here is what AI enables that wasn’t possible before:

  • Personalisation at scale. AI scrapes breached data, LinkedIn profiles, company websites and press releases to generate emails that reference real colleagues, real projects and real internal terminology. A finance team member at a US bank receives a message that appears to be from their CFO referencing a real upcoming wire transfer. Nothing about the email triggers a standard filter.
  • Voice cloning and deepfake audio. Attackers use AI to clone executive voices from earnings calls, media appearances and conference recordings. In documented 2025 incidents, US bank employees received calls from what sounded exactly like their CEO authorising urgent wire transfers. The FFIEC, OCC and CFPB have all flagged voice-based social engineering as a priority concern for US financial institutions.
  • Adaptive campaigns that learn from failure. Where human attackers take days to pivot after a blocked attempt, AI-driven campaigns adjust in hours. A US financial technology firm blocked an initial AI phishing wave in early 2025, only to face a modified version six hours later using entirely different linguistic patterns and sender addresses.
  • Callback phishing targeting financial brands specifically. Between October 2025 and January 2026, 27.1% of callback phishing campaigns impersonated financial services brands including PayPal, Venmo and Bank of America — the most targeted sector in the dataset.

Who is being targeted and how

Financial services account for 23.5% of all phishing attacks globally according to the APWG — the single most targeted sector — because stolen credentials from a bank employee or customer have significantly higher monetisation value than credentials from almost any other industry.

Within US financial institutions, five roles are disproportionately targeted:

Treasury and wire transfer teams are the highest-value target. A single successful AI phishing attack on a treasury function can redirect a wire transfer worth millions. The FBI reported BEC losses exceeding $2.9 billion in 2024, with US financial services firms absorbing the largest share.

Loan officers and mortgage teams are targeted for customer PII. A successful compromise gives attackers Social Security numbers, income data, and property details — enough to commit identity fraud at scale.

Executive assistants manage calendar access, travel bookings, email forwarding rules and vendor payments. A compromised executive assistant account gives attackers proximity to the C-suite without directly targeting it.

IT helpdesk staff are targeted through vishing and callback phishing. Attackers impersonate employees, request password resets, and use the access granted to escalate privileges within the network.

Customer-facing staff at retail banking branches and call centres are targeted for credential harvesting via fake internal portals — giving attackers access to customer account management systems.

The US regulatory picture in 2026

Regulators are no longer treating AI phishing incidents as user error. The framing has shifted to control failure — meaning the institution bears responsibility for not having adequate detection and response capability in place.

  • FFIEC guidance establishes that phishing-related account takeover and fraud incidents fall within the institution’s information security programme obligations. The FFIEC Cybersecurity Assessment Tool was retired in August 2025, but the underlying expectation — that institutions continuously assess and improve phishing controls — remains in force and is being examined under updated frameworks.
  • OCC and FDIC have both issued guidance making clear that AI-related cyber risks, including AI-enabled phishing and social engineering, must be addressed within existing model risk management and third-party risk frameworks. The December 2025 NIST Cyber AI Profile harmonises requirements from the Federal Reserve, OCC and FDIC into a unified cybersecurity framework for AI risk — setting the new baseline for how US banks demonstrate control maturity.
  • CFPB has warned that phishing-related account takeover events may fall under UDAAP — Unfair, Deceptive or Abusive Acts or Practices — if institutions lack adequate detection and consumer protection controls. This elevates a cybersecurity incident into a consumer protection compliance matter with significant penalty exposure.
  • The Computer-Security Incident Notification Rule mandates 36-hour incident reporting for significant cybersecurity events at US banks. A successful AI phishing campaign that compromises customer accounts or enables fraudulent transactions triggers this requirement — making detection speed a compliance issue, not just an operational one.

What a US financial services AI phishing attack looks like in practice

August 2025 — Marquis Software supply chain attack: A cyberattack on Marquis Software, a vendor providing data analytics and communication software to US financial institutions, affected at least 74 banks and credit unions across the United States. Regulatory filings in March 2026 confirmed the breach exposed the personal and financial data of between 672,000 and 1.35 million people, including Social Security numbers and financial account details. The attack vector began with credential compromise — the type of initial access AI-powered phishing excels at establishing.

Ongoing 2025/2026 — IRS and tax season targeting: Each year between January and April, US financial services customers are targeted with IRS-themed phishing campaigns. In 2025 and into 2026, these campaigns evolved to include AI-generated voice calls impersonating IRS agents, fake CPA firms sending personalised tax documents, and SMS phishing campaigns exploiting the high-trust period around tax deadlines. Financial services firms see a measurable spike in account takeover attempts during Q1 each year as a direct result.

Callback phishing against US banks: In documented campaigns throughout 2025, attackers impersonated PayPal, Bank of America and Venmo customer support, directing US consumers to call numbers staffed by AI-assisted human operators who guided them through credential and MFA token harvesting. The social engineering was sophisticated enough that generic employee training showed no measurable effect on click or call rates at a tracked US fintech firm with 12,511 employees.

Controls that work — and ones that don’t

What doesn’t work in 2026:

Annual phishing awareness training has been shown to have no statistically significant effect on click rates against AI-generated attacks. Legacy email filters built on signature-based detection miss AI-generated messages because they contain no malicious signatures. Single-factor authentication and SMS-based OTP are both routinely bypassed by AI-assisted social engineering.

What works:

Behavioural anomaly detection looks for deviations from normal employee behaviour patterns rather than matching known attack signatures. When a treasury team member who never initiates wire transfers suddenly authorises one, the system flags it regardless of whether the email that preceded it passed all technical filters.
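The idea can be sketched in a few lines. This is a hypothetical, simplified illustration — real products model far richer signals (time of day, device, peer group) — but it shows the core shift from signature matching to baseline deviation. All names (`BehaviouralBaseline`, `is_anomalous`) are illustrative, not any vendor's API.

```python
from collections import Counter, defaultdict

class BehaviouralBaseline:
    """Tracks how often each user performs each action and flags deviations."""

    def __init__(self, min_history: int = 20):
        self.history = defaultdict(Counter)  # user -> action -> count
        self.min_history = min_history

    def record(self, user: str, action: str) -> None:
        self.history[user][action] += 1

    def is_anomalous(self, user: str, action: str) -> bool:
        seen = self.history[user]
        total = sum(seen.values())
        if total < self.min_history:
            return False  # not enough history to judge
        # Flag actions the user performs in under 1% of their activity
        return seen[action] / total < 0.01

# A treasury analyst who reviews reports but has never initiated a wire
monitor = BehaviouralBaseline(min_history=20)
for _ in range(200):
    monitor.record("analyst_a", "view_report")
print(monitor.is_anomalous("analyst_a", "initiate_wire"))  # True
print(monitor.is_anomalous("analyst_a", "view_report"))    # False
```

Note that the flag fires on the action itself, independently of whether the phishing email that prompted it passed every upstream filter.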

Phishing-resistant MFA — specifically FIDO2 and hardware security keys — cannot be bypassed by credential harvesting because authentication is cryptographically bound to the legitimate domain. US financial regulators increasingly expect phishing-resistant MFA on all privileged accounts.
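The domain binding is what defeats look-alike sites. The toy sketch below uses a symmetric HMAC as a stand-in for WebAuthn's asymmetric signature and is not the real wire format — but it captures the mechanism: the browser, not the user, supplies the origin it actually connected to, so an assertion captured on a phishing domain never verifies against the genuine one.

```python
import hashlib
import hmac

SECRET_KEY = b"device-private-key"  # stand-in for the authenticator's key pair

def sign_assertion(origin: str, challenge: bytes) -> bytes:
    # The signed payload binds the browser-reported origin to the challenge
    payload = hashlib.sha256(origin.encode()).digest() + challenge
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()

def verify(expected_origin: str, challenge: bytes, signature: bytes) -> bool:
    payload = hashlib.sha256(expected_origin.encode()).digest() + challenge
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

challenge = b"server-issued-nonce"
# User on the genuine site: origins match, assertion verifies
good = sign_assertion("https://bank.example", challenge)
print(verify("https://bank.example", challenge, good))   # True
# User phished onto a look-alike: origins differ, assertion fails
bad = sign_assertion("https://bank-example.com", challenge)
print(verify("https://bank.example", challenge, bad))    # False
```

There is no credential for the victim to type and nothing reusable for the attacker to harvest, which is why regulators single out this class of MFA.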

Continuous credential monitoring closes the window between when employee credentials are compromised — often via an AI phishing campaign — and when attackers use them. The gap between credential theft and account takeover is often days to weeks, giving security teams an intervention window if they have visibility into credential exposure in real time.
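One common pattern for this kind of monitoring is a k-anonymity range check, where only a short hash prefix ever leaves the institution and matching happens client-side. The sketch below simulates that flow against a hypothetical local breach corpus (`BREACHED` is made up for illustration); a production system would query a live credential-exposure feed instead.

```python
import hashlib

# Hypothetical breach corpus: SHA-1 hashes of known-compromised passwords
BREACHED = {
    hashlib.sha1(p).hexdigest().upper()
    for p in [b"hunter2", b"Spring2026!"]
}

def exposed(password: str) -> bool:
    """k-anonymity-style check: only the 5-char hash prefix would be sent
    in a real range query; the suffix comparison stays client-side."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # Simulate the range response: all corpus suffixes sharing the prefix
    candidates = {h[5:] for h in BREACHED if h.startswith(prefix)}
    return suffix in candidates

print(exposed("hunter2"))   # True - circulating in the corpus
print(exposed("x9!Lq-unbreached"))
```

Run continuously against employee accounts, a hit on this check is the trigger to force a reset and review sessions before the stolen credential is used.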

External threat intelligence gives financial services teams early warning of campaigns being coordinated against their brand on dark web forums and Telegram channels before they surface as live attacks. Knowing that a threat actor is preparing a campaign targeting your institution’s customers gives your SOC team time to update filters, alert customer service staff, and notify regulators if appropriate.

Early warning through dark web monitoring can give security teams days of lead time — credentials circulating on criminal markets are often the first signal that a phishing campaign has succeeded and account takeover is imminent.

A final word

AI-powered phishing is not an emerging threat for US financial services — it is the current operating environment. 76% of US firms faced payments fraud in 2025. Regulators at the FFIEC, OCC, FDIC and CFPB are treating control failures as institutional failures, not user errors. The financial sector breach average is $5.9 million per incident, not including regulatory follow-up.

The institutions managing this threat effectively share one capability — continuous visibility into external exposure before attacks reach employees. That means knowing when credentials are circulating on dark web markets, when threat actors are targeting your brand, and when your external attack surface has new vulnerabilities that AI-powered campaigns will find before your team does.

CybelAngel’s Credential Intelligence and Dark Web Monitoring give US financial services security teams continuous visibility into compromised credentials, active threat actor campaigns, and external exposure — alerting your team before attackers act on what they find.

FAQ

What is AI-powered phishing?

AI-powered phishing uses machine learning and generative AI to craft personalised, convincing phishing emails, voice calls and messages at scale — removing the typos and generic patterns that traditional filters detect.

Why are US financial services the most targeted sector?

Stolen credentials from US banks, fintechs and insurers have significantly higher monetisation value than most other sectors, enabling direct payment fraud, account takeover and identity theft at scale.

Which US regulators cover AI phishing incidents?

The FFIEC, OCC, FDIC and CFPB all have active guidance. Phishing-related incidents can trigger the Computer-Security Incident Notification Rule’s 36-hour reporting requirement and CFPB UDAAP exposure if consumer protection controls are inadequate.

Does phishing awareness training still work?

Research tracking 12,511 employees at a US fintech firm found generic training had no statistically significant effect on click rates against AI-generated phishing. Behavioural detection and phishing-resistant MFA are more effective controls.

How much does a phishing-related breach cost?

Financial sector breaches average $5.9 million in damage per incident, not including regulatory penalties or reputational loss.

About the author