
The 2024 Hong Kong incident, in which attackers used video deepfakes of the CFO and several colleagues on a Microsoft Teams call to trick an employee into authorising a $25 million transfer from Arup's Hong Kong office, was the wake-up call for UK boardrooms. Until then, deepfake fraud had been a hypothetical risk; now it sits in the same threat tier as ransomware. UK incidents in 2024 and 2025 have followed: cloned-voice approvals of supplier bank-detail changes, deepfaked Teams calls impersonating senior executives, and AI-generated phishing emails with perfect English, accurate context and personalised hooks scraped from LinkedIn.
This guide explains the AI-driven phishing and deepfake threats UK businesses face in 2026, why traditional defences don't fully work, and the layered controls (technical, process and human) that do. It is written for business leaders making investment decisions, not just IT teams.
How AI has changed the phishing threat
Three structural shifts since 2023:
1. Phishing emails are now grammatically perfect and contextually accurate
Pre-2023, the most common indicators of a phishing email were bad grammar, awkward phrasing and a generic salutation. Generative AI has made all three obsolete. Modern phishing emails:
- Use perfect British English (or any other dialect requested).
- Reference the recipient’s actual job title, recent LinkedIn activity, and known projects.
- Mimic the writing style of named colleagues based on public posts and previous email threads.
- Adapt tone to the recipient’s seniority — formal for partners, breezy for trainees.
Awareness training advice that focuses on grammar errors is now actively misleading. Modern training should focus on what the email asks for and whether the request follows legitimate channels.
2. Voice cloning is now consumer-grade
Voice-cloning subscriptions costing under £30 a month can produce convincing speech from as little as 30 seconds of source audio, which is routinely available from podcast appearances, YouTube interviews, public-speaking videos and even voicemail greetings. Attackers now use cloned voices for:
- Fake CEO calls to finance teams authorising urgent payments.
- Cloned IT-helpdesk voices socially engineering users into approving MFA prompts.
- Cloned customer voices authorising fraudulent transactions on regulated-services accounts.
3. Real-time video deepfakes are viable in 2026
Following the Arup incident, UK businesses must assume that any senior executive with a moderate public profile (LinkedIn videos, conference talks, media interviews) can be deepfaked on a live video call on a modest budget. The 2026 reality:
- Real-time face-swap is achievable on consumer GPUs with open-source tooling.
- Modern deepfakes pass casual visual inspection on small video tiles (Teams, Zoom, Meet) at typical resolutions.
- Detection is non-trivial on a moving video call — static-image detection tools don’t apply.
The five most common UK AI-phishing patterns
1. AI-personalised spear phishing
Attacker scrapes LinkedIn for the target’s recent posts, projects and connections. Generates a personalised email referencing genuine context (e.g. “Following our connection at the IFA conference last week…”). Includes a credible reason to click a link or open an attachment.
2. Voice-cloned CEO authorisation calls
Attacker calls the finance team using a cloned CEO voice and asks them to authorise a payment that's "already in your inbox". The accompanying email is look-alike-domain BEC; see our separate guide for more on BEC defence.
3. Deepfaked video Teams calls
Attacker schedules a Teams call with the finance team or trustees, joining as a deepfaked CEO/chair. Asks for approval of a transfer or contract. Variants include deepfakes of legal counsel approving a settlement payment.
4. AI-helpdesk impersonation
Cloned voice of internal IT helpdesk calls a user asking them to "approve the MFA prompt I'm about to send", bypassing MFA via social engineering rather than a technical exploit.
5. AI-generated supplier identity fraud
Attacker creates a fake supplier with AI-generated company website, AI-generated employee LinkedIn profiles, AI-generated documents (insurance certificates, accounts, references). The fake supplier wins business via competitive pricing, then disappears with the deposit.
Why traditional defences are insufficient
Most existing UK cyber defences were built for pre-AI phishing. Specific gaps:
- Email filters keyed on grammar/formatting: Legacy spam filters and even some advanced phishing engines still partially key on linguistic markers that no longer apply.
- Awareness training focused on grammar errors: See above.
- Voice-only verification: "I'll call them to verify" only works if the caller ID can be trusted, the voice can't be cloned, and the caller isn't being coerced. None of these holds in 2026.
- Single-channel approvals: An apparent CEO email plus an apparent CEO phone call can now both be faked, so a payment that once needed two channels now needs three or four independent verification channels.
- Static identity verification: Photo-ID checks and even live selfie checks can be deepfaked with consumer tooling.
What works in 2026: layered defences for AI-driven attacks
Technical layer
- Modern AI-aware email security (Sublime Security, Material, IRONSCALES, Egress, Tessian) trained on language-model output, not just signature- and grammar-based detection.
- DMARC at p=reject for your own domain to stop exact-domain spoofing (look-alike domains are covered by the monitoring below; a minimal policy check is sketched after this list).
- Phishing-resistant MFA (FIDO2 hardware keys or platform passkeys) for any account with payment-authorisation power. Push-notification MFA is bypassable via prompt-bombing, and SMS MFA via SIM-swap.
- Look-alike domain monitoring for proactive detection of typo-squat registrations; see the sketch after this list.
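To make the DMARC and look-alike-domain items concrete, here is a minimal sketch in Python using the dnspython library. The domain, the permutation rules and the resolve-based registration check are illustrative assumptions; dedicated monitoring services generate thousands of permutations, including homoglyph and keyboard-adjacency variants, and check registration data rather than DNS resolution alone.

```python
# Minimal sketch: verify your own DMARC policy is p=reject and probe
# a few crude look-alike domains. Requires dnspython (pip install dnspython).
# "example.co.uk" is an illustrative placeholder for your own domain.
import dns.resolver

DOMAIN = "example.co.uk"


def dmarc_policy(domain: str) -> str | None:
    """Return the DMARC policy tag (p=) for a domain, or None if absent."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.lower().startswith("v=dmarc1"):
            for tag in record.split(";"):
                key, _, value = tag.strip().partition("=")
                if key.lower() == "p":
                    return value.strip()
    return None


def lookalike_candidates(domain: str) -> list[str]:
    """Generate a few crude typo-squat permutations (illustrative only)."""
    name, _, suffix = domain.partition(".")
    swaps = {"o": "0", "i": "1", "l": "1", "e": "3"}
    # Dropped-letter variants: "exmple.co.uk", "exampl.co.uk", ...
    candidates = [name[:i] + name[i + 1:] + "." + suffix
                  for i in range(len(name))]
    # Simple character-substitution variants: "examp1e.co.uk", ...
    candidates += [name.replace(a, b) + "." + suffix
                   for a, b in swaps.items() if a in name]
    return sorted(set(candidates) - {domain})


def is_registered(domain: str) -> bool:
    """Rough check: does the domain resolve at all?"""
    try:
        dns.resolver.resolve(domain, "A")
        return True
    except Exception:
        return False


if __name__ == "__main__":
    policy = dmarc_policy(DOMAIN)
    print(f"DMARC policy for {DOMAIN}: {policy or 'MISSING'}")
    if policy != "reject":
        print("  -> not at p=reject; exact-domain spoofing is possible")
    for candidate in lookalike_candidates(DOMAIN):
        if is_registered(candidate):
            print(f"  suspicious registration: {candidate}")
```

Run weekly from a scheduled job, this gives a crude early-warning signal; treat any hit as a candidate for takedown or blocklisting rather than proof of an attack in progress.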
Process layer (the most under-invested)
- Two-channel verification protocol: Any payment, bank-detail change or sensitive data request must be verified on a second, pre-arranged channel — not on a channel chosen by the requester.
- Pre-shared verbal challenge codes: Senior executives and finance teams agree a rotating challenge code that must be exchanged on any voice call requesting payment authorisation. The code is renewed quarterly and never sent over email (a minimal rotation sketch follows this list).
- Cooling-off rules: Any urgent payment received after a defined daily cut-off waits until the next morning unless verbally re-authorised by two named people.
- Out-of-band CEO confirmation: Whenever a CEO appears to authorise a payment via email or video, the finance team contacts the CEO via a different channel (their personal mobile or an in-person check) before acting.
- Bank-details verification: Bank account details for suppliers are only changed via a documented multi-channel verification, never by email alone.
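Of these, the challenge-code rotation is the easiest to mechanise. A minimal sketch, assuming a shared secret distributed once in person and a two-word code derived per quarter with HMAC (the word list and secret below are illustrative placeholders):

```python
# Minimal sketch: derive a quarterly verbal challenge code from a
# shared secret. The secret is agreed once, out of band, by the
# executives and finance staff who need it; the code then changes
# every quarter without any further communication.
import datetime
import hashlib
import hmac

# Illustrative placeholder; in practice, a long random value agreed in person.
SHARED_SECRET = b"replace-with-a-long-random-secret"

WORDS = ["anchor", "beacon", "copper", "dragon", "ember", "falcon",
         "garnet", "harbour", "ivory", "jigsaw", "kestrel", "lantern"]


def quarterly_code(secret: bytes, when: datetime.date | None = None) -> str:
    """Return a two-word challenge code for the quarter containing `when`."""
    when = when or datetime.date.today()
    quarter = f"{when.year}-Q{(when.month - 1) // 3 + 1}"
    digest = hmac.new(secret, quarter.encode(), hashlib.sha256).digest()
    # Pick two words deterministically from the digest bytes.
    first = WORDS[digest[0] % len(WORDS)]
    second = WORDS[digest[1] % len(WORDS)]
    return f"{first}-{second}"


if __name__ == "__main__":
    print("Current challenge code:", quarterly_code(SHARED_SECRET))
```

Because the code is derived rather than circulated, nothing needs to be emailed at rotation time: anyone holding the secret can compute the current quarter's code independently.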
Human / awareness layer
- Updated awareness training reflecting 2026 phishing realism. Drop the “spot the bad grammar” module entirely.
- Deepfake briefings for senior executives and finance teams with examples from real incidents.
- Tabletop exercises simulating deepfake CEO calls for finance teams.
- Public-profile audit for senior executives: how much voice and video material is publicly available? Where? What’s the realistic deepfake risk?
Insurance layer
- Cyber insurance with explicit social-engineering cover and a realistic BEC sub-limit; see our UK cyber insurance guide.
- Confirm the policy covers losses from deepfake-enabled authorisation, not just classic email-based BEC.
- Confirm the policy doesn’t exclude losses where verification protocols weren’t followed — this is becoming the most common rejection reason.
Detecting deepfake video calls in real time
If you suspect a video call may be a deepfake, these indicators still help in 2026:
- Ask for unexpected physical actions: “Can you turn 90 degrees and look at the wall behind you, then look back?” Real-time deepfakes struggle with significant profile rotation.
- Hand-to-face actions: “Could you hold a piece of paper in front of your face for a second?” Most consumer deepfakes show distortion or artefacts when objects pass over the face.
- Unprepared specifics: Reference a recent shared experience that wouldn’t be on social media or LinkedIn (“What did we eat at the team lunch on Friday?”).
- Channel switching: "Can you call me back on your mobile in 5 minutes?" A real CEO can; an attacker usually cannot. Better still, hang up and call their known number yourself.
- Lighting and shadow inconsistencies: Deepfaked faces sometimes light differently from the body and surrounding scene.
None of these is foolproof, and detection capability degrades as deepfake quality rises. The robust defence is the process layer above (verification on a separate channel), not real-time deepfake spotting on the call itself.
Frequently Asked Questions
Have UK businesses actually been hit by deepfake fraud?
Yes — both confirmed publicly-disclosed incidents and a much larger volume of unreported cases. The 2024 Arup Hong Kong incident ($25m loss via deepfake video Teams call) was the most publicised, but Action Fraud and the NCSC have confirmed multiple UK voice-deepfake incidents in 2024–25, primarily targeting CFOs, finance directors and high-net-worth individuals. The conservative working assumption for 2026: any UK senior executive with a moderate public profile is deepfake-able, and the cost to attackers has dropped to low single-digit thousands of pounds for credible production.
How can staff tell AI-generated phishing from human-written phishing?
Realistically, you can't reliably distinguish AI-generated phishing from human-written phishing in 2026. Modern generative models produce grammatically perfect, contextually accurate, personally tailored content. AI-detection tools have low accuracy and high false-positive rates, especially on short business emails. Don't train staff to "spot AI writing" — instead, train them to verify the request itself: does it match the way this person normally communicates? Does the request follow the established process? Has the bank-details change been verbally verified on a known number? Process-based verification is the only defence that scales.
Should our executives reduce their public profile to limit deepfake risk?
No — the marketing and brand value of executive social presence usually outweighs the deepfake risk, and the audio/video for cloning is increasingly available from press appearances, conference talks and podcast guesting that you'd struggle to suppress anyway. The defensive answer is to assume the deepfake exists and design verification protocols accordingly. Specifically: pre-shared verbal challenge codes for any payment authorisation, two-channel verification for high-value approvals, and an explicit policy that no payment is ever authorised on a single video or voice call alone, regardless of how convincing.
Does cyber insurance cover deepfake fraud?
Most modern UK cyber policies cover social-engineering fraud including deepfake-enabled variants, but the wording matters. Watch for: explicit inclusion of voice and video impersonation (some policies still only cover email), realistic sub-limits (BEC sub-limits are usually £100k–£500k, well below the headline limit), exclusions where verification procedures weren't followed (becoming the most common rejection reason), and inclusion of attacks where the impersonation occurred via a third-party platform (Teams, Zoom, WhatsApp). Confirm in writing before relying on the cover.
What is the single most effective control against deepfake CEO fraud?
A pre-shared verbal challenge code rotated quarterly, exchanged on any high-value payment authorisation call — combined with a strict policy that no payment is ever authorised on a single video or voice call regardless of how convincing. The challenge code is never sent over email and is known only to senior executives and the finance team. If a deepfake CEO calls and can't produce the current code, the request is rejected and escalated. This single control defeats the Arup-style attack pattern entirely, and costs nothing.
How much does defending against AI-driven phishing cost?
Roughly £3–£6 per user per month on top of a standard cyber stack. The premium pays for AI-aware email security (Sublime, Material, IRONSCALES) replacing legacy signature-based filters, FIDO2 hardware keys for high-privilege users (one-off £30–£60 per key), and an updated awareness training subscription with deepfake content. Process changes (verbal challenge codes, two-channel verification) cost nothing but require executive sponsorship to enforce. Total uplift for most UK SMBs: £1,500–£5,000/year — a small fraction of an average BEC loss.
Want a UK deepfake-readiness assessment for your business? Request a free AI-phishing readiness review — we’ll cover your verification protocols, executive deepfake exposure, AI-aware email security stack and cyber-insurance social-engineering wording. See also our best UK cyber security companies guide.
