
Deepfakes Are Making Cyber Scams More Difficult to Detect

Alok OjhaAugust 13, 2025

Seeing was believing.

But now, fraudsters are turning to deepfake technology to impersonate the C-suite, job applicants, and even world leaders. With the help of artificial intelligence (AI), cyber criminals are able to manipulate video and audio recordings of someone you trust and, in turn, profit or steal sensitive information.

As a whole, we’re not yet very good at telling what’s real from what’s fake. A study published in the Journal of Cybersecurity found that participants could differentiate between AI-generated and human faces with only 62% accuracy. To make matters worse, the use of synthetic voice increased 173% from Q1 to Q4 of 2024.

Despite these troubling statistics, there’s an underlying silver lining. While deepfakes introduce a need for increased scrutiny, we can apply everything we already know about social engineering to combat the rise of AI-enhanced technology.

How deepfakes are being used for fraud

Impersonating the C-suite

Several high-profile deepfake scams have featured fraudsters impersonating C-suite members through phone calls, voicemails, and even video conferences. Threat actors have long relied on requesting funds under the guise of the CEO or CFO as a tactic to exploit a victim’s trust and convey urgency, but they have traditionally been limited to phishing emails and texts. 

When that same fraudulent request comes from a voice or face you know, it can be a lot harder to trust your gut feeling that it's a scam. Last year, a multinational finance firm sent $25 million to threat actors after a successful social engineering attack, featuring multiple deepfakes. It started with a phishing email. The employee, although initially suspicious, agreed to the conference call where they saw and heard from familiar (deepfaked) colleagues, including the CFO. 

By creating a security-forward culture, businesses can empower employees to ask for verification, slow down, and trust their gut in a moment of doubt. If the victim had either reported the original phishing email or confirmed the transaction with the CFO (over another trusted communication channel), the suspicious activity could have been flagged sooner and possibly prevented.


Calls from IT 

Ransomware actors, like the hacker collective Scattered Spider, are turning to advanced social engineering tactics as a way to gain access to businesses’ internal systems. One approach starts with impersonating IT support staff, giving threat actors another avenue to exploit a victim’s trust. To make fraudulent behavior even harder to catch, deepfakes can be used to impersonate a coworker’s voice.

In 2023, a software development company disclosed that 27 cloud customers had been compromised following a social engineering attack that included SMS-based phishing and deepfake technology.

After receiving a text message purportedly from the IT team, an employee clicked a malicious link and submitted their credentials. Threat actors then used a deepfake of an actual IT employee at the software company to ask the victim for their authentication code, gaining access to internal systems.

Businesses should take steps to further secure their help desks and call centers against sophisticated phishing attacks. To prevent unauthorized access, they can implement multi-step identity verification for account changes and require callers to confirm details like employee ID or security questions.
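A help desk workflow along these lines reduces to one rule: no single factor is sufficient, and every check must pass before an account change goes through. The sketch below is purely illustrative; all names (`may_change_account`, the `Caller` fields, and so on) are hypothetical, not any real system’s API:

```python
from dataclasses import dataclass

@dataclass
class Caller:
    """Claims made by someone phoning the help desk (hypothetical model)."""
    employee_id: str
    security_answer: str
    callback_confirmed: bool  # verified by calling back the number on file

def may_change_account(caller: Caller, directory: dict[str, str]) -> bool:
    """Allow an account change only if every verification step passes."""
    checks = [
        caller.employee_id in directory,                              # known employee ID
        directory.get(caller.employee_id) == caller.security_answer,  # security question
        caller.callback_confirmed,                                    # out-of-band callback
    ]
    return all(checks)

directory = {"E1042": "blue heron"}
print(may_change_account(Caller("E1042", "blue heron", True), directory))   # True
print(may_change_account(Caller("E1042", "blue heron", False), directory))  # False
```

The out-of-band callback step is what defeats a deepfaked voice: even a convincing impersonation fails if the call back to the number on file reaches the real employee.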

Fake job applicants 

Threat actors are now leveraging the hiring process as a way to potentially access sensitive information or deploy malware.

With deepfakes, impostor candidates are interviewing (and even getting hired) for remote jobs. In a survey of 1,000 hiring managers across the United States, 17% said they had already encountered candidates using deepfake technology to alter their video interviews.

Last year, a cyber firm hired a software engineer for its internal IT team who turned out to be a malicious actor from North Korea. The “employee” passed several rounds of interviews, email reference checks, and a background check. Once they received their workstation, they immediately downloaded malware, which the business’s endpoint detection and response (EDR) tool flagged.

With the help of deepfake technology, threat actors aren’t just targeting your employees. They are trying to become one.

Hiring teams now need to watch for potential fraudulent applicants, necessitating more stringent background checks, properly vetted references, and strengthened authentication processes. 

New technology, same tried-and-true tactics

Businesses are already reckoning with an uptick in traditional social engineering, like phishing. In the last 12 months, malicious emails increased by 856%, according to SlashNext. Fueled by large language models, like ChatGPT, threat actors are able to automate the process of crafting emails, targeting victims, and collecting information. 

Deepfakes only amplify the problem, especially as fraudsters turn to phone and video calls to legitimize their presence in the inbox. As seen in the examples above, traditional social engineering tactics aren’t going anywhere. Deepfakes are just pushing victims to question everything they see and hear, while threat actors use the same tried-and-true behaviors to trick us:

Exploiting trust

Both the employee at the finance firm and the employee at the software company were skeptical of the initial phishing attempt. It wasn’t until they heard or saw someone they “knew” that they gave threat actors what they wanted. 

Applying scrutiny, like “Should I call the CFO separately to confirm this is legitimate?” or “Would I ever need to give an authentication code over the phone?” can help prevent attacks from escalating.

Conveying urgency

Threat actors often mention consequences if a certain task isn’t completed in time, like an account closing if an invoice goes unpaid. By urging victims to act fast, whether in an email or through a (deepfake) phone call, they hope that the ticking clock will impede their target’s ability to validate the message. 

But if employees take some extra time, they may catch signs of fraudulent activity before it’s too late. For example, the cyber firm said it should have conducted reference checks over the phone instead of through email as an additional layer of protection.

Capitalizing on curiosity

Scammers rely on exciting headlines to tempt victims to click malicious links. Now, with deepfakes, they can use celebrities and politicians to spread false information or promote too-good-to-be-true opportunities, like an Elon Musk investment scam.

Protect your business from deepfake scams

1. Learn to spot deepfakes

While there’s not one definitive way to determine that a video is a deepfake, there are several tell-tale signs that a video may be manipulated. When in doubt, ask the following:

  • Are they blinking too much? Are they not blinking at all?

  • Do their lip movements look natural?

  • Are shadows appearing where you would expect?

Also, rely on your gut and what you know about social engineering. Awareness is the best defense against social engineering attacks, especially as AI-enabled phishing and deepfake scams raise the stakes. 


Security awareness training teaches employees the common red flags associated with scams, as well as the best way to report and escalate suspicious behavior within your organization. Training programs can reduce the risk posed by employee mistakes by 83%. 

2. Implement access controls to minimize damage 

To help address the risk of compromised credentials through social engineering attacks, organizations should implement multi-factor authentication (MFA) on email, cloud storage, and other vital technologies. MFA is a process that requires two or more forms of verification to access a system, application, or account. 

Requiring additional authentication factors, like “something you have” (a smartphone) or “something you are” (a fingerprint) in addition to “something you know” (a password), means attackers can’t achieve their goals with compromised credentials alone.

Beyond this, businesses should consider turning to FIDO2, which uses biometric factors or hardware keys to tie authentication to the user's device. While threat actors can potentially trick employees into sharing an authentication code over the phone, biometrics are much harder to bypass.

3. Make sure you’re adequately covered

When all else fails, you’ll want to make sure your cyber insurance coverage protects your business from losses that result from deepfake technology. Not all cyber insurance coverage is built to address the escalating risk of AI-fueled social engineering. 


Some insurance providers are also looking to exclude deepfakes and other AI-related incidents. Others may not explicitly cover deepfake incidents, landing businesses in a coverage “gray area” if employees fall victim to a successful scam. 

Deepfakes aren’t going away anytime soon. Businesses should look for clear policy language relating to AI risks and adequate limits. 

Can your employees distinguish between real and fake?

Challenge your employees to spot cyber scams with the Deepfake Spotting Exercise, one of the 200+ short videos and real-world simulations offered with Coalition Security Awareness Training. Sign up for a free trial now.*




This blog post is designed to provide general information on the topic presented and is not intended to construe or render legal or other professional services of any kind. The views and opinions expressed as part of this blog post do not necessarily state or reflect those of Coalition. The reader is cautioned to consult independent professional advisers and formulate independent conclusions and opinions regarding the subject matter discussed herein. Coalition is not responsible for the accuracy or completeness of the contents herein and expressly disclaims any responsibility or liability based on any legal theory or in any form or amount, based upon, arising from or in connection with, for the reader’s application of any of the contents herein to any analysis or other matter, nor do the contents herein guarantee and should not be construed to guarantee any particular results or outcome. Any action you take upon the information contained herein is strictly at your own risk. Coalition and its affiliates will not be liable for any losses and damages in connection with our use or reliance upon the information. The blog post may include links to other third-party websites. These links are provided as a convenience only.
*Security Awareness Training is provided by Coalition Incident Response Inc. dba Coalition Security, an affiliate of Coalition, Inc. New customers may access training for 15-days for free. Customers who subscribe to training for a 12-month period will be billed within the first 30 days from enrollment. Customers can opt to non-renew before the end of the 12-month period. No refunds permitted. Limitations apply. See Terms for more details.

Tags: Phishing, Cyber Threats, Coverage
