AI Advancements Are Reshaping Cyber Insurance Coverage

Forward-thinking businesses aren’t the only ones using artificial intelligence (AI) to work smarter and move faster.
Threat actors are turning to AI to enhance their social engineering tactics, like deploying convincing (automated) phishing emails or creating deepfakes that mimic the voices and faces of trusted colleagues. And as more businesses look to implement AI systems to improve their own productivity, threat actors are eager to poke for exploitable weaknesses in this new technology.
The Wild West of generative AI is here. So, as “bad guys” optimize attack methods, how can everyone else reduce their risk? One answer is forward-thinking insurance coverage that addresses sophisticated AI-powered attacks and enhanced cyber risks.
Below, we’ll examine why AI-related cyber incidents necessitate the evolution of cyber insurance policy language and how to determine if your coverage adequately meets today’s risks.
Social engineering is on the rise
Phishing emails have skyrocketed by 856% over the last several years with the help of large language models (LLMs), like ChatGPT.
Social engineering scams have been around since the dawn of the web, but tell-tale signs like poor grammar and formulaic messages (the infamous Nigerian prince) are on the way out in favor of AI-enhanced communications. Threat actors can now personalize messages quickly by using AI to scrape social media pages and corporate websites, tailoring information and tone to specific users.
And with AI, they can do so at scale. LLMs automate the entire process by crafting emails, identifying targets, and collecting information, ultimately cutting the cost of deploying scams by up to 95%.
Phishing emails have skyrocketed by 856% over the last several years with the help of large language models (LLMs), like ChatGPT.
Threat actors are also turning to deepfake technology to manipulate images, audio, and video recordings. Last year, an employee at a multinational finance firm sent $25 million to threat actors after “meeting” with the company’s supposed chief financial officer in a conference call. In another well-publicized attempted deepfake scam, threat actors impersonated the CEO of a large advertising group in a Microsoft Teams meeting, in order to try to solicit money and personal details from an agency leader.
Not all cyber insurance coverage is built to address the escalating risk of AI-fueled social engineering. Losses arising from deepfakes can land in a coverage “gray area” between cyber and crime insurance.
Cyber insurance doesn’t always include coverage for impersonation fraud, and with the rise of deepfakes, some insurance providers are moving to include explicit exclusions for these incidents. While enhancements in crime insurance coverage have occurred to cover social engineering losses, not all policies have broad “all-risk” language which could leave deepfakes as a potentially unprotected avenue of fraud.
AI chatbots are vulnerable to attacks
To find answers when browsing the web, 68% of people have turned to an AI chatbot. From retailers to hospitals, more and more businesses are implementing virtual assistants for lead generation, customer engagement, and 24/7 availability.
Most customer support chatbots operate with guidelines that keep provided outputs relevant. However, LLMs cannot reliably distinguish between malicious user input and system instructions.
Cleverly crafted prompts from an attacker can result in the chatbot revealing sensitive information not intended to be shared. For this reason, the Open Worldwide Application Security Project (OWASP) ranked prompt injection as the number one security AI risk in 2025.
Consider this: A hospital creates a customer service chatbot using AI. Patients send queries and the system accesses internal databases to answer them. But a threat actor sends a prompt injection that tricks the system into sharing sensitive patient health information. The hospital now has a security failure that likely requires a digital forensics investigation, legal counsel, and patient notification.
Without clear policy language, traditional cyber coverage may fall short if an AI model resulted in a security failure or privacy breach. And as a result of the above prompt injection, businesses would ideally also want to have both first-party and third-party coverage that addresses direct financial losses for both investigation costs and liability as a result of the breach.
Businesses need adequate protection against AI risk
The evolution of AI necessitates that cyber insurance adapts rapidly to address potential gaps in coverage. What should businesses look for in their existing policies to stay protected against AI risk?
Find explicit language on new threats
Insurance traditionally moves slow. But given the prevalence of AI today, many businesses are rightfully searching for explicit coverage and pushing insurance providers to act. Simultaneously, exclusions are being drafted as a knee-jerk reaction to losses associated with AI.
Businesses should consider their specific risk profile, AI usage, and other security controls to determine coverage needs:
Do they heavily rely on third-party AI systems?
Have they experienced business email compromise before?
Does their business have its own public-facing chatbot?
Depending on the answers to the above, direct policy language and how much coverage is provided can play an important role in deciding on the right risk mitigation options.
Implement security controls to reduce risk
Multi-factor authentication: By requiring a secondary authentication method to log in, businesses can add a secondary line of defense to prevent account compromise. In the era of AI-fueled attacks, FIDO-2, which uses biometric factors for authentication, is the gold standard when it comes to MFA.
Limit employee access: By assigning permissions based on role, businesses can reduce the potential impact of a compromised account following a phishing attack. Additionally, businesses should apply that same logic to LLMs. LLMs should only have access to data sources they need to perform necessary functions.
Security awareness training: Security awareness training can empower employees to identify phishing attempts and help businesses avoid costly cyber attacks. In fact, at least one source found that 80% of businesses said employee education reduced phishing susceptibility.
LLM proxy: Sending user data directly to a LLM without any safeguards can increase an organization’s risk for a data breach. An LLM proxy sits between a business’s application and the LLM provider (like OpenAI) and inspects each query to enforce security policies.
Prioritize hands-on cyber claims teams
If a business believes an employee may have clicked on a malicious link, speed matters. Yet, many businesses hesitate to report issues to their insurance provider in an attempt to investigate independently and avoid a claim.
The bright side: Many claims teams want to help businesses avoid losses, too.
If an employee fell for a deepfake video of the CEO requesting payment for an urgent project and sent $500,000 to a criminal-controlled bank account, it’s not too late to get the money back. Experienced cyber claims teams may be able to claw back the funds with the help of government agencies.
For example, in 2024, Coalition successfully put $31 million directly back in policyholders’ pockets through clawback efforts.
Cyber coverage built to address emerging risks
Given the current reality of digital risk, there has never been a greater need for forward-thinking cyber insurance. Coalition’s Active Cyber Policy addresses evolving digital threats with explicit and affirmative coverage:
Artificial Intelligence-Related Security Events: Including protection against deepfake-enabled fraud and AI-caused security failures.
SEC Cybersecurity Disclosure Requirements: Coverage for legal expenses related to materiality assessments and regulatory filings under new SEC rules.
Expanded Definition of Privacy Liability: Third-party privacy coverage includes violations of privacy law, extending protection beyond just violations of the policyholder's own privacy policy to address the risk of employees potentially sharing sensitive data with third-party LLMs.*
In addition to expanded protection, Coalition’s Active Cyber Policy offers advantages for security-conscious policyholders, like Vanishing Retention. By addressing new risks in policy language and rewarding policyholders for their quick action, Coalition is setting a new standard in cyber insurance.
INNOVATIVE COVERAGE. EXPANDED PROTECTION.
Meet the Next Generation of Active Insurance
Explore Coalition’s new Active Cyber Policy >