Major U.S. insurance companies are pulling back from artificial-intelligence exposure, signaling a significant shift in how the industry views the rapidly growing AI ecosystem. Firms including AIG, Great American Insurance Group, and WR Berkley have formally asked state regulators for permission to exclude AI-related liabilities from many of their commercial insurance policies. Their filings describe AI outputs as too unpredictable, opaque, and difficult to price, making traditional underwriting nearly impossible.
Insurers say the biggest issue is the “black-box” nature of modern AI models. Often, not even a system’s developers can explain why it generates a specific output, which makes it extremely hard to determine who is responsible when the technology causes harm. As AI adoption accelerates, especially of generative AI tools and large-scale agentic systems, insurers warn that the risk landscape is changing faster than traditional actuarial models can adapt.
At the same time, real-world losses tied to AI failures and misuse are piling up, offering early evidence that these risks are not theoretical. One major case involves Google, which faces a lawsuit from Minnesota-based Wolf River Electric seeking between $110 million and $210 million in damages. The dispute stems from Google’s “AI Overview” feature, which incorrectly claimed the company was being sued by the state attorney general for deceptive practices. Although the information was false, it was displayed as authoritative, demonstrating how AI hallucinations can create substantial legal exposure for businesses.
Another widely discussed example occurred in Canada, where Air Canada argued before British Columbia’s Civil Resolution Tribunal that it should not be held responsible for a bereavement discount its customer-service chatbot wrongly promised. The airline insisted that the bot was “a separate legal entity” responsible for its own actions. The tribunal firmly rejected that defense, ordering the airline to honor the discount and making clear that companies cannot delegate accountability to their automated systems.
The most dramatic recent incident involved advanced deepfake fraud targeting global engineering firm Arup. Criminals used AI-generated video and audio to convincingly impersonate senior executives during a live video conference, ultimately stealing $25 million. The event shocked the insurance market because it showed how convincing and scalable AI-enabled fraud has become — and how easily even sophisticated organizations can fall victim.
All of these examples have raised enormous concerns in the underwriting community. But what truly alarms insurers is not any single event — it is the potential for systemic, simultaneous mass losses triggered by one flawed or compromised AI model. An executive from global brokerage Aon described the risk bluntly:
“We can handle a $400 million loss to one company. What we can’t handle is an agentic AI mishap that triggers 10,000 losses at once.”
This fear of correlated losses has pushed insurers to adopt sweeping exclusions. WR Berkley has proposed some of the broadest measures yet, including exclusions that bar claims related to any actual or alleged use of AI, regardless of whether the model was company-owned, third-party, licensed, or embedded in software tools. Berkley also introduced what it calls an “absolute AI exclusion” across directors and officers (D&O) policies, errors and omissions (E&O) coverage, and fiduciary liability products.
The exclusion language is remarkably expansive. It aims to remove coverage for AI-generated content, failure to detect AI-created materials, poor oversight of AI systems, AI-driven operational errors, and even regulatory investigations involving AI technologies. For many businesses, this would eliminate coverage for some of the most common ways AI is currently used — including customer service chatbots, automated decision systems, generative content tools, and algorithmic compliance systems.
Other insurers are taking a more moderate approach, offering limited AI coverage with strict caps and caveats. QBE, for instance, has extended coverage for fines issued under the EU AI Act, but at no more than 2.5% of total policy limits. Chubb has reportedly agreed to insure certain AI-related risks, but excludes losses that occur across multiple clients at once — essentially carving out exactly the kind of large-scale systemic failures that insurers fear most.
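To put that cap in perspective, consider a hypothetical policy with a $10 million aggregate limit: 2.5% works out to at most $250,000 of coverage for EU AI Act fines, while the Act itself authorizes penalties of up to €35 million or 7% of global annual turnover for the most serious violations.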
Meanwhile, Verisk — one of the largest creators of standardized policy forms in the U.S. insurance market — plans to introduce new general liability exclusions for generative AI starting in January 2026. Because insurance carriers nationwide often adopt Verisk’s templates, this change could rapidly shape the entire market and make AI exclusions mainstream within just a few years.
Legal experts say the industry’s retreat could trigger more litigation, as companies try to argue that their losses were not truly AI-related or were caused by human oversight failures instead. According to Aaron Le Marquer, a leading insurance disputes lawyer, insurers may resist paying AI-related claims until a major systemic event forces the courts to determine whether such technology-driven incidents were ever meant to be covered in the first place.
He warns that a large-scale AI failure — similar in scale to a major cyberattack or a financial-system shock — may be required before insurers and regulators are compelled to clearly define how AI losses should be handled. Until then, there may be a growing mismatch between how businesses use AI and how insurers view related liabilities.
For companies building or deploying AI, these developments carry serious implications. Many businesses buy D&O, E&O, cyber, and general liability insurance assuming they are protected against emerging technology risks. But if policies increasingly exclude AI-driven events, companies may discover they are far more exposed than they realized.
This could force organizations to strengthen their AI governance programs, implement more rigorous audits, and deploy monitoring systems to track how AI models behave over time. Some firms may need to negotiate custom endorsements, seek specialty AI liability coverage, or rethink how they roll out new automated technologies.
The insurance industry’s shift reflects a broader reality: AI has introduced a category of risk that is both unpredictable and globally connected. A single widely used model could malfunction, be compromised, or generate harmful content that affects thousands of companies at once. For insurers, whose business depends on being able to measure and distribute risk, this is a scenario they are not yet prepared to absorb.
The latest regulatory filings mark one of the clearest signs yet that insurers believe AI could reshape corporate liability in ways the industry has not seen in decades. As AI adoption surges, the market is now entering a pivotal period in which companies, policymakers, and insurers must determine who is responsible when AI goes wrong — and who pays for the damage.