I’ve been thinking a lot about the risks of using AI. We all know the benefits. It can speed up our work and find new insights. But what happens when it goes wrong? A while back I looked at a case where Deloitte Australia had to refund a client. The suspicion was that an AI hallucinated sources in a report.
This isn’t just a hypothetical problem. It’s a real financial risk. And it’s a risk that the insurance industry is deciding it no longer wants to take on. This has massive implications for accounting and finance professionals managing enterprise risk.
Insurers Are Actively Excluding AI
There’s a fundamental shift happening right now. Major insurers are formally moving to exclude artificial intelligence from standard corporate policies. They are filing for new “Absolute AI Exclusions” in their Directors & Officers (D&O) and Errors & Omissions (E&O) policies.
This isn’t just a few small players. We’re talking about industry giants like:
- AIG
- WR Berkley
- Great American Insurance Group
An “Absolute AI Exclusion” is exactly what it sounds like. It’s broad language designed to eliminate coverage for any claim arising from the use of AI. This could be anything from an AI-powered marketing campaign that goes wrong to a major oversight failure. The insurers are signaling that the risks associated with AI are too unpredictable to cover.
Why Is AI Suddenly Uninsurable?
I dug into the reasons why insurers are backing away. It boils down to a few key problems that make it almost impossible to price the risk using traditional models.
- The Black Box Dilemma: We can see the data we put into a large language model. We can see the output it gives us. But we can’t draw a straight mathematical line to show how it reached its conclusion. There’s a level of randomness built in. For an insurer, that’s a nightmare. They can’t determine causality when something goes wrong.
- Systemic Risk Concentration: Thousands of companies are using the same foundational models from a handful of providers like OpenAI or Anthropic. A single flaw in ChatGPT could trigger a massive wave of claims across the entire economy at the same time. It’s similar to why insurers have pulled out of Florida. One big hurricane can wipe them out. One big AI flaw could do the same.
- A Legal and Regulatory Vacuum: AI is advancing much faster than the legal system. There is very little case law to define who is liable when an AI causes harm. Is it the developer? The company using it? This uncertainty makes it impossible for insurers to model future losses accurately.
The Risk is Now on Your Balance Sheet
This is where it gets critical for finance and accounting professionals. When an insurer excludes AI risk, the risk doesn’t just disappear. It moves from the insurer’s balance sheet directly onto your company’s.
Accounting Alert: ASC 450 Triggers
This shift triggers immediate issues under ASC 450 (formerly FAS 5) for contingent liabilities. Without insurance coverage, accountants must now ask two critical questions about the company’s use of AI:
- Is a financial loss from an AI failure probable?
- Can the amount of that loss be reasonably estimated?
Answering these questions is incredibly difficult. The same uncertainties that scare insurers now fall on the accounting department. How can you estimate a probable loss when there’s no legal precedent for AI-related lawsuits?
This isn’t just about AI used in products sold to customers. It could also apply to internal AI use. If your finance department uses AI for analysis and it leads to a material misstatement, that’s a real uninsured exposure. You may need to assess whether a contingent liability needs to be booked.
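The ASC 450 questions above boil down to a simple decision tree: probable and reasonably estimable losses are accrued, probable-but-unestimable or reasonably possible losses are disclosed, and remote losses require no action. Here’s a minimal sketch of that logic in Python (the likelihood categories come from the standard; the function and variable names are my own, purely illustrative):

```python
# Illustrative sketch of the ASC 450 loss-contingency decision tree.
# The "probable / reasonably possible / remote" categories come from
# the standard; the function name and return labels are hypothetical.

def asc450_treatment(likelihood: str, estimable: bool) -> str:
    """Return the accounting treatment for a loss contingency.

    likelihood: 'probable', 'reasonably_possible', or 'remote'
    estimable:  whether the loss amount can be reasonably estimated
    """
    if likelihood == "probable" and estimable:
        return "accrue"        # book the liability and disclose
    if likelihood in ("probable", "reasonably_possible"):
        return "disclose"      # footnote disclosure, no accrual
    if likelihood == "remote":
        return "no_action"
    raise ValueError(f"unknown likelihood: {likelihood}")

# Example: an uninsured AI failure judged probable, but with no legal
# precedent to estimate the loss, still triggers disclosure.
print(asc450_treatment("probable", estimable=False))  # → disclose
```

The hard part, of course, is not the tree itself but the judgment calls feeding into it, which is exactly where the lack of AI case law bites.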
The Path Forward
Navigating this new landscape requires a proactive approach. The risk is real and it has to be managed. This means finance, legal, and IT departments need to work together.
Here are the key steps:
- Identify and Assess: You need a clear inventory of all AI use across the organization. This includes third-party tools and internal applications. Each use case needs to be assessed for its potential financial and legal risks.
- Quantify and Disclose: This is the hard part. Finance teams need to start building models to quantify this new uninsured risk, and where it is material, disclose it so investors and stakeholders understand the exposure.
- Mitigate and Govern: Companies must create robust internal controls and governance frameworks for AI. This includes clear policies on how AI can be used, who is responsible for oversight, and how outputs are verified.
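For the “quantify” step, one common starting point is a simple expected-loss roll-up over the AI inventory: probability of failure times estimated impact, summed across use cases. The sketch below is purely illustrative — the use cases and numbers are invented, and a real model would draw its inputs from legal and risk assessments rather than point estimates:

```python
# Hypothetical sketch: aggregate uninsured AI exposure across an
# inventory of use cases as expected loss = probability x impact.
# All entries and figures below are invented for illustration.

ai_inventory = [
    # (use case, annual failure probability, estimated loss if it fails)
    ("customer-facing chatbot",        0.05, 2_000_000),
    ("AI-assisted financial analysis", 0.02, 5_000_000),
    ("marketing content generation",   0.10,   500_000),
]

expected_loss = sum(prob * impact for _, prob, impact in ai_inventory)
print(f"Aggregate expected uninsured loss: ${expected_loss:,.0f}")
# → Aggregate expected uninsured loss: $250,000
```

Even a rough roll-up like this gives finance a defensible starting number to refine, and forces the inventory step to actually happen.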
Key Takeaways
The insurance industry’s withdrawal from AI is a major turning point in corporate risk management. It’s no longer just a tech issue. It is now a core financial and accounting challenge.
- AI is becoming uninsurable. Major insurers are adding “Absolute AI Exclusions” to standard corporate policies.
- The risk is too unpredictable. The “black box” nature of AI, systemic risk concentration, and legal uncertainty make it impossible for insurers to price.
- The risk falls to the company. Without insurance the full financial liability for AI failures sits on the company’s balance sheet.
- This triggers ASC 450. Accountants must now assess the company’s AI usage for potential contingent liabilities.
- A proactive plan is essential. Companies must identify, quantify, and govern their AI risk to protect their financial health.
Want to earn CPE for this topic?
1. Compare Options: See how we stack up against others in our 2025 Flexible CPE Guide
2. Understand the Format: Read how Nano-Learning works for CPAs.
3. Check Your State: Ensure you are compliant with our State Requirements Guide.

