

Debunking Generative AI myth #3: GenAI insurance issues

Generative AI isn't a completely new risk; instead, it extends and amplifies existing, familiar risks.

Organisations of all sizes, across virtually every industry, are exploring how to harness generative AI technology to achieve business objectives, including realising operational efficiencies, increasing client satisfaction, and developing new products and services. To sustainably capitalise on generative AI’s potential upside, companies must be aware of and prepare for potential downsides.

At Marsh, we help organisations across industries understand, measure, and manage generative AI risks. In doing so, we have helped risk leaders and senior executives address three common myths. First, we looked at who in your company “owns” generative AI risk. Next, we considered generative AI risks beyond cyber and technology exposures.

In this article, the third in a three-part mini-series, we’ll explore:

Myth 3: Generative AI is an emerging technology, and therefore requires new, standalone AI insurance policies.

There is a tendency to think that new technologies bring novel risks — and sometimes that is the case; for example, the internet gave rise to a new class of cyber risks corresponding to the rise of e-commerce. But this has not been the case with generative AI, so far.

Based on Marsh’s extensive analyses to date, generative AI has not created a completely new class of risk. Rather, generative AI risks are extensions of existing, familiar risks, and can act as an amplifier of them. For example:

  • Data privacy and security have been concerns for decades and can be amplified by generative AI systems as they rely on massive volumes of training data, which may contain proprietary and sensitive information.
  • Misuse of technology to generate harmful content has long been associated with social media platforms, and can be amplified by generative AI systems, which can create and distribute new misinformation at far greater scale than any human could.
  • Potential intellectual property rights infringement from content generation is a familiar, historic risk in many industries, and can be amplified as evidenced by unprecedented questions around whether it is legal to train generative AI systems on IP-protected data without appropriate permission.
  • Technological errors have existed since the advent of technology, and can be amplified in the context of generative AI, which is inherently prone to hallucinations.

Some insurers have begun to introduce AI-specific endorsements and products in response to the rapid adoption of generative AI since late November 2022. Many of these new solutions refer to “AI” generally, yet AI is a broad, diverse field of technologies that have existed since the 1950s. Indeed, many organisations have used machine learning (ML)-based artificial intelligence within their business operations for many years.

For example, banks have used ML in risk modelling and fraud detection for more than a decade, and e-commerce platforms have used it for personalised recommendations and supply chain optimisation, also for decades. Should the more recent use of generative AI at such banks and retailers mean they are unable to benefit from coverage within their existing tech E&O policy? Is a data privacy event in association with generative AI—or other form of AI—considered to no longer be a data privacy event? Are bodily injuries in association with generative AI not still bodily injuries?

Generative AI exclusions and coverage

Marsh’s perspective is that the industry would be well served by not introducing exclusions that seek to remove the core coverage of the line of business to which they are attached if generative AI is part of the causal link to loss. There are numerous non-AI specific exclusions in place that continue to apply to generative AI exposures. For example:

  • Cyber exclusions have proliferated across virtually every line of insurance, and may exclude cyber events as triggers for loss, or the use of a computer “as a means for inflicting harm.”
  • Access and disclosure exclusions in general liability policies may restrict or remove coverage for personal and advertising injury, including, for example, libel, slander, defamation, and privacy invasion.
  • Privacy regulation exclusions—including those citing the Biometric Information Privacy Act (BIPA), California Consumer Privacy Act (CCPA), and CAN-SPAM Act, as well as exclusions for tracking pixels—are common across many liability lines.
  • Professional services exclusions in certain general liability policies seek to limit or exclude bodily injury, property damage, advertising, or personal injury claims arising out of a “professional service,” which is often undefined.
  • Other exclusions such as those pertaining to government actions, intentional fraud, or deception will continue to apply across many lines.

These existing exclusions continue to apply in the context of generative AI. Insurers should weigh them, and articulate what additional information is needed to underwrite, before introducing new exclusionary language.

The applicable coverage that may respond to generative AI risks ought to continue, as always, to depend on the facts of each particular claim or potential claim and applicable policy. This should include, but not be limited to, the impact of a loss, the type of damages being sought, and the parties involved in the loss — subject to applicable exclusions.

The insurance industry has asked if there is a need to create new insurance policies in reaction to almost every new wave of technology, including cloud, autonomous vehicles, and non-fungible tokens. Rather than starting with new products, Marsh believes the insurance industry should start with open-ended questions, such as:

  1. What do we know vs. not yet know about the impact of this technology on respective lines of insurance?
  2. How can we quantify corresponding risks in the absence of historical loss data?
  3. Do we need to update our risk assessment and pricing methodologies?

Generative AI risks may, over time, need to be treated separately. We appreciate that limited loss data can complicate the ability to precisely underwrite and price certain risks involving this relatively new technology. That said, because generative AI, as it stands now, generally presents nuances within existing and familiar lines of insurance, any amplification of corresponding risks should be underwritten and priced within those lines, where possible.

This is not a situation unique to generative AI. The insurance industry has confronted such challenges many times and will do so with every new technological development. So far, the insurance industry has consistently risen to the challenge of helping organisations transfer risks and increase resilience with new technologies, from the internet and the cloud to blockchain and digital assets.

Conclusion

Generative AI is the latest, but will not be the last, new technology to pose risk and insurance challenges. As organisations adopt generative AI, it’s important for their leaders to understand not only the potential benefits, but also the possible risks and to stay cognisant of the misconceptions—the myths—that may arise around them.

And as the technological landscape continues to advance, new classes of risk may arise. Generative AI technologies are not surfacing in a vacuum — in parallel, we are seeing advancements in numerous other frontier technologies, including agentic AI, humanoid AI, quantum computing, and human-brain interface technologies, among others. Inevitably, new technologies will continue to emerge and may sometimes pose new risks. Along with the evolving technological landscape, the insurance industry will continue to play a pivotal, leading role in helping organisations transfer risk and build resilience related to generative AI and other emerging technologies, developing new policies as needed to address material coverage gaps for our clients.

To learn more about how a Marsh specialist can help your company navigate generative AI and its risks and opportunities, please contact us here.

Our people

Jaymin Kim

Managing Director, Emerging Technologies, Global Cyber Insurance Center

  • United States

Greg Eskins

Global Cyber Product Leader and Head, Global Cyber Insurance Center

  • United States
