Debunking Generative AI myth #2: Its most pertinent risks

Organizations of all sizes, across virtually every industry, are exploring how to harness generative AI technology to achieve business objectives, including realizing operational efficiencies, increasing client satisfaction, and developing new products and services. To sustainably capitalize on generative AI’s potential upside, companies must be aware of and prepare for potential downsides.

At Marsh, we help organizations across industries understand, measure, manage, and respond to generative AI risks. In doing so, we have helped risk leaders and senior executives move past three common myths. In the first article of this mini-series, we looked at who in your company “owns” generative AI risk.

In this article, the second in a three-part mini-series, we’ll explore:

Myth 2: Generative AI is an emerging technology, and therefore cyber and technological risks are the most pertinent to consider.

While generative AI is an emerging technology, it is already being used in everyday business practices across almost all functions, both back-end and client-facing. Uses range from marketing teams generating creative content, to software engineering teams expediting code, to sales teams providing hyper-personalized client recommendations, to legal teams expediting research and analysis.

This means that generative AI-related risks encompass—and span beyond—typical cyber and technology risks, and include:

  • Physical bodily injury/property damage resulting from product safety advice that generative AI provides to clients
  • Financial loss/personal and advertising injury, such as defamation, stemming from inaccurate or misleading AI-generated content in client settings
  • Inadvertently infringing on existing copyrights, patents, or trademarks in training generative AI models
  • Bias/discrimination allegations related to personalized experiences
  • Bias/discrimination allegations related to employment practices
  • AI system failure/cyberattack that leads to business interruption
  • Data/privacy breaches when using generative AI to generate personalized recommendations for clients
  • Wire transfer fraud resulting from AI-enabled hyper-realistic deepfakes

The relevant lines of insurance for both developers and end users of generative AI technology potentially span virtually all lines of commercial insurance: cyber, tech errors and omissions (E&O), media liability, directors and officers (D&O), employment practices liability, intellectual property, commercial general liability, product liability, and more.

Developers and users of generative AI technologies alike should assess whether their risk management program is sufficient to address relevant exposures. Recently, we have seen many organizations that were not previously materially exposed to technology liability consider tech E&O coverage as they launch new businesses enabled by generative AI, such as using data to provide clients with improved analytics and insights. Such new business can create E&O liability exposures when technological services and/or products are provided to clients for a fee.

To assess whether existing insurance programs are sufficient, and to build resiliency against generative AI risks, organizations can take the following steps (a simplified illustration follows the list):

  • Inventory all models in use across the enterprise, along with their corresponding high-level use cases.
  • Build risk scenarios that reflect what could realistically go wrong, informed by that inventory of models and use cases, and classify them along a spectrum from high frequency/low severity to low frequency/high severity.
  • Map risk controls against each prioritized risk scenario and conduct broader resilience planning.
  • Quantify the potential impact of these risk scenarios on the existing insurance program, assessing whether existing limits and retentions need to be adjusted.
  • Map key risks to a strategic insurance program across relevant lines.

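As a simplified illustration of how the inventory, scenario, and quantification steps might fit together, the sketch below uses hypothetical model names, scenarios, frequency and severity estimates, and retention and limit figures. It is not a Marsh tool and contains no real loss data; any real assessment would be built with broker and actuarial input.

```python
# Illustrative sketch only: all model names, scenarios, and dollar figures below
# are hypothetical placeholders, not real loss data or coverage terms.
from dataclasses import dataclass


@dataclass
class RiskScenario:
    description: str
    annual_frequency: float  # estimated events per year
    severity: float          # estimated loss per event, in USD

    @property
    def expected_annual_loss(self) -> float:
        return self.annual_frequency * self.severity


# Step 1: inventory models in use and their high-level use cases (hypothetical).
model_inventory = {
    "marketing-content-llm": "creative content generation",
    "sales-recommender-llm": "hyper-personalized client recommendations",
}

# Step 2: scenarios spanning high frequency/low severity and low frequency/high severity.
scenarios = [
    RiskScenario("Misleading AI-generated content reaches a client", 2.0, 50_000),
    RiskScenario("Deepfake-enabled wire transfer fraud", 0.1, 2_000_000),
]

# Steps 4-5: quantify each scenario against an existing (hypothetical) program
# with a per-event retention and limit, to see where adjustments may be needed.
retention = 100_000
limit = 1_000_000
for s in scenarios:
    retained = min(s.severity, retention)                   # loss the company keeps
    insured = min(max(s.severity - retention, 0.0), limit)  # loss transferred to insurers
    uninsured = max(s.severity - retention - limit, 0.0)    # loss above the program
    print(
        f"{s.description}: expected annual loss ${s.expected_annual_loss:,.0f}; "
        f"per event retained ${retained:,.0f}, insured ${insured:,.0f}, "
        f"uninsured ${uninsured:,.0f}"
    )
```

In practice, each prioritized scenario would be mapped to specific controls and policy terms across the relevant lines of coverage, rather than a single retention and limit.
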
Given its wide applicability across the enterprise, generative AI presents more than just cyber and technology risks. Developers and users of generative AI technologies should assess their existing risk mitigation and transfer strategies on an iterative basis, especially as generative AI continues to advance.

To learn more about how a Marsh specialist can help your company navigate generative AI and its risks and opportunities, please contact us.

Next up: Myth #3 looks at the broad insurance issues related to generative AI.

Our people

Jaymin Kim

Managing Director, Emerging Technologies, Global Cyber Insurance Center

  • United States

Greg Eskins

Global Cyber Product Leader and Head, Global Cyber Insurance Center

  • United States
