Skip to main content

Webcast

Two years after ChatGPT: The evolving world of generative AI, risk, and insurance

It is important to understand how generative AI can amplify existing risks, as well as the related shifting legal, regulatory, and insurance implications.

Since the 2022 launch of ChatGPT and other generative artificial intelligence (AI) technologies, the AI landscape has continued to develop, with new AI variations, including multimodal, agentic, and humanoid technologies.

Although often conflated with traditional AI, generative AI is just one subcategory of AI that primarily employs deep learning models to generate new content across various domains.

As organizations seek to manage the risks associated with generative AI, it is important to understand the ways that it can amplify existing risks, as well as the shifting legal, regulatory, and insurance implications related to its use. Some best practices and lessons have emerged in the past few years to help organizations navigate this formative phase.

Not all AI is the same, nor is it consistently defined 

AI refers to a diverse field of technologies that have existed since the 1950s. AI is broadly defined as computer systems that can simulate human intelligence, but there is variation in how these systems work and are used. For example:

  • Predictive AI uses machine learning and statistical techniques to analyze historical and real-time data to forecast future outcomes, events, or behaviors.  
  • Generative AI uses advanced deep learning techniques to create new, original content across domains, including data analysis and synthesis, content creation, and scenario simulation.
  • Agentic AI executes tasks and make decisions autonomously, without fixed rules or predefined outcomes, such as AI personal assistants or certain automated business operations.
  • Humanoid AI interacts with and autonomously adapts to the physical world, or has a physical entity, such as assistive robots, healthcare robotics, or industrial automation.

Across these various subcategories of AI, there are some similarities but also critical differences in the kind of risk management and transfer solutions required to mitigate corresponding risks. Another important consideration is who is using which kind of AI and for what purpose. For example, the risks can differ for an AI developer versus an end user.

This article will focus on generative AI specifically, as it is a relatively new form of AI that organizations are quickly adopting at scale. Agentic AI and humanoid AI are even newer forms of AI, for which the rate and scale of enterprise adoption remains to be seen.

Generative AI as a risk amplifier  

Marsh’s extensive analyses to date suggest that the world has yet to observe a completely new category of risk that transpires from generative AI. Rather, generative AI may amplify existing risks, including cyberattacks, data privacy, intellectual property (IP), and the spread of misinformation.

Some of the early manifestations of generative AI risks include:

  • Hyper-realistic deepfakes that lead to wire transfer fraud
  • Customer privacy infringement in training generative AI systems with customer data, without appropriate consent
  • Data leakage as end users inadvertently transmit sensitive data in prompts
  • Inadvertent IP infringement in utilizing generative AI tools that may produce content that allegedly infringes on third-party IP rights
  • Chatbots that hallucinate, or yield erroneous outputs, providing incorrect information to customers about various products and services

Nascent legal and regulatory landscapes

As the AI landscape develops, so too does the legal and regulatory environment, including emerging questions around IP, unfair competition, data privacy, and libel, in some cases with limited precedent. For example, one of the most common issues is whether a generative AI model trained on copyrighted material constitutes infringement or unfair use. In the US, various court rulings have considered this question, with some dismissed, others upheld, and many pending. The US Copyright Office’s Copyright and Artificial Intelligence Report offers a helpful, early framework for understanding the emerging IP landscape in the US and beyond. It’s important to recognize that there are also relevant non-AI specific laws and regulations that continue to apply in the context of AI, including data privacy regulations.

Globally, some high-profile cases have brought the issue further into focus, such as France’s Autorité de la concurrence fining Google €250M for using news publishers’ content to train a generative AI model. Looking ahead, more cases like this one are expected to add to the legal conversation around copyright and IP.

Meanwhile, regulators worldwide are beginning to distinguish between AI and generative AI; a positive trend that acknowledges both the similarities and critical differences in corresponding risk mitigation. Mitigating traditional AI risks often includes a focus on model explainability and repeatability of outputs; this is technically challenging in the context of generative AI, for which risk mitigation shifts toward human oversight and iterative testing like red teaming.

There is also ongoing tension between the frameworks for addressing AI-related risks. More principles-based frameworks emphasize broad ethical guidelines and flexibility, while more rules-based frameworks, like the EU AI Act, establish specific, detailed regulations that must be followed, providing clearer compliance requirements. This distinction highlights the challenge of balancing regulatory oversight with the need for technological advancement.

Leaders need to keep a close eye on changing laws and regulations to avoid unintentional missteps in an environment that is changing in real time.

Underwriting implications

For any organization using generative AI or developing generative AI models, key insurance considerations include: How are we using it, and does our existing coverage potentially leave us exposed? For example, do we need to change limits and retentions on some lines, while leaving others untouched?

From a coverage perspective, many existing insurance products including casualty, media, cyber, and first-party insurance products, among others, offer cover for generative AI-related events. For example, a bodily injury event is still a bodily injury event regardless of whether a new technology was involved. That said, some limited policies and endorsements specific to generative AI are emerging, as well as more AI-specific underwriting questions, which suggest that having robust AI governance in place will become increasingly important, in the US and globally.

On the claims side, there have been few generative AI-specific claims to date. That said, some isolated incidents have gained attention, including one involving deepfake technology on a video call used to defraud a multinational firm of US$25 million.

Avoiding silent AI

Since January 2020, the proliferation of cyber exclusions in insurance has created significant protection gaps — known as silent cyber — by excluding some losses triggered by technological events. To avoid a similar situation with silent AI, organizations should recognize that generative AI can amplify risks and introduce new complexities within existing insurance lines.

It is important to confirm that technology triggers do not remove coverage intended by the policy, like removing IP coverage under an IP policy simply because AI or generative AI is a part of the causal link to loss. Evolving existing insurance contracts before creating new ones — and avoiding complexity in language — can help insureds and insurers align on coverage clarity and contract certainty.

Three categories of controls

To begin mitigating the risks associated with generative AI, organizations need to think comprehensively about how they develop, implement, and use this technology. The following three categories offer a helpful starting point for implementing controls:

  • Technical: The design, implementation, and operational aspects of generative AI technologies, including development standards, security measures, monitoring and evaluation, and data governance.
  • Process: Overarching governance framework and processes that clarify who is accountable for what and when. This includes acceptable use policies, risk assessment protocols, compliance and regulatory processes, as well as documentation and reporting.
  • People: The human factors related to successful integration of generative AI technologies, including stakeholder engagement, feedback mechanisms, as well as training and education around how to use — and how not to use — generative AI tools for work purposes.

By implementing controls across these three categories, organizations can more comprehensively mitigate and manage the risks associated with generative AI, ultimately leading to more effective and sustainable use of the technology to achieve business outcomes.

Live webcast Q&A

  • Do all large language models use copyrighted training data as part of their foundation? How do you expect generative AI to be affected if the use of copyrighted training data becomes illegal?

There is considerable regulatory debate surrounding the use of copyrighted training data in generative AI. In theory, it is possible to develop generative AI models without using copyrighted data, or by obtaining the necessary licenses. If the use of copyrighted training data without appropriate licenses were to be deemed illegal, we could see various fines and penalties imposed on those who violate these regulations. However, it's important to note that this technology is here to stay, and these types of questions are ones we should be closely monitoring as laws and regulations may change over time.

  • Training generative AI models requires significant amounts of electricity and water resources to cool data centers, which can place a burden on local utilities and communities. How should directors and officers weigh the risks as we observe a rise in litigation against companies for not adequately addressing climate-related concerns?

There are numerous climate-related risks that accompany generative AI, as there are with many new technologies. Senior leaders should be providing appropriate oversight by actively discussing these issues and integrating them into their decision-making process. This will not only help mitigate potential management liability risks but also align the organization with growing expectations for corporate responsibility in addressing climate concerns. Generative AI is here to stay, as are the environmental risks associated with it, so the sooner these discussions happen, the better.

Three opportunities to mitigate generative AI risks

As generative AI continues to present new opportunities for organizations, leaders must be attentive to its accompanying risks. Although there is still much to learn about generative AI, following are three steps organizations can take today to strengthen their risk mitigation efforts.

  1. Understand the difference between generative AI and other forms of AI. Organizations should consider educating themselves on the distinctions between generative and other forms of AI to better understand how they can be used effectively to achieve business outcomes. Additionally, senior leaders and risk managers should carefully consider how risk mitigation efforts should look depending on what kind of technology is being used and for what purpose.
  2. Develop centralized, multi-stakeholder generative AI governance frameworks. Creating and implementing robust AI governance requires that all relevant stakeholders across technical, human resources, legal/compliance, and business functions are aligned on ethical standards and operational guidelines. A cross-functional, collaborative approach fosters accountability and transparency in AI deployment, from senior leaders down to every employee using the technology.
  3. Evaluate the impact of evolving tech, insurance, and legal environments. Organizations should remain attentive to the shifting technological, insurance, and regulatory environments that will inform the development, implementation, and use of generative AI. Continuous, proactive evaluation is necessary given the evolving landscape.

Speaker

Jaymin Kim

Jaymin Kim

Managing Director, Emerging Technologies, Global Cyber Insurance Center

  • United States

Speak with a Marsh representative

To learn more about the risks and opportunities associated with generative AI, please fill out the form below.