Debunking Generative AI myth #1: Who owns GenAI risks?

Robust cybersecurity controls are necessary, but not sufficient to address generative AI risks.

Organisations of all sizes, across virtually every industry, are exploring how generative AI technology can help them achieve business objectives, including realising operational efficiencies, increasing client satisfaction, and developing new products and services. To sustainably capitalise on generative AI’s potential upside, companies must be aware of, and prepare for, its potential downsides.

At Marsh, we help organisations across industries understand, measure, manage, and respond to generative AI risks. In doing so, we have helped risk leaders and senior executives address three common myths.

In this article, the first in a three-part mini-series, we’ll explore:

Myth 1: Generative AI is an emerging technology; therefore, corresponding risks should be owned by the Chief Information Security Officer’s (CISO) office.

Robust cybersecurity controls are necessary, but not sufficient, to address generative AI risks. Organisations should consider three sets of controls:

  • Technical controls refer to cybersecurity controls, some of which may need to be updated or replaced to effectively mitigate generative AI risks.
  • Process controls refer to governance policies and procedures, including board-level governance and senior management oversight over generative AI use cases and risk management.
  • People controls refer to education. Employees who access generative AI tools need to understand how to use them responsibly and how to avoid falling victim to AI-enabled social engineering attacks. In some cases, specialised reskilling and upskilling may be necessary.

All three sets of controls, not only technical ones, are needed, and they require coordination and iteration to build resilience against generative AI risks. No matter how robust an organisation’s technical controls are, they will not prevent a colleague who lacks appropriate education from unwittingly entering proprietary or sensitive company data into a publicly available generative AI model, whether working on site or at home.

Likewise, strong technical controls will not matter if there is no governance structure in place at the board and senior management level to define the organisation’s objectives and its acceptable use policy for deploying generative AI. For example, are colleagues permitted to use certain generative AI tools, such as the recently released DeepSeek, for work purposes?

Such governance frameworks require centralised, multi-stakeholder leadership engagement spanning not only the CISO, but also the leaders of HR, legal/compliance, relevant businesses, and risk management (see Table 1). The risk leader should play a critical role in helping functional leaders understand how their perspectives fit into the broader picture. For example, they can help the CISO’s office coordinate with HR to ensure that appropriate education is provided to colleagues/teams that will use new generative AI tools as they are deployed across the enterprise.

Table 1: Generative AI governance requires centralised, multi-stakeholder leadership engagement*

Leadership function | Role
Board and/or board-designated subcommittee
  • Oversees generative AI strategy alignment with enterprise goals.
  • Oversees process controls, complemented by technical and people controls.
CISO/CTO
  • Oversees technical controls, complemented by process and people controls.
  • Oversees vendor procurement/due diligence process from a technical perspective.
Chief privacy officer/chief data officer
  • Oversees data governance as pertains to generative AI adoption and use.
CHRO
  • Oversees people controls, complemented by process and technical controls.
Legal/compliance
  • Navigates the emerging legal and regulatory environment for the organisation’s adoption and use of generative AI tools as well as for vendor procurement.
Business leads
  • Directs the responsible adoption/use of generative AI tools among colleagues.
  • Provides an iterative feedback loop regarding how colleagues are using generative AI tools and what is working well versus what is not.
Risk leader
  • Guides generative AI governance stakeholders on how their respective roles need to complement one another in achieving the organisation’s business and risk objectives.
  • Oversees vendor procurement/due diligence process from a liability perspective.

*Note: Not every organisation will have all of these functions.

Like most risks in today’s complex business environment, management of generative AI exposures should not be relegated to a single department. Developing and maintaining generative AI risk resilience requires cross-enterprise planning and vigilance.

To learn more about how a Marsh specialist can help your company navigate generative AI and its risks and opportunities, please contact us here.

Next up: Myth #2 explores generative AI’s pertinent risks

Our people

Jaymin Kim

Managing Director, Emerging Technologies, Global Cyber Insurance Center

  • United States

Greg Eskins

Global Cyber Product Leader and Head, Global Cyber Insurance Center

  • United States
