
Jaymin Kim
Managing Director, Emerging Technologies, Global Cyber Insurance Center
United States
Organisations of all sizes, across virtually every industry, are exploring how to harness generative AI to achieve business objectives, including realising operational efficiencies, increasing client satisfaction, and developing new products and services. To sustainably capitalise on generative AI’s potential upside, companies must be aware of, and prepare for, its potential downsides.
At Marsh, we help organisations across industries understand, measure, manage, and respond to generative AI risks. In doing so, we have helped risk leaders and senior executives address three common myths.
In this article, the first in a three-part mini-series, we explore the first of those myths: that robust cybersecurity controls alone are enough to address generative AI risks.
Robust cybersecurity controls are necessary, but not sufficient, to address generative AI risks. Organisations should look at three sets of controls: technical controls, governance controls, and people-focused controls such as colleague education and training.
All three sets of controls—not only technical—are needed and require coordination and iteration to build resilience against generative AI risks. No matter how robust an organisation’s technical controls are, they will not prevent a colleague from unwittingly, perhaps due to a lack of education, entering proprietary or sensitive company data into a publicly available generative AI model, whether they are working on site or at home.
Strong technical controls will not matter if there is no governance structure in place at the board and senior management level. Such a structure is needed to define the organisation’s objectives and its acceptable use policy for deploying generative AI. For example, are colleagues permitted to use certain generative AI tools, such as the recently released DeepSeek, in the course of their work?
Such governance frameworks require centralised, multi-stakeholder leadership engagement spanning not only the CISO, but also the leaders of HR, legal/compliance, relevant businesses, and risk management (see Table 1). The risk leader should play a critical role in helping functional leaders understand how their perspectives fit into the broader picture. For example, they can help the CISO’s office coordinate with HR to ensure that appropriate education is provided to colleagues/teams that will use new generative AI tools as they are deployed across the enterprise.
| Leadership functions | Role |
|---|---|
| Board and/or board-designated subcommittee | |
| CISO/CTO | |
| Chief privacy officer/chief data officer | |
| CHRO | |
| Legal/compliance | |
| Business leads | |
| Risk leader | |
*Note: Not every organisation will have all of these functions
Like most risks in today’s complex business environment, management of generative AI exposures should not be relegated to a single department. Developing and maintaining generative AI risk resilience requires cross-enterprise planning and vigilance.
To learn more about how a Marsh specialist can help your company navigate generative AI and its risks and opportunities, please contact us here.