By Greg Benefield,
National Food, Beverage & Ag Segment Leader
12/10/2025 · 6 minute read
From persistent economic pressures, including tariffs and inflation, to supply chain disruptions and rising operational costs, retailers continue to face a complex and challenging risk landscape. In an environment where organizations are seeking to control costs, maximizing operational efficiencies and managing claims expenses have become critical priorities for maintaining profitability and financial resilience.
Retailers, along with their insurance carriers, third-party administrators (TPAs), and broker partners, are increasingly turning to artificial intelligence (AI) tools, including generative AI, to achieve operational efficiencies and manage claims costs more effectively. During Marsh’s recent North American Retail Roundtable, industry leaders shared how they are exploring AI to enhance safety processes, flag potentially problematic claims, and optimize their overall risk management strategies.
But while AI offers promising returns on investment, the industry faces several hurdles that can limit its impact or even introduce a new set of risks.
With lean risk teams and growing pressure to reduce expenses, many retailers are adopting AI-driven solutions to improve productivity and efficiency. Common applications include:
AI-powered cameras can monitor store floors and warehouses in real time, detecting potential safety challenges before incidents occur. For example, when a customer or employee approaches heavy machinery, such as a forklift, the system can alert store managers so that they can intervene as needed. Tracking these near misses can also help retailers address hazards before they result in safety incidents and general liability claims.
Beyond helping to identify hazards and develop targeted training to reduce workplace injuries, AI can analyze large volumes of claims data and generate concise summaries, enabling risk managers to quickly identify claims that may require immediate attention or prove difficult to close. For example, AI can rapidly summarize casualty claims, flagging those with the potential for high settlement costs, such as claims involving serious injuries or legal complexities. This insight allows risk teams to focus resources where they matter most.
AI’s ability to quickly crunch historical claims data can help uncover patterns that merit closer attention. For example, a large retailer may identify an increase in slip-and-fall incidents in stores in a particular region that, on further investigation, appear to be linked to a new floor cleaning solution. This information allows retailers to take proactive action to address a potential cause of claims.
AI tools can automatically compile medical reports, witness statements, and incident photos into comprehensive case files that can be used by both the retailer and a TPA to accelerate claims resolution.
Despite the multiple benefits of AI and its widespread use, many retailers have limited AI expertise within their risk teams. This is especially true among retailers with small, stretched risk teams lacking technical knowledge or coding skills. This gap makes it difficult to understand and critically evaluate AI outputs or spot algorithmic biases, which can expose retailers to legal and reputational risks. For instance, an unintentional bias in an AI hiring tool might screen out a certain group of candidates, potentially leading to employment practices liability claims.
Another common challenge is limited visibility into how AI is used across the organization. Risk managers often don’t know where and how different tools — from generative AI on personal devices to vendor-driven claims analyses — are being used, making governance and risk oversight difficult. Not only can this blind spot lead to missed opportunities for early claim intervention and cost control, but it can also increase the likelihood of costly errors or regulatory exposure.
There is also the danger of over-reliance on AI models. While many AI systems excel at processing data, they lack human judgment and contextual understanding. When risk managers place too much trust in AI-generated recommendations, they may overlook complex claims that require expert evaluation, allowing issues to escalate unchecked. For example, an AI system might misclassify a claim as low risk based on certain patterns, missing underlying complexities that an experienced claims professional would have identified as warranting closer attention.
As retailers seek opportunities to streamline processes and reduce costs, including through AI-enabled claims management, it is important not to overlook the potential risks that AI may bring. Retail leaders should consider a number of actions to help mitigate these challenges, including:
Take inventory of all AI tools in use, understand their purposes, and identify who is responsible for each. This visibility allows for stronger governance, earlier risk detection, and more effective claims management.
It is important to acknowledge that generative AI is widely accessible; even if blocked on company devices, employees may use it on personal phones, potentially sharing confidential company data. Establish clear guidelines on acceptable use, data handling, and employee conduct, and ensure these policies are widely communicated and properly enforced to mitigate the risk of AI-related claims.
Equip risk teams and employees with information about AI’s capabilities, limitations, and ethical considerations to help them use AI tools effectively and responsibly. This is especially important when implementing new AI tools. Given the pace of technological advancement, regular training helps build confidence and supports responsible use.
The importance of keeping a human in the loop should not be underestimated. AI should augment, rather than replace, human judgment. Define clear roles and review processes, especially for high-stakes claims, to confirm that decisions are contextually appropriate and fair. For instance, a retailer may establish a policy requiring a claims manager to review AI-flagged cases weekly to validate risk assessments, along with a sample of other cases to identify any potentially problematic claims that were missed.
Before adopting new AI solutions to manage claims, ask vendors about their algorithms, training data, and bias mitigation strategies. Consider including vendor accountability and liability provisions in contracts to protect your organization.
The evolving regulatory landscape around AI and data privacy adds another layer of complexity, with retailers having to navigate emerging laws and compliance requirements that may vary by jurisdiction and may also lag behind technological advances. Monitor evolving AI regulations and adjust policies accordingly to remain compliant and reduce legal exposure.
Periodic reviews of AI tools can help detect biases, inaccuracies, and operational issues early, helping to ensure that AI usage remains effective. These reviews may also help senior leaders identify new opportunities to manage claims-related risks.
As retailers increasingly embed AI tools into their operations to increase efficiency, reduce claims burdens, and safeguard their financial health, it remains critical to consider the potential risks these technologies can give rise to. By adopting best practices that mitigate such risks, organizations can capitalize on the opportunities presented by advanced technologies.