
AI, ChatGPT, and the professional services firm – an enterprise risk?

This article considers the risk from these new technologies from various perspectives, and offers some thinking about how to adapt.

A recent survey revealed that around 50% of Australians are aware of ChatGPT and almost a quarter of the population has used it. Of those who have used ChatGPT, roughly half have used it for work purposes. The growing interest in, and embrace of, the latest generation of AI technology across businesses and industries has seen Marsh’s professional services firm clients raising queries with us about ChatGPT, an artificial intelligence (AI) powered language model, as well as AI tools in general.

Clients of professional services firms as users of free AI

Free public use of ChatGPT-type technology potentially challenges aspects of the traditional knowledge and experience advantage professionals rely upon to generate profit. The possibility of a lay person, with free AI support, undertaking certain tasks without professional support may further threaten business models. This is a strategic risk, and it will develop differently by client size and service line. The majority of material shared online is probably best categorised as ‘know what’ – factual information and opinion on what the processes and rules are – rather than ‘know how’ – how to actually execute specific tasks. However, despite media reports of the latest generation of AI’s apparent ability to pass professional exams, there is evidence that it also often produces wrong answers. Given the speed of progress, it is probable that AI solutions will become reliable soon. Consequently, firms should assess the impact on the services they offer, and the likely increase in limited retainers, which may in turn require additional controls to manage risk.

Professional services firms as users of free AI

It may prove difficult for professional services firms to control colleagues’ use of external AI services. However, users need to be mindful that questions asked of AI systems could reveal privileged or confidential information, and that using these services for work-related tasks may breach intellectual property rights. Even using such services to check whether others have made similar enquiries might reveal an interest. The timing of enquiries might reveal strategies, plans and concerns, and those holding data stemming from enquiries may use it for their own purposes. It could also be obtained by bad actors, who might seek to exploit it or find ways to poison the data pool to manipulate results. If a firm chooses to prohibit such usage, it may need to modify existing employment procedures to make that position explicit.

Changing service delivery by professional services firms

ChatGPT may give a lay user the confidence to take steps themselves that a professional might otherwise undertake, although the current reliability of results is questionable. Many professional services firms already deploy AI to support clients with more interactive FAQs and some basic services. For larger firms, pressure from clients for efficiency, and practitioners’ own use of AI as part of internal support, have existed for some time. For example, accounting firms use AI to review documents such as board minutes and leases.

The professional service provider still has significant value to offer, despite the free information and expertise AI provides. Firms can offer reliable operating processes, deal with situations that are more bespoke, and potentially use AI products for more standard situations. Indeed, many firms are investing heavily in database access that enables clients to self-serve with advice. As an example (although not necessarily using AI), one law firm provides access to data on the average settlement size for different types of employment allegations. Rather than seeking individual advice, the user enters the location and allegations to receive an idea of average reported settlements, as in the sketch below.
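To make the self-serve idea concrete, below is a minimal sketch of such a lookup. It assumes nothing about the firm’s actual tool: the dataset, field names and figures are all invented for illustration.

```python
# Minimal, illustrative sketch of a self-serve settlement lookup.
# All records, field names and figures below are invented; they do not
# reflect the firm's actual tool or any real data.
from statistics import mean

# Hypothetical dataset of reported settlements (AUD).
SETTLEMENTS = [
    {"location": "NSW", "allegation": "unfair dismissal", "amount": 42_000},
    {"location": "NSW", "allegation": "discrimination", "amount": 65_000},
    {"location": "VIC", "allegation": "unfair dismissal", "amount": 38_500},
    {"location": "VIC", "allegation": "discrimination", "amount": 71_000},
]

def average_settlement(location: str, allegation: str) -> float | None:
    """Average reported settlement for a location/allegation pair,
    or None when no matching records exist."""
    amounts = [
        r["amount"]
        for r in SETTLEMENTS
        if r["location"] == location and r["allegation"] == allegation
    ]
    return mean(amounts) if amounts else None

# The user enters a location and allegation type, as described above.
print(average_settlement("NSW", "unfair dismissal"))  # 42000
```

A real service would sit on a maintained database, and would need the governance, maintenance and testing disciplines discussed later in this article.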

Are risks of providing these services covered by professional indemnity insurance?

For law firms regulated by the various state and territory bodies in Australia [1], we would expect professional indemnity claims arising out of reliance on such advice to fall within the wide terms of cover provided by the professional indemnity policies issued or approved by the various state and territory law societies. More generally, it would be prudent for professional services firms to consult with their broker and regulator; this can prevent surprises about whether claims are covered and whether the service was compliant. The requirements of professional regulators will also need to be heeded. Please also see Australia’s 8 Artificial Intelligence (AI) Ethics Principles, which set out the Australian Government’s framework for designing AI that is safe, secure and reliable.

[1] Law Society of the Australian Capital Territory; NSW Legal Services Commissioner​; Law Society Northern Territory; Legal Services Commission Queensland; South Australian Legal Professional Conduct Commissioner; Legal Profession Board of Tasmania; Victorian Legal Services Commissioner; Legal Practice Board of Western Australia.

Thinking about both sides of the issue – users and providers

Overall, we consider that there are significant risks both as a provider and user of AI services, which ought to be monitored and managed appropriately.

Professional services firms as providers of internal AI services

Firms are developing internal support systems for colleagues. More logical, higher quality search access to policies, procedures and ‘know-how’ may be extremely useful. However, a key issue is the ongoing effort required to keep these systems up to date. Additionally, it would be unsurprising if some clients seek to limit use of highly sensitive information to a particular work group, albeit this is not a new issue. If security is breached and the internal AI compromised, exposure of usage data, or corruption of the data and AI-based results, may also create reputational risks for the firm and its clients.

Professional services firms as providers of external AI services

As this is a significant area of risk, it may be useful to treat what is being provided as a product. Given the reputational risk – and the novelty of the area – we believe it is worth considering whether use of these tools and the creation of products gives rise to fundamentally new hazards.

We have opted to use the bow tie risk tool as a lens to consider prevention and mitigation.

[Figure: example bow tie risk tool developed for law firm cyber risk]

Most users find it helpful to both define and order the key areas of the diagram: 

  1. Identify the ‘hazard’.
  2. Define the ‘top event’ – how the ‘hazard’ becomes problematic or uncontrollable.
  3. Identify the key ‘threats’ enabling the ‘top event’, and ‘barriers’ that address those ‘threats’.
  4. Consider the ‘consequences’, and position appropriate ‘mitigants’ where possible.

In this case:

  1. Hazard/Top event – does use of AI or creation of AI products create a fundamental change to the source of risk (hazard)? 
  • Use of AI: In our opinion, AI use does not create a new ‘hazard’ for firms; the hazard remains the holding of sensitive data. The ‘top event’ is that control or security of the data asset fails, and the data loses value or its publication damages the client’s business.
  • Creating and selling AI products: However, the creation and distribution of AI products for clients does create a new ‘hazard’, which in our view requires greater thought (see the sketch below).
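To illustrate the structure the steps above describe, here is a minimal sketch of the bow tie elements captured as a simple data structure. The specific threats, barriers, consequences and mitigants are illustrative examples only, not a complete model of AI product risk.

```python
# Illustrative sketch: the bow tie elements from the steps above captured as a
# simple data structure. The threats, barriers, consequences and mitigants
# listed are examples only, not a complete model of AI product risk.
from dataclasses import dataclass, field

@dataclass
class Threat:
    name: str
    barriers: list[str] = field(default_factory=list)   # controls preventing the top event

@dataclass
class Consequence:
    name: str
    mitigants: list[str] = field(default_factory=list)  # controls reducing the impact

@dataclass
class BowTie:
    hazard: str                    # the underlying source of risk
    top_event: str                 # how the hazard becomes uncontrollable
    threats: list[Threat]
    consequences: list[Consequence]

ai_product_bowtie = BowTie(
    hazard="Selling an AI product built on the firm's know-how",
    top_event="Product produces deficient advice at scale",
    threats=[
        Threat("Stale or corrupted underlying data",
               barriers=["Scheduled data refresh", "Input validation"]),
        Threat("Use of the product outside its intended scope",
               barriers=["Clear scope disclosures", "Usage monitoring"]),
    ],
    consequences=[
        Consequence("Identical deficient advice delivered to many clients",
                    mitigants=["Output sampling and review", "Client notification plan"]),
    ],
)
```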

Product risk is familiar to some professional services firms, particularly IT developers and brokers. As firms offering purely professional services may now be offering what are, in effect, products, we suggest implementing roles, processes and procedures for product design, approval, marketing and maintenance. This enables firms to control the risk, test products and act on feedback.

In financial services, governance, assessment and product refreshes were developed in response to market failures and the adoption of recommendations from the Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry and other regulatory inquiries. Professional indemnity insurers of professional services firms have experience of suitability issues arising when homogenised advice is given to large numbers of clients and the product does not perform as expected.

Firms that are designing and delivering AI products to clients should be cognisant of this risk, and should ensure there is robust product design and management, with ongoing testing at least annually – possibly more often, depending on feedback and changes. Providing a governance structure for a product, or putting the product through such a process, is sometimes recognised late in the product development cycle, becoming an unwelcome drag on product launch. However, experience shows that it is a necessary step.

For large professional services firms, an AI product is unlikely to generate wrong results for tens of thousands of users each making relatively modest claims, as happened in financial services. However, if outputs are wrong, the product could result in identical deficient advice being provided to multiple clients in a short period of time, with little chance of detection.

Reviewing the bow tie model, we can consider what barriers are in place to prevent product failure and how they relate to threats. A common problem occurs when the threats change and the system does not detect that a barrier has been breached. If the overall system can detect the breach, then according to the model this is an escalation factor: an oversight review of the model should be triggered, along with a potential redesign of the barriers. Without this review, the likelihood of widespread product failure is significantly greater. The sketch below illustrates this detection logic.
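As a minimal sketch of that detection logic, the following assumes each barrier has an automated health check: a detected breach triggers an oversight review, while an undetected breach is the dangerous case described above. The barrier names and checks are placeholders, not real telemetry.

```python
# Illustrative sketch of the monitoring logic described above. Each barrier
# has an automated health check; a detected breach is treated as an escalation
# factor that triggers an oversight review, while an undetected breach is the
# dangerous case. Barrier names and checks are placeholders.
from typing import Callable

def barrier_health_checks() -> dict[str, Callable[[], bool]]:
    # Placeholder checks; in practice these would query real monitoring data,
    # e.g. "last data refresh within 30 days" or "review backlog below threshold".
    return {
        "Scheduled data refresh": lambda: True,
        "Output sampling and review": lambda: False,  # simulated breach
    }

def review_barriers() -> None:
    for barrier, is_intact in barrier_health_checks().items():
        if not is_intact():
            # Breach detected: trigger an oversight review and potential
            # redesign of the barriers, per the bow tie model.
            print(f"Barrier breached: {barrier} -> trigger oversight review")

review_barriers()
```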

Drawing on our previous involvement in the development of online service models, we note that the cost of maintaining and testing product suitability is often significant, potentially eroding the apparent profitability of such approaches. There is also a governance question of who should be responsible for ongoing maintenance and testing, and whether they will be sufficiently independent and motivated to undertake the role. These issues are often unpopular with innovative thinkers, who are drawn to creating novelty by leveraging know-how.

It may appear attractive and innovative to create opportunity by transforming professional services and offering what have traditionally been bespoke services as a product. However, professional services firms must develop more back-office assurance and infrastructure to support the delivery of high quality service through such products. Maintenance, design refreshes and testing must be factored into the cost in order to manage product risk.

Originally published 16 June 2023 by John Kunzler, Victoria Prescott, Marsh UK

 

Meet the authors


Craig Claughton

Chairman, Financial and Professional Services, Pacific


Ruth Parker

Managing Principal, Law

Holly Monrad

Head of Corporate Cyber - Pacific

This publication is not intended to be taken as advice regarding any individual situation and should not be relied upon as such. The information contained herein is based on sources we believe reliable, but we make no representation or warranty as to its accuracy. Marsh shall have no obligation to update this publication and shall have no liability to you or any other party arising out of this publication or any matter contained herein. Any statements concerning actuarial, tax, accounting, or legal matters are based solely on our experience as insurance brokers and risk consultants and are not to be relied upon as actuarial, accounting, tax, or legal advice, for which you should consult your own professional advisors. Any modelling, analytics, or projections are subject to inherent uncertainty, and any analysis could be materially affected if any underlying assumptions, conditions, information, or factors are inaccurate or incomplete or should change. 
LCPA 23/284

Marsh Pty Ltd (ABN 86 004 651 512, AFSL 238983) (“Marsh”) arranges this insurance and is not the insurer. The Discretionary Trust Arrangement is issued by the Trustee, JLT Group Services Pty Ltd (ABN 26 004 485 214, AFSL 417964) (“JGS”). JGS is part of the Marsh group of companies. Any advice in relation to the Discretionary Trust Arrangement is provided by JLT Risk Solutions Pty Ltd (ABN 69 009 098 864, AFSL 226827) which is a related entity of Marsh. The cover provided by the Discretionary Trust Arrangement is subject to the Trustee’s discretion and/or the relevant policy terms, conditions and exclusions. This website contains general information, does not take into account your individual objectives, financial situation or needs and may not suit your personal circumstances. For full details of the terms, conditions and limitations of the covers and before making any decision about whether to acquire a product, refer to the specific policy wordings and/or Product Disclosure Statements available from JLT Risk Solutions on request. Full information can be found in the JLT Risk Solutions Financial Services Guide.