

AI’s cyber challenges and opportunities uncovered

For many organisations, artificial intelligence isn’t just an option; it is imperative for helping companies stay ahead of the game.

However, while AI models have profoundly impacted how businesses operate, there are potential risks to consider. 

A team from Marsh came together at the recent Future Unlocked event to address the challenges and discuss how businesses can build cyber resilience. 

Types of AI and limitations

Skip to 0:04:35 in the recording

Artificial intelligence is the ability of computers to simulate human intelligence. Eric Alter, Marsh’s Corporate Risk and Cyber Engagement Leader, explained the benefits and limitations of AI technologies. He also outlined the different categories of AI, namely the following: 

  • Narrow AI performs specific tasks in a limited domain. 
  • General AI can carry out any intellectual task a human can perform. 
  • Generative AI primarily employs deep learning models to generate new content across various domains.

Key takeaways

1. The four primary types of AI are:

  • Reactive—such as Deep Blue and Netflix recommendations.
  • Limited memory—the most widely used; learns to make predictions and perform complex tasks.
  • Theory of mind—machines acquire decision-making qualities similar to humans.
  • Self-aware—machines with decision-making capabilities and human-level consciousness; this may never appear.

2. Generative AI (GenAI) focuses on creating original content, while predictive AI forecasts future outcomes based on existing data.

3. AI is used across many industries, including financial services (fraud detection), healthcare (answering questions and aiding diagnosis, for example with mammograms), and entertainment.

4. Limitations include poor calculations, which can mislead users. Ensure AI is used safely and reliably; it is only a threat if misused.

Understanding risks

Skip to 0:23:30 in the recording

Businesses must pay more attention to the use of AI, yet most already have the tools to manage the risks. 

James Crask, who leads the Strategic Risk Consulting team for Marsh in the UK, discussed five AI risks:

  1. Algorithmic biases and discrimination.
  2. Transparency of decision-making.
  3. Data privacy and security.
  4. Legal and regulatory compliance.
  5. Ethical and social implications. 

Key takeaways

  1. Don’t rely solely on your technologists to manage AI risks. 
  2. Treat AI dangers like any other risk and consider the financial, legal, or reputational harm to your organisation. 
  3. Make sure staff are trained to minimise human error and safeguard your organisation.  
  4. Government agencies and regulators have increased their focus, issuing guidelines or new AI regulations. Businesses, however, can’t rely solely on regulation. 

AI and ransomware

Skip to 0:38:19 in the recording

There has been an uptick in sophisticated attacks, such as AI voice cloning used to impersonate CEOs and gain access to systems, and AI-enhanced phishing emails.

Traditional controls remain the most effective defence against a ransomware attack, which can affect any business.

Amy Mason, a managing consultant who works in Marsh’s Crisis and Resilience team, discussed the changing risks and practical options.

Key takeaways

  1. Ransomware hackers increasingly use AI for financial gain, with Microsoft reporting that the number of attacks involving data exfiltration has doubled since November 2022.
  2. Predictive AI could provide a solution by identifying scams and anomalies. AI-led cyber detection and protection tools will emerge to identify activity and scan for vulnerabilities. 
  3. Get the basics right. Ensure the right specialists are on hand, have excellent backups, and offer effective phishing training for colleagues.

AI risks and insurance

Skip to 00:57:30 in the recording

As AI continues to grow, it raises significant insurance-related questions. Will there be a specific market for AI insurance, or should this be included within existing cyber policies?   

Joe Latham, Marsh’s UK Cyber, Media and Technology Practice Leader, discussed the positive advancements and significant challenges for the insurance industry. 

Key takeaways

  1. Generative AI could enhance risk assessment and underwriting practices and detect fraudulent behaviour to minimise losses.
  2. Negatives include ethical concerns and bias, which could result in discriminatory practices when evaluating cyber risks or determining liability. 
  3. Automation facilitated by GenAI tools can lead to faster and more efficient claims processing, improving customer satisfaction.
  4. Organisations should consider how to uphold privacy standards: how do you share personally identifiable information (PII) and handle sensitive data?

Things to consider when using AI

Skip to 01:10:00 in the recording

Use common sense to apply AI safely and effectively. Remember:

  1. Generative AI is prone to errors, just as humans are.
  2. Many risks are familiar, but new risks may arise.
  3. AI laws and regulations are evolving, impacting insurance.

Skip to 01:15:30 in the recording to watch our panel’s take on the audience’s questions.