The Impact of UK and EU AI Regulations on Insurers

AI adoption has accelerated in insurance in recent years, with 77% of insurers indicating they are at some stage of adopting AI in their value chain. At the same time, AI regulations worldwide are becoming more defined, and these regulations will have a material impact on how insurers procure, manage and maintain AI across their business units. In this piece, we will outline the UK and EU’s regulatory roadmaps, as well as the questions insurers need to ask themselves to implement and maintain compliance. 

UK AI Regulations

One of the difficulties with regulating AI is that the technology is advancing so rapidly that legislation risks quickly becoming obsolete. For example, recent advances in Generative AI applications like ChatGPT have posed a significant challenge for regulators drafting new laws. To counter this, the UK Government has proposed a cross-sector, outcome-based framework for regulating AI, underpinned by five core principles:

  1. Safety, security and robustness 
  2. Appropriate transparency and explainability 
  3. Fairness
  4. Accountability and governance 
  5. Contestability and redress

The idea behind the UK’s “pro-innovation” framework is to empower existing regulators to apply or repurpose existing regulations or provide tailored and context-specific guidelines for how AI is used in their sector. The UK Government argues that it is too soon to turn this framework into concrete regulations and that AI’s risks and challenges must be better understood first. Nevertheless, individual regulators have already begun enacting AI regulations within their purview.

What UK AI Regulations Mean for Insurers

The Information Commissioner's Office (ICO) already enforces the UK GDPR, which governs how organisations, including insurers, collect, store, transfer, and use people's data. The ICO has also developed its Guidance on the AI auditing framework, which “provides a solid methodology to audit AI applications and ensure they process personal data fairly”. Data is a fundamental component of any AI system, so insurers must ensure they know how their customers' data is being used, that their data processes comply with GDPR, and that their AI systems are secure and resilient to cybersecurity threats. If not, organisations leave themselves exposed to significant risk, as was the case for Swedish insurer Trygg-Hansa, which received a €3 million fine for a breach of GDPR in 2023.

On 31st July this year, the Consumer Duty came fully into effect, enforced by the Financial Conduct Authority (FCA). As we’ve highlighted in a previous blog, insurers now carry a clear responsibility to deliver positive outcomes for their customers and must be ready and able to demonstrate to the regulator that they are doing so. This is one of the reasons why AI explainability, especially in functions like pricing, has become such a fundamental necessity for any insurer with plans to adopt and scale AI. It is also why technical transparency and explainability measures are such an integral part of any practical strategy for scaling AI within insurance.

The FCA has announced that it will work with the Financial Ombudsman Service (FOS) to use customer complaints as a key indicator of compliance with the Consumer Duty. Insurers must therefore ensure that their AI-assisted decisions, and the explanations given for them, meet their customers' expectations. Concerningly, however, analysis by Which? found distress and inconvenience were recorded in 64% of all complaints upheld by the FOS in 2023, up from the 53% recorded in 2019. With the Consumer Duty now officially part of the regulatory landscape, the need for insurers to address this concerning trend and start enacting strategies for better customer outcomes has never been more pressing.

UK AI regulation is still in its early stages, but rapid developments are to be expected, and there are many resources available to help insurers build and implement strategies for the future. The Association of British Insurers (ABI) has released a guide to getting started with responsible AI that complements the Government’s guidance on understanding AI ethics and safety. We have also outlined practical measures that insurers can take to manage AI responsibly and in a way that balances the innovation and risk inherent in AI. What connects all guidance and regulations for UK firms is the need to make transparency, explainability, and ethical responsibility fundamental components of any AI strategy. Doing so will be critical to balancing business benefits with regulatory compliance and ensuring continued success.

Timeline of the UK’s Pro-innovation Approach to AI

The EU AI Act

On 21st May 2024, the EU AI Act was signed into law, becoming what the EU describes as the world’s first comprehensive AI regulation. The Act categorises different AI systems according to risk.

Unacceptable risk – These AI applications will be banned within the EU. Examples include social scoring of people and AI that manipulates human behaviour or exploits people’s vulnerabilities.

High risk – High-risk AI systems are subject to strict conformity assessment and monitoring. Examples include AI that controls access to financial services, critical infrastructure, or employment, as well as AI systems that profile individuals by processing personal data to assess aspects of a person’s life such as their health, economic situation, interests, or behaviour.

Limited risk – These are subject to specific transparency obligations: for example, users should be made aware that they are interacting with AI, and AI-generated content should be identifiable. Chatbots fall under this category.

Minimal risk – These systems are unregulated; examples include spam filters and AI-enabled video games.

Several requirements also apply to general-purpose AI (GPAI) systems, which can serve various purposes for direct use and integration into other AI systems. These requirements include providing information and documentation to downstream providers, establishing a policy to respect copyright, and publishing a summary of the content used to train the model. 

The Act will be implemented in stages after entry into force:

  • Six months for prohibited AI systems.
  • 12 months for general-purpose AI.
  • 24 months for high-risk AI systems.

Timeline of the EU AI Act (18 June 2024)

The Implications of The EU AI Act for Insurers

The EU AI Act applies to both AI providers and deployers. For providers, it applies to those placing AI systems on the EU market, irrespective of their place of establishment, and to those whose systems produce output that is used in the EU. For deployers, it covers those established in the EU or whose systems’ output is used in the EU. This means any insurer whose operations are based in or affect customers in the EU will fall under the jurisdiction of the AI Act.

The AI Act focuses on the health and safety implications of AI, complemented by protections for fundamental rights. Within the risk levels outlined in the legislation, AI systems used for risk assessment and insurance pricing are considered high-risk. This means they will be subject to strict conformity and model-monitoring requirements, as these systems can significantly impact a person’s life and health and lead to financial exclusion and discrimination.

Model monitoring becomes more difficult as the number of models an insurer has in production scales from a few to dozens to hundreds. Monitoring models in-house requires the dedication of significant resources, time, and effort, all of which could be better spent building new models and discovering new use cases. Insurers should explore ways to minimise the cost of model monitoring by working with partners who can help take the burden off their shoulders while ensuring compliance efforts don’t come at the expense of operational efficiency.
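To make this concrete, below is a minimal sketch of one widely used monitoring check, the Population Stability Index (PSI), which flags when the distribution of a model input or output in production has drifted away from its training-time baseline. The data, variable names, and thresholds here are illustrative assumptions, not anything prescribed by the Act.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               production: np.ndarray,
                               n_bins: int = 10) -> float:
    """Compare two distributions; a higher PSI means more drift."""
    # Bin edges are derived from the baseline (training-time) data.
    edges = np.quantile(baseline, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range values

    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)

    # Clip to avoid division by zero in sparsely populated bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    prod_pct = np.clip(prod_pct, 1e-6, None)

    return float(np.sum((prod_pct - base_pct) * np.log(prod_pct / base_pct)))

# Illustrative usage: compare training-time premiums with this month's.
rng = np.random.default_rng(0)
baseline_premiums = rng.lognormal(mean=6.0, sigma=0.30, size=10_000)
production_premiums = rng.lognormal(mean=6.1, sigma=0.35, size=10_000)

psi = population_stability_index(baseline_premiums, production_premiums)
# A common rule of thumb: < 0.1 stable, 0.1-0.25 investigate, > 0.25 act.
print(f"PSI = {psi:.3f}")
```

PSI is only one signal; a production monitoring suite would also track predictive performance and fairness metrics across customer segments.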

This places even greater importance on insurers’ ability to evaluate and assess model performance, including how models make inferences on customer data that could be susceptible to discrimination. Pricing models are typically incredibly complex, and insurers can have multiple models contributing to one pricing decision. These factors can be detrimental to model explainability, but just as is the case under existing UK regulation, under the EU Act, this explainability will be necessary for insurers to gain assurance that their pricing models are compliant. Fortunately, there are several AI-assisted techniques to improve the explainability of complex models, and insurers must understand these techniques as integral strategic assets rather than optional add-ons. 
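As an illustration of one such technique, the sketch below uses the open-source SHAP library to attribute a single price prediction to its input features. The model, data, and feature names are hypothetical stand-ins for a real pricing model.

```python
# pip install shap scikit-learn
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical pricing features; a real model would use far more.
feature_names = ["driver_age", "vehicle_value", "annual_mileage", "ncd_years"]
rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, len(feature_names)))
y = 300 + 40 * X[:, 1] - 25 * X[:, 3] + rng.normal(scale=10, size=1_000)

model = GradientBoostingRegressor().fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # explain one quote

# Each value is that feature's contribution to this quote's deviation
# from the average predicted premium (explainer.expected_value).
for name, contribution in zip(feature_names, shap_values[0]):
    print(f"{name:15s} {contribution:+.2f}")
```

Per-quote attributions like these provide the decision-level explanations that support both conformity assessments and responses to customer complaints.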

As for other AI applications in insurance, like claims management and counter-fraud, the Act notes that, since the use of AI in these areas is already quite extensive, “supervisors need to assess the extent to which existing rules are sufficient and where additional guidance may be needed for specific use cases. This would consider considerations such as proportionality, fairness, explainability and accountability.”

Building your AI compliance strategy 

These AI regulations have significant ramifications for any insurer operating in the UK and/or the EU, requiring specific strategies, tools, and processes to ensure and maintain compliance. Building your own AI compliance strategy is critical to integrating AI into the fabric of your business in a strategic, responsible and transparent way.  

The strategy you develop will differ based on your business drivers and vision, your organisation's size, and your operating sector. It should be comprehensive and able to answer questions about your AI, such as:

What is your AI? Ask probing questions to discover how AI is being tested, used, developed, and procured in your organisation today, and where it sits on your roadmap for the future. If you don’t know this information, how can you find it?

How much value does AI bring to your organisation? Ask questions to determine the business use cases for how AI is deployed throughout your organisation and what risk assessments have been carried out. If you don’t have an AI risk matrix for deployed AI, how might you build one? (A minimal inventory sketch follows these questions.)

How do you govern your AI? Find out who is in charge of compliance and AI risk management across your organisation. Understand how projects are signed off and how AI systems are procured from external vendors. If you don’t have a coherent governance structure, who is best placed to start creating one?

How do you manage your AI? As regulatory roadmaps and sector guidelines evolve, ask questions to ensure your compliance team stays up to date. Understand the specifics of how the EU AI Act or the UK’s pro-innovation approach affects your specific AI, and how you will keep reviewing and monitoring these in the future.

Do you understand your AI? Make sure you know who is responsible for ensuring your AI operates responsibly, what the backup plan would be in case of failure, and how AI might negatively impact certain stakeholders. Is the appropriate human oversight in place? 
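A practical starting point for several of the questions above, including the risk matrix mentioned earlier, is a central inventory that records each AI system’s use case, risk tier, and accountable owner. The sketch below is an illustrative structure only: the risk tiers mirror the EU AI Act’s categories, but every field name and example entry is an assumption, not a regulatory template.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RiskTier(Enum):
    """Risk levels mirroring the EU AI Act's categories."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One row in a central AI inventory / risk matrix."""
    name: str
    business_use_case: str
    risk_tier: RiskTier
    accountable_owner: str                  # who answers for this system
    vendor: str | None = None               # None if built in-house
    last_risk_review: date | None = None
    mitigations: list[str] = field(default_factory=list)

# Illustrative entry for a hypothetical motor pricing model.
inventory = [
    AISystemRecord(
        name="motor-pricing-gbm-v3",
        business_use_case="Retail motor insurance pricing",
        risk_tier=RiskTier.HIGH,  # pricing is high-risk under the EU AI Act
        accountable_owner="Head of Pricing",
        last_risk_review=date(2024, 6, 1),
        mitigations=["Per-quote SHAP explanations logged",
                     "Monthly PSI drift monitoring"],
    ),
]

# A simple governance query: which high-risk systems lack a recent review?
overdue = [r.name for r in inventory
           if r.risk_tier is RiskTier.HIGH
           and (r.last_risk_review is None
                or r.last_risk_review < date(2024, 1, 1))]
print(overdue)
```

Even a lightweight inventory like this makes it far easier to answer a regulator’s questions about where AI is deployed, who owns it, and what mitigations are in place.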

A Customer-Centric Approach to AI Adoption in Insurance

AI compliance in heavily regulated industries is not a tick-box exercise but an ongoing process that requires trust, understanding, accountability and commitment across numerous departments in an organisation. Every insurer must outline risk accountability within their organisation so everyone knows their role in ensuring their AI adds value rather than causing harm. They must implement technical measures for transparency and explainability that enable them to view, assess, understand, and communicate the workings and outputs of their entire AI estate, not just individual models. Finally, they must understand that regulatory compliance will require them to provide clear evidence that they are taking steps to ensure their AI is helping deliver positive outcomes for their customers while protecting them from risk, discrimination, and harm. We recommend establishing partnerships with trusted AI experts who can lend insight, technology, and tools to ensure the AI you build today will continue to perform safely and reliably for years to come.
