AI's potential to enhance the insurance industry is extraordinary, with McKinsey estimating that AI could add up to $1.1 trillion in annual value for the global insurance industry. But just as AI's potential is beyond question, so are the significant risks it poses when used carelessly.
AI regulations worldwide are coming into focus as governments, regulators, and organisations try to mitigate these risks for themselves and the citizens and end customers they serve. Against this backdrop, AI governance has emerged as an essential strategic tool for insurance leaders to maximise profits, reduce losses, and gain resilience to the turbulence of an ever-changing industry.
AI has dramatically impacted the insurance industry across various use cases, from models that assist with fraud detection and competitive pricing to customer service chatbots. As AI’s capabilities have developed, its adoption has accelerated, and many insurers now have hundreds of models in production across their businesses. But as AI adoption scales, so do the associated risks and costs.
Operational Inefficiency
To maximise value creation, every AI model must be designed and built specifically for the problem it is intended to solve. This includes accounting for constraints on data availability, security requirements, integration with existing systems, staff training, and more. Once in production, it's also imperative that model performance can be understood and assessed outside of data science teams. AI is not an automatic guarantee of greater profits and reduced losses; it can have the opposite effect if insurance leaders don't know whether their AI delivers the necessary business impact.
If these questions aren’t addressed at the start, insurers will be exposed to the operational inefficiency of models that don’t perform as they need to, from underwriting the wrong policies to premium leakage that hurts the bottom line. In insurance, where margins are fine, competition is fierce, and change is constant, the slightest inefficiency can have a drastic negative impact if left unaddressed.
Scaling Challenges
At the same time, insurers must implement further strategies and processes for managing and maintaining all the models across their business. Many insurers will have hundreds of models in production, a number only likely to increase as AI's capabilities grow. These models perform a wide range of functions, with numerous models in pricing, fraud detection, underwriting, and beyond. Ensuring these models not only perform in their respective functions but also align with wider business goals and strategies is essential. Furthermore, 91% of models degrade in performance within the first year, so monitoring and governing these models is a fundamental necessity. The more models an insurer uses, the more difficult this challenge becomes.
It also means data science teams must spend more time and effort retraining and maintaining existing models rather than helping insurers scale their AI with new ones. Repetitive tasks like model retraining are not what data scientists want to be doing, which goes some way to explaining why the average data scientist remains in their current job for only 1.7 years. Retaining data science talent is a huge challenge for insurers, and finding alternative approaches to fully in-house model building and management will go a long way towards solving it.
Regulatory Noncompliance
In recent months, a raft of new regulations have either been announced or signed into law, from the EU AI Act, which divides AI into risk categories, to the FCA-enforced Consumer Duty, which will come fully into effect in July. These regulations will have a material impact on how we use AI, and insurers will need to constantly assess their AI's impact on their business and customers. Failure to do so will invite regulatory scrutiny and financial penalties, and as regulations evolve and AI investment increases, these penalties will only become more severe. This is on top of the damage to an insurer's reputation and customer trust when they are seen to break the rules. It's up to the key decision-makers to avoid these pitfalls and ensure their AI is always aligned with the latest regulations across all relevant jurisdictions.
Discrimination, Bias, and Ethics
Insurance customers are often required to hand over personal data to access products and services. AI is tailor-made for processing vast quantities of this data, but this introduces the risk that the AI will make decisions or recommendations based on biased data or protected characteristics like religion or ethnicity. AI models can also discriminate indirectly, inferring protected characteristics from other information like a customer's address, even if their creators never intended them to analyse those variables.
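As an illustration, a basic fairness audit can surface both direct disparities and proxy risk. The sketch below is a minimal Python example, assuming a pandas DataFrame of decision logs with hypothetical column names; a real audit would use the insurer's own decision data and protected attributes collected under appropriate governance.

```python
import pandas as pd

# Hypothetical decision log: model outcomes plus a protected attribute
# retained purely for auditing (all column names are illustrative).
df = pd.DataFrame({
    "postcode_region": ["A", "A", "B", "B", "B", "A", "B", "A"],
    "ethnicity":       ["x", "y", "x", "y", "y", "x", "y", "x"],
    "approved":        [1, 1, 0, 0, 1, 1, 0, 1],
})

# Demographic parity: compare approval rates across protected groups.
rates = df.groupby("ethnicity")["approved"].mean()
print(rates)
print(f"Parity gap: {rates.max() - rates.min():.2f}")

# Proxy check: a non-protected feature that strongly predicts a protected
# one (here, region vs. ethnicity) is a red flag for indirect discrimination.
print(pd.crosstab(df["postcode_region"], df["ethnicity"], normalize="index"))
```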
The legal and regulatory ramifications of failing to address these issues are significant, and the damage they can cause if left unresolved can be catastrophic. Beyond legal and regulatory compliance, there are also ethical considerations to contend with, and AI decisions should align with the company's values and ethical standards.
Liability Issues
At the same time as AI adoption has expanded in insurance, these systems have also become more advanced, able to make more accurate decisions using vast volumes of increasingly granular data. Consequently, determining liability in case of errors or failures has become increasingly difficult. As the regulatory implications of these problems become more defined and compliance becomes more essential, insurance leaders will need to clearly define accountability and ensure safeguards and contingency plans are in place.
Transparency and Explainability
To maintain a competitive edge, insurers need robust, reliable, and agile AI models. But if these models aren't also transparent and explainable, insurers can't assess a model's inner workings or understand how and why it is making its recommendations. This can significantly undermine trust among customers and regulators and drastically hamper compliance with decision-making transparency requirements. In insurance, these are fundamental necessities, not optional extras.
Addressing these risks requires a comprehensive governance framework that includes robust policies, continuous monitoring, and an ethical approach to AI implementation. If applied strategically and responsibly, this framework should ensure transparency, accountability, and compliance with all relevant regulations and standards.
1. Implement Guidelines for Ethical AI
Insurance decision-makers must ensure that ethical AI is not merely a marketing buzzword but a strategic tool for insulating against bias and regulatory noncompliance. This means developing comprehensive ethical guidelines governing their use of AI that prioritise fairness, transparency, and accountability. Some resources to consider include the Association of British Insurers (ABI) guide to getting started with responsible AI and our earlier piece on 5 Ways to Manage AI Responsibly in Insurance.
2. Outline Accountability and Risk Ownership
Robust, reliable, and scalable AI adoption requires the involvement of various stakeholders, from data scientists and system integrators to departmental directors and organisational leadership. Furthermore, new regulations like the Consumer Duty clearly place the responsibility for customer outcomes at the feet of the insurer, demanding that insurers not only take steps to ensure positive outcomes for their customers but also provide clear evidence that they are doing so. This means an insurer must outline clear lines of accountability for risk ownership within their organisation while establishing robust data governance and security policies. This will help to ensure customers' data is used in line with their expectations and desires, as well as data legislation like GDPR.
3. Prioritise Transparency and Explainability
Existing and upcoming regulations have made transparent and explainable AI models essential in customer-centric industries like insurance. They require insurers to be able to view and assess the inner workings of their models and explain their outputs to a range of stakeholders. For example, if a customer's premium changes due to an AI-assisted decision and they query why, the insurer must provide an explanation that satisfies their expectations and keeps their trust, which is essential to customer retention.
At the same time, transparent and explainable AI models allow insurers to identify discrimination and bias at source and take preventative action before they harm their customers and, through regulatory intervention and sanctions, their business. With the right approach to model transparency and explainability, and AI-assisted techniques that make complex models more explainable, insurers can identify potentially dangerous issues like discrimination and bias across both individual models and their entire AI estate.
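To make this concrete, the sketch below uses the open-source shap library to break a single premium prediction into per-feature contributions that could be surfaced to customers, regulators, and internal reviewers. The model, features, and data are all hypothetical stand-ins for an insurer's real pricing model.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical pricing dataset; column names and values are illustrative.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "driver_age":    rng.integers(18, 80, 500),
    "vehicle_value": rng.uniform(5_000, 60_000, 500),
    "claims_5yr":    rng.integers(0, 4, 500),
})
# Synthetic premium: rises with vehicle value and claims, falls with age.
y = (300 + 0.01 * X["vehicle_value"] + 120 * X["claims_5yr"]
     - 1.5 * X["driver_age"] + rng.normal(0, 20, 500))

model = GradientBoostingRegressor().fit(X, y)

# TreeExplainer decomposes one customer's predicted premium into
# additive per-feature contributions, in the premium's own units.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X.iloc[[0]])[0]
for feature, contribution in zip(X.columns, contributions):
    print(f"{feature}: {contribution:+.2f}")
```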
4. Create Performance Metrics and Conduct Rigorous Testing
Insurers must set clear operational bounds and KPIs to validate model performance and understand how AI impacts their business. They should also benchmark for reliability and consistency and implement feedback loops to facilitate continuous improvement. Individual model monitoring protocols like drift detection should also be considered a fundamental component of AI adoption, as without them, it can be impossible to identify model performance decline until it starts negatively impacting the bottom line. Insurance leaders should also be mindful that achieving all this is incredibly difficult when attempted entirely in-house. There are software solutions that can facilitate proper AI governance, and insurance leaders should consider all the tools at their disposal to make their AI adoption successful.
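One widely used drift check is the Population Stability Index (PSI), which compares a feature's live distribution against its training-time baseline. The sketch below is a minimal Python implementation on synthetic data; the thresholds quoted in the comments are a common industry rule of thumb rather than a formal standard.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a live feature distribution against its training baseline.
    Rule of thumb (a convention, not a formal standard):
    PSI < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    # Clip live values into the baseline range so every value is counted.
    actual = np.clip(actual, edges[0], edges[-1])
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor empty buckets to avoid log(0) and division by zero.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Synthetic example: a rating feature shifts between training and production.
rng = np.random.default_rng(0)
training_values = rng.normal(500, 50, 10_000)  # distribution at training time
live_values = rng.normal(540, 60, 10_000)      # distribution in production
print(f"PSI = {population_stability_index(training_values, live_values):.3f}")
```

A check like this, run on a schedule over every model's inputs and outputs, turns the "91% of models degrade within a year" problem from a surprise into a manageable alert.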
5. Invest in Human-AI Collaboration
The greater an insurer's investment in AI, the more important it becomes to mitigate the risks associated with the technology. Retaining oversight over the AI decision-making process is essential, especially in a high-stakes sector like insurance that affects the lives of so many. Explainability and transparency are key to this, as they enable AI’s workings to be understood and communicated to various stakeholders without compromising model complexity and accuracy. For example, explainability in insurance pricing enables decisions to be assessed and explained to customers, regulators, data scientists, and insurance leaders so that the insurer is not exposed to unnecessary risk.
Operational Efficiency
A solution that facilitates building, deploying, and maintaining properly governed AI models will accelerate the efficiency of an insurer's entire AI estate. Hundreds of models spread across all business functions can be brought together so their performance can be viewed, understood, quantified, and communicated to multiple stakeholders, from the data scientist building them to the leader overseeing the department, and no model's value generation is left siloed within its respective deployment environment. Models can be combined to solve new problems, and learnings can be distributed across the portfolio, so models don't just maintain performance levels but improve over time, increasing value generation and spreading the benefits of AI across the business.
Cost Reduction
Retraining or replacing a model when it inevitably starts to experience performance decline is expensive, and the unpredictable nature of this expense poses a headache for budget holders. Implementing model explainability, transparency, and continuous, real-time monitoring means problems like data drift are identified and rectified before they can negatively impact the bottom line, and it frees data science teams from unplanned firefighting to focus on more valuable work.
Regulatory Compliance
An effective framework for building, managing, maintaining, and scaling AI means the compliance status of an entire suite of models can be viewed and assessed at all times, highlighting potential issues as and when they occur. It also means that insurers can adapt to the inevitable changes in regulation and remain continuously safeguarded against catastrophic consequences.
Competitive Advantage
Insurance is a ferociously competitive industry, and insurers can't afford to fall behind the times and allow their competitors to beat them to the benefits of innovation. The right strategies and tools for AI adoption can deliver the competitive advantage insurers require, with models that maximise profit margins, maintain regulatory compliance, deliver sufficient explainability to keep customers happy, and ultimately provide the foundation to grow their AI portfolio.
The regulatory response to the risks posed by AI has placed significant responsibility on insurance leaders. Just as AI can help maximise profits and reduce losses, it can compromise both goals when adopted without the right considerations. AI governance is a key part of the solution. The responsibility for AI strategy falls on key decision-makers, who must navigate regulatory requirements, manage risks, build customer trust, protect data privacy, improve operational efficiency, and gain a competitive advantage. With the right adoption strategies, AI's risks can be effectively mitigated and its benefits maximised. Insurance leaders must implement these strategies to be at the forefront of the AI insurance revolution.
Enjoyed this article? Read our piece on The Consumer Duty and Its Implications for Insurers.