
Robust consumer protection standards are needed for the use of AI in the insurance sector

05 June 2025

Consultation response

In its response to EIOPA’s consultation on Artificial Intelligence Governance and Risk Management, Finance Watch calls for EIOPA to issue robust measures tackling the exclusion, mis-selling, redress and unfair price discrimination risks associated with the use of AI in the insurance sector.

On 12 May, Finance Watch responded to the EIOPA consultation on the Opinion on Artificial Intelligence Governance and Risk Management. The consultation sets out proposed high-level guidance based on existing legislation to ensure the safe use of AI in the insurance sector, including measures on risk management, data governance, documentation and record-keeping, transparency/explainability, and human oversight of AI systems.

Finance Watch welcomes EIOPA’s efforts to ensure that the risks to consumers associated with the deployment of AI in the insurance sector are mitigated. However, Finance Watch highlights the need for measures that go beyond high-level guidance, as well as for additional and stronger measures in certain areas.

High-level guidance based on existing sectoral legislation is not sufficient to ensure that consumers are adequately protected from the risks associated with the use of AI systems in the insurance sector

Finance Watch points out that legislative changes to existing sectoral legislation are needed. When existing sectoral legislation was approved, AI systems either did not exist or were not widely used, so this legislation was not drafted with the deployment of AI and its unique risks to consumers in mind. Moreover, given the considerable risks that AI brings, legally non-binding measures alone are not appropriate.

There is a need for rules specifying the kind of data that can be used and collected by AI-assisted decision-making tools and for the purposes of training AI models

The use of the wrong types of data by AI systems performing insurance risk assessments and pricing carries risks of exclusion, mis-selling and unfair discriminatory pricing practices. In addition, AI systems enable highly granular (personalized) risk assessments, which can leave huge swathes of vulnerable consumers uninsurable. Measures are therefore needed that specify the kind of data AI-assisted decision-making tools can use and collect, and how data may be used, so as to avoid excessive granularity that undermines the “risk-sharing” principle of insurance.

A right for consumers to request human intervention to review decisions made by AI systems should be introduced

Human review of automated decisions generated by AI systems can be an important mitigant against inaccurate and biased decisions that can lead to mis-selling and/or financial exclusion. Moreover, the disclosures provided to consumers should also include information about the categories of data the AI system uses to make decisions that have a material impact on them. Having this information is key to allowing consumers to make informed choices, including whether to contest a decision with the insurance provider.

Read the full response