
The Benefits and Ethical Considerations of Artificial Intelligence in Insurance

Jean Rea, Partner at KPMG Ireland, looks at the growing role of Artificial Intelligence (AI) in the insurance industry, and at the ethical considerations insurers need to examine when exploring its use.

The insurance industry has a deep heritage in data analytics: data has always been collected and processed to inform underwriting decisions, price policies, settle claims and prevent fraud. It is therefore not surprising that many opportunities are arising for the use of big data and analytics, including AI, in insurance.

Big data and AI have a broad range of uses across the insurance value chain: for example, they can be used to improve product offerings for consumers, develop more targeted and personalised marketing campaigns, and improve customer experience by automating and digitalising the customer journey.

The Benefits of AI

Greater use of big data and AI in insurance offers a broad range of benefits to both consumers and insurers, including an enhanced ability to understand, model and assess risks, the development of new or enhanced products, automation, and a lower cost of serving customers.

Big data and AI enable insurers to further enhance their risk assessment capabilities, so consumers previously perceived as higher risk, such as younger drivers, may gain access to more affordable insurance.

  • AI can help facilitate the development of novel insurance products such as usage-based insurance, for example the use of vehicle telematics devices for pay-how-you-drive or pay-as-you-drive products. Such products are more tailored to consumers’ needs.
  • Claims processes can be improved: for example, image recognition can be used to automate and speed up the processing of damage-related claims, and drones can be used for remote claims inspections. Claims can then be paid faster and cost less to settle, which should in turn reduce premiums.
  • Robotic process automation and optical character recognition can be used to improve insurance processes such as underwriting and claims, which will enhance customer experience and reduce costs.
  • Similarly, the use of natural language processing, voice recognition and chatbots can facilitate better communication and access to insurance services.

Are There Downsides to Introducing AI?

On the other hand, increased personalisation and a greater ability to understand and model risk could adversely impact the affordability and availability of insurance for some customer segments. However, insurers operate in a highly regulated environment which mitigates this. For example, the Consumer Protection Code requires them to place the best interests of their customers at the centre of their business model and their decisions.

Other challenges relate to the complexity and potential lack of transparency or explainability of AI algorithms, in particular where the use cases could have a material impact on consumer outcomes or on insurers themselves. In such cases, heightened governance and oversight of algorithms can help mitigate these challenges.

Ethical Considerations for the Use of AI

In addition to potential issues around the affordability and availability of insurance for certain cohorts, other key challenges associated with the use of AI include ethical issues around the fairness of data use, as well as the transparency, performance, explainability and auditability of certain advanced analytical approaches.

However, insurers operate within a comprehensive legislative and regulatory environment which also applies to the use of AI within organisations. This includes the Solvency II Directive, the Insurance Distribution Directive, the General Data Protection Regulation and the Consumer Protection Code.

EIOPA, the EU agency tasked with carrying out specific legal, technical or scientific tasks and giving evidence-based advice to help shape informed EU policies and laws, has considered the ethical challenges of using AI in insurance and has developed a suite of governance principles intended to mitigate risks arising from AI use cases that could have a high impact on consumers or on insurers themselves.

AI and the Bias Risk

Like other approaches that rely on data to build and parameterise models, AI can embed biases. There are many forms of bias, and it can be introduced throughout the model development cycle. For example, the data on which models are built and trained may not be representative of the model’s intended purpose and hence be biased; the variables used in the model, or complex combinations of them inherent in the model, could be closely linked to discriminating factors (known as proxy bias); and the biases of the model developers could be reflected in the model design and build.
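As a loose illustration of a proxy bias check, the sketch below (Python with pandas; all column names and figures are hypothetical, not drawn from the article) measures how strongly each candidate rating factor correlates with a protected attribute such as being under 25. A strong association does not prove bias, but it flags the variable as a possible proxy worth reviewing.

```python
import pandas as pd

# Hypothetical policy data: column names and values are illustrative only.
policies = pd.DataFrame({
    "age_band_under_25": [1, 1, 0, 0, 1, 0, 0, 1],   # protected attribute (illustrative)
    "annual_mileage":    [9000, 12000, 8000, 15000, 11000, 7000, 16000, 10000],
    "engine_size_cc":    [1000, 1200, 1600, 2000, 1100, 1400, 2200, 1000],
    "postcode_risk":     [0.9, 0.8, 0.3, 0.4, 0.85, 0.35, 0.5, 0.75],
})

protected = "age_band_under_25"
candidate_features = [c for c in policies.columns if c != protected]

# Absolute correlation of each candidate rating factor with the protected
# attribute. High values flag potential proxies for further review.
proxy_check = policies[candidate_features].corrwith(policies[protected]).abs()
print(proxy_check.sort_values(ascending=False))
```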

AI faces additional challenges in this space due to the complexity and opacity of its algorithms, which can make results less transparent and explainable and hence make potential sources of bias harder to detect.

The risk associated with bias will depend on the materiality of the use case and the impact it could have on consumer outcomes or on the insurer itself. For example, using AI to automate back-office operations is likely to have a lower impact than using advanced analytical approaches to set premiums for customers.

Mitigation is an area of active and rapidly evolving research and development. For higher-impact use cases, potential mitigants include using more explainable and transparent algorithms, developing metrics to monitor the fairness of model outcomes (as sketched below), and ensuring the data used to train the models is accurate, complete and appropriate for its intended use.
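As one example of such a fairness metric, the sketch below (Python; the data, group labels and decision outcomes are hypothetical) computes the gap in automated-approval rates between two customer groups, a simple demographic-parity-style check an insurer might track over time for a higher-impact use case.

```python
import numpy as np

def approval_rate_gap(approved: np.ndarray, group: np.ndarray) -> float:
    """Demographic-parity-style gap: difference in approval rates between
    group 1 and group 0. Values near 0 indicate similar outcomes."""
    rate_g1 = approved[group == 1].mean()
    rate_g0 = approved[group == 0].mean()
    return float(rate_g1 - rate_g0)

# Hypothetical model decisions (1 = claim auto-approved) and group labels.
approved = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
group    = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

gap = approval_rate_gap(approved, group)
print(f"Approval rate gap between groups: {gap:+.2f}")
```

Monitoring such a metric regularly, alongside data quality checks, is one practical way to make fairness measurable rather than assumed.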

Source: KPMG Ireland
