AI: Friend or foe?

Artificial Intelligence (AI) may bring many benefits, but its sudden climb up the table in one insurer’s annual global risk survey is a warning to both the public and the insurance sector to be wary of some aspects of AI.

AI and big data, as a named risk, has risen from 14th to 4th place in a table of ‘future’ threats to watch, according to global insurance experts. This may reflect the faster-than-expected growth of ChatGPT and artificial general intelligence: 64% of experts now believe there should be a pause in research into AI and other disruptive technologies, 55% see AI as a ‘rapidly emerging risk’, and 34% view it as an existential threat to mankind.

However, the general public has other concerns, with many still worried about the pandemic, ranking it as their second future risk after climate change. The public placed the risk posed by AI and big data only 11th, although this still represents an 8-place rise from 19th position last year.

Half the general public believes AI presents more risk than opportunity. Both the public and experts feel neither the public nor private sector is prepared for the risk that AI presents. Only 7% of experts feel the public sector is ready for AI’s negative impacts, and slightly more positively, 16% feel this is true of the private sector.

In just 12 months, the view on how public authorities should deal with the AI threat has shifted from ‘collaboration with the private sector’ to ‘regulation’ – now favoured by 48%, up 33 percentage points.

Ensuring AI is equitable and free from biased decision-making is a clear goal. Achieving this in insurance claims handling has led to the launch on 31 January 2024 of a voluntary code of conduct, which will govern the development, implementation and use of artificial intelligence in claims handling. The goal is to ensure that AI is implemented transparently, safely and securely.

Its creators recognise AI has the capacity to speed up and streamline the claims lifecycle but equally want to avoid issues of bias in claims. They seek justification behind claims decisions and a form of redress if the claimant believes they were short-changed.

In-built bias could also lead to future legal issues. On the plus side, the capacity to rapidly analyse vast volumes of data could lead to more attractive insurance pricing and a more precise assessment and evaluation of risk. New insurance products could emerge, serving those who have struggled to access insurance support.

Another possible positive impact could result from AI’s ability to identify fraudulent claims, for which all insureds ultimately pay a price in higher premiums. It can also quickly assess whether repair quotes are inflated and whether damage-repair calculations are accurate.

However, AI requires large volumes of data, and with that comes a risk of data breaches. The National Cyber Security Centre (NCSC) is definitely not underplaying the risks presented by AI. It states that AI will “almost certainly increase the volume and heighten the impact of cyber attacks over the next two years,” although impacts will be uneven. This will be achieved through the evolution and enhancement of existing tactics, technologies, and procedures.

It highlights how AI offers an uplift in reconnaissance and social engineering capabilities, making both harder to detect. The NCSC also talks of AI “lowering the barrier” for novice cybercriminals, hackers, or hacktivists, simplifying their ‘job’ for them. It can certainly remove the grammatical mistakes, translation errors and spelling gaffes that currently allow many phishing attempts to be foiled.

A recent BBC News investigation discovered that a new OpenAI product enabled users to build a customised version of ChatGPT “for almost anything,” including highly convincing emails, texts, and social media posts for use in scams and hacks. The BBC easily constructed its own bot, which could have operated effectively in multiple languages as a scam or hacking tool; OpenAI has said it will investigate.

With further warnings about AI’s ability to generate malware that can pass through filters and to identify vulnerable devices running out-of-date software, it is time for UK businesses to increase their cyber-resilience. AI’s risks will have to be offset by enhanced cyber hygiene, more comprehensive staff training, and tech company support. The prediction, however, is for a heightened global ransomware threat over the next two years.

To discuss the relationship between AI and insurance claims handling further, get in touch with L Wood Insurance Brokers.