Advanced data science is a valuable tool for addressing significant actuarial function changes. To find out how insurance companies in Switzerland are harnessing its benefits, we at Synpulse joined forces with Dupro to do an in-depth benchmark study of actuarial functions at nine life and/or non-life organisations in Switzerland. The findings will serve as a valuable practical basis for insurers wanting to boost their business by taking data science to the next level.
Significant changes in technology, regulation, markets, customer behaviour, the environment, and other global trends are influencing the actuarial function. The increasing availability of big data and the use of data science are changing the way insights are derived and continue to shape the actuarial operating model.
Data science involves techniques such as data management, data visualisation, predictive analytics, and machine learning. It’s applied in many industries, such as insurance, to extract value from data. Teams with a combination of domain expertise and a knowledge of programming, software tools, mathematics, and statistics are able to extract rich insights from data that can be translated into tangible and quantifiable business value, such as market intelligence, risk assessment, and executive decision-support.
We at Synpulse wanted to find out the extent to which actuarial functions in Switzerland are actually harnessing the business benefits of data science. To do this, we joined forces with Dupro on a benchmarking exercise. This involved structured interviews to investigate how the industry is utilising data science, with a focus on applications and use cases within an actuarial or broader insurance context.
The discussions revolved around the strategy and operating model that companies have adopted within data science, including the types of tools and techniques they’re using. We also included themes around the types of data, the technical nature of the machine learning techniques and software being used, and wider considerations including risks, risk management, governance, and ethics related to data science. Another important focus was to look at trends impacting the skillset required by those working within data science.
Our sample consisted of nine life and/or non-life insurance organisations based in Switzerland. We interviewed representatives from first- and second-line departments – mainly heads of actuarial departments – including the data and analytics corporate function.
Data Science Priority and Use Cases:
Data science isn’t at the top of the agenda for many actuarial departments, although four said that it was becoming a higher or even a very high priority. One respondent described the priority of data science within their department as low, with no value-adding use cases identified so far. Data science is used in diverse areas of insurance: underwriting and pricing, reserving and reporting, risk management, product management, marketing, and claims management.
The use cases for data science are currently observed more in the broader organisation and include customer targeting, cross-selling/up-selling, understanding customer behaviour, and claims management. The extent to which data science is applied in these use cases varies depending on the nature of the department, the data science problems it faces, and the specific techniques and skills applied. Nevertheless, there are several domains within actuarial departments where data science is applied. The perceived value correlates with the required effort: pricing and telematics are seen as the areas with the highest value, but they also require the most effort (see figure below).
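To make the pricing use case concrete, here is a minimal sketch of a claims-frequency model of the kind commonly used in non-life pricing, written in Python with scikit-learn. The rating factors, coefficients, and data are entirely synthetic and hypothetical; this is an illustration of the technique, not a model drawn from the survey.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
n = 5000

# Hypothetical rating factors for a motor portfolio
age = rng.integers(18, 80, n).astype(float)
vehicle_power = rng.integers(1, 10, n).astype(float)
exposure = rng.uniform(0.5, 1.0, n)  # policy years in force

# Synthetic claim counts: frequency depends on the rating factors
true_rate = np.exp(-2.0 + 0.01 * (60 - age) + 0.08 * vehicle_power)
claims = rng.poisson(true_rate * exposure)

X = np.column_stack([age, vehicle_power])

# Poisson GLM for claim frequency: model claims per unit of exposure,
# weighting each observation by its exposure
model = PoissonRegressor(alpha=1e-4)
model.fit(X, claims / exposure, sample_weight=exposure)

expected_freq = model.predict(X)  # expected claims per policy year
```

The fitted frequencies would then be combined with a severity model to produce a risk premium; telematics data would typically enter the same framework as additional rating factors.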
Centralised vs. Decentralised:
Different companies adopt different types of structures for their data science capabilities. Data science is usually organised at the business line level, with a mix of centralised and BU-specific experts and/or a centralised data science function. However, the trend over the past few years, particularly at larger organisations, has been to centralise (part of) the initiatives in a data science function.
Level of Skills/Competence:
The majority of respondents rated their department’s competence in data science as high, very high, or average. We observed that the level of competence usually correlates with the size of the team. More than half of the participants have been using data science for more than a year, while two are not currently using it. It emerged that the traditional skillset of actuarial functions does not easily lend itself to making full use of data science techniques: on average, these functions have weaker computer science, data architecture, and information technology skills. When asked to rank the capabilities of the people within the function, traditional actuarial fields such as mathematics and statistics scored high on average, as did business domain knowledge, while computer science, information technology, and data architecture capabilities scored only medium. Given the type of departments we surveyed, this was more or less to be expected.
When asked about the data science challenges facing the insurer in general, two of the top five answers (figure below) relate to data: data accuracy issues and difficulty in accessing data. Most actuarial departments also cited the challenge of sourcing, obtaining, and aggregating data from multiple internal sources, and the resulting drag on operational processes, as one of the greatest barriers to wider application of data science. Some respondents commented that the data accuracy issues stemmed from decentralised data management and from data coming from different internal and external sources and systems. Respondents who do not (yet) use data science say they face issues related to an insufficient data basis, leading to reprioritisation of such initiatives. This challenge, together with a partial lack of internal talent and expertise, was named as one of the main barriers to adopting data science. Getting the right data and the right people to gain insights from data, and proving the value of data science initiatives, is therefore key to successfully harnessing actuarial data science.
Tools and Techniques:
To understand the tools and techniques in use in actuarial departments, we focused on two distinct components of the data science workflow: (1) data management, analytics, and visualisation, and (2) data modelling. Excel is still in widespread use for data management and visualisation. The majority of our respondents use open-source programming languages for building data science models. R is the clear leader, but Python is also widely used in actuarial departments. We observed broad usage of advanced machine learning techniques in diverse settings. Several companies (mostly P&C) already show a higher acceptance of data science techniques, and therefore do more experimental testing and analysis of different techniques alongside classic statistics.
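As an illustration of the kind of experimentation described above, comparing an advanced machine learning technique against a classic statistical baseline, here is a minimal Python sketch using scikit-learn. The data is synthetic and the non-linear relationship is chosen purely to show why such comparisons are worth running; it does not come from the survey.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
X = rng.uniform(0, 1, size=(n, 3))

# Deliberately non-linear target: a plain linear fit will miss the structure
y = np.sin(2 * np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Classic statistical baseline vs. gradient-boosted trees
linear = LinearRegression().fit(X_tr, y_tr)
gbm = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

mse_linear = mean_squared_error(y_te, linear.predict(X_te))
mse_gbm = mean_squared_error(y_te, gbm.predict(X_te))
```

Held-out error comparisons of this kind are a simple, reproducible way for a department to decide whether a machine learning technique adds value over its existing statistical toolkit before committing to it in production.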
From a practical business point of view, the most important conclusions of our survey are as follows:
Unlocking the full potential of existing internal data is key for data science applications. Another clear finding of our study is that operational inefficiencies can prevent the actuarial function from carrying out analysis that actually adds value. Departments can improve efficiency by using automation experts or technology to reduce manual processes and automate regular, routine-based activities. The upside of greater automation is increased effectiveness: actuaries are freed up for value-adding analysis and insights, advanced data science skills can be used to refine analysis and prediction, and ways of working improve through other transformation initiatives, for example IFRS 17 or Solvency II.
If you’d like to go further into the possibilities of data science for your actuarial function, check out the executive summary or contact us for the full report as well as a more in-depth conversation.