2016-12-14

The insurance industry is becoming increasingly focused on the digitalization of its business processes. Many factors are driving this digitalization, but it's clear that a reliable and meaningful database is the basic prerequisite for a successful digitalization strategy.

Insurance companies are increasingly prioritizing digitalization, not because the topic happens to be fashionable, but because the following changes in the insurance business process environment are forcing companies to act:

Knowledge about customer behavior is becoming even more important.

Many insurance companies have already invested in IT solutions for customer intelligence and customer analytics in recent years. In the future, the identification of current customer requirements will be essential for a successful sales strategy and long-term customer loyalty. This makes more accurate predictions of customer behavior even more important:

How has the customer (possibly including his peer group) behaved in the past? What were the reasons for this? How will the customer behave in the future? Which current conditions are relevant for this?

A 360-degree customer profile today includes all available customer and business party information: data from the party, core, debt collection and claims management systems, plus the business analytics data warehouse, as well as available external information (social media, Google Maps, blogs and more).
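As a rough illustration of what assembling such a profile involves, the sketch below joins extracts from several source systems into one row per party. The table names, columns and the shared party_id key are assumptions chosen for the example, not a prescribed architecture.

```python
import pandas as pd

# Hypothetical extracts from a party system, a claims system and an external
# source; the column names and the shared party_id key are illustrative.
party = pd.DataFrame({"party_id": [1, 2], "name": ["A. Smith", "B. Jones"]})
claims = pd.DataFrame({"party_id": [1, 1, 2], "claim_amount": [1200, 300, 5400]})
external = pd.DataFrame({"party_id": [2], "social_sentiment": [0.7]})

# One row per party: master data, aggregated claims history and whatever
# external signals are available.
profile = (
    party
    .merge(claims.groupby("party_id", as_index=False)["claim_amount"].sum(),
           on="party_id", how="left")
    .merge(external, on="party_id", how="left")
)
print(profile)
```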

New distribution channels are replacing traditional ones.

Customers today expect their insurance companies to be accessible at all times (online, call centers, local personnel, etc.). As a result, online sales and traditional distribution channels (agency, broker, sales department) are merging, and multi-channel management creates difficult challenges for insurance IT.

Competitive conditions are changing.

New digital business models ('insurtechs') are pushing into the insurance market with innovative offers that insurance companies must react to:

Online brokers and comparison portals are gaining market share in new business.

Peer-to-peer (P2P) insurers are trying to replace traditional insurance models in some areas. For example, Friendsurance is taking a social community approach.

Cost and customer requirements are driving product innovation.

Falling interest rates and changing customer requirements call for the development of new and innovative insurance products:

Customizable insurance policies, e.g., a life insurance policy with flexible investment options for the savings portion of the premium.

Linking the internet of things (IoT) and insurance products will grow and is already prevalent today in auto, home, life and health insurance.

Cost optimization is required to reduce loss ratios and administrative costs.

Altered basic conditions (e.g., rising loss ratios, declining interest rates) are forcing cost reductions, for example through new analytical IT solutions and process automation:

Claims prediction using predictive modeling for property insurance, ideally in combination with IoT solutions for claim avoidance.

Improved fraud detection for all lines of business using analysis tools and analytical methods.

Optimization of management functions by automating business processes (for example, in application review).

Legal requirements demand a fundamental modernization of insurance IT processes.

Regulatory and legal requirements are defining new standards for the IT systems of an insurance company, for example:

Solvency II includes stricter requirements for the transparency of the IT processes involved.

The introduction of the EU General Data Protection Regulation (GDPR) requires fundamental adjustments to the data management processes relating to the processing of personal data in information systems.

Implementation of a 360-degree customer and party profile

All of the digitalization drivers listed above represent substantial data management challenges for IT departments throughout the insurance industry. Most insurance companies have by now implemented 'party systems', which uniquely identify a business party regardless of the role he or she plays in the insurance business process (for example, customer = policyholder, contributor, intermediary/producer, external service provider).

In practice, however, many companies still hold duplicate and incorrect party information due to data quality problems and IT structures that have grown organically over time. Data cleansing is therefore a must (see also Don't let your data warehouse be a data labyrinth!).
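As a minimal sketch of what such cleansing has to detect, the following snippet flags candidate duplicate party records using a simple fuzzy name comparison; the records and the matching rule are invented for illustration, and real cleansing tools use far more sophisticated matching logic.

```python
from difflib import SequenceMatcher

# Illustrative party records containing a duplicate caused by a spelling variant.
parties = [
    {"id": 1, "name": "Mueller, Hans", "birthdate": "1970-03-01"},
    {"id": 2, "name": "Muller, Hans",  "birthdate": "1970-03-01"},
    {"id": 3, "name": "Schmidt, Eva",  "birthdate": "1985-11-23"},
]

def similar(a: str, b: str, threshold: float = 0.85) -> bool:
    """Fuzzy string comparison as a stand-in for a proper matching engine."""
    return SequenceMatcher(None, a, b).ratio() >= threshold

# Flag candidate duplicates: same birthdate and very similar name.
for i, p in enumerate(parties):
    for q in parties[i + 1:]:
        if p["birthdate"] == q["birthdate"] and similar(p["name"], q["name"]):
            print(f"possible duplicate: party {p['id']} <-> party {q['id']}")
```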

A further important challenge is to use not only internally stored customer information, but also to evaluate externally available data and store the resulting insights in suitable structures. This includes geo-information (for example, from Google Maps) as well as information from social networks and blogs. Since this information is generally not well structured, new data management concepts are required (for example, based on Hadoop or HANA), as well as access mechanisms that can be incorporated into ETL processes and analytical evaluations.
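The snippet below is a small sketch of the ingestion side of that problem: a semi-structured payload from an assumed external source is flattened into fields that an ETL process could load alongside internal party data. The structure and field names do not correspond to any real API.

```python
import json

# Assumed payload from an external source (e.g. a geo or social media feed);
# the structure and field names are illustrative, not a real API schema.
raw = '{"party_id": 2, "location": {"lat": 52.52, "lon": 13.40}, ' \
      '"posts": ["moving house next month"]}'

record = json.loads(raw)

# Keep only the parts of the unstructured payload that the downstream
# data model actually needs.
flat = {
    "party_id": record["party_id"],
    "lat": record["location"]["lat"],
    "lon": record["location"]["lon"],
    "mentions_relocation": any("moving" in post for post in record["posts"]),
}
print(flat)
```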

Implementation of multi-channel management

The requirements resulting from multi-channel management are likely to put the final nail in the coffin of the data silos in insurance companies. Until now, it was tolerable that internal departments, brokers and agencies worked with different databases for their sales activities (in some cases still line-of-business-oriented); for a multi-channel strategy, this becomes a knockout criterion.

New concepts (such as a customer decision hub, which provides all channels with the complete customer history in a consolidated, quality-assured version) will have to replace existing systems. The basis for this is, for example, a business analytics data warehouse with a 360-degree view of all party information.
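A minimal sketch of that hub idea, with an in-memory dictionary standing in for the business analytics data warehouse: every channel queries the same service and receives the same consolidated customer view. All names and fields are assumptions for illustration.

```python
# The dictionary stands in for a consolidated, quality-assured 360-degree
# customer view held in a business analytics data warehouse.
CONSOLIDATED_PROFILES = {
    42: {"name": "A. Smith", "policies": ["motor", "household"], "churn_score": 0.12},
}

def get_customer_view(party_id: int, channel: str) -> dict:
    """Return the same consolidated view to every sales channel."""
    profile = CONSOLIDATED_PROFILES.get(party_id, {})
    # The requesting channel is recorded for analytics; the data is identical.
    return {"channel": channel, **profile}

print(get_customer_view(42, "call_center"))
print(get_customer_view(42, "online"))
```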

Pressure from new competitors

New competitors are not only a threat to insurance companies, but also an opportunity:

P2P insurers will likely cover larger risks through cooperation partners (i.e., established insurance companies).

Online comparison portals prefer high-value products, which, of course, can also be offered by established insurers.

In both cases, it's a great advantage for insurance companies if they're able to exchange data with insurtechs' IT systems via standard interfaces. Insurance companies that already cooperate with insurance brokers will have an advantage over companies with exclusive distribution channels, as will companies that have already implemented standard structures such as ACORD in their data systems (see also Advantages of a standard insurance data model).
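Purely to illustrate the idea of a standard interface, the sketch below maps an internal policy record onto a generic XML exchange structure. The element names are not taken from the actual ACORD standard; they only show the mapping step that a real standards-based interface would perform.

```python
import xml.etree.ElementTree as ET

# Internal record and target element names are illustrative assumptions.
internal_policy = {"policy_no": "P-1001", "holder": "B. Jones", "premium": 420.0}

root = ET.Element("PolicyTransfer")
ET.SubElement(root, "PolicyNumber").text = internal_policy["policy_no"]
ET.SubElement(root, "PolicyHolder").text = internal_policy["holder"]
ET.SubElement(root, "AnnualPremium").text = str(internal_policy["premium"])

# Serialized message that could be handed to a partner system.
print(ET.tostring(root, encoding="unicode"))
```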

Introduction of product innovations

Customizable insurance products, which in some cases also require a link to raw data and/or scoring results from IoT applications, place extended requirements on the data management concepts for insurance policy and risk information.

This affects core and rate-making systems as well as existing analytical systems, e.g., data warehouse implementations (see also Big data, IoT and data warehouse?).

Without extensive modernization of the data systems in question, insurance companies will find it very difficult to introduce product innovations successfully.
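As a sketch of what such an extension might look like in a data model, the snippet below adds an IoT device reference and a scoring result to an otherwise conventional policy structure; the field names are assumptions, not an existing schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Policy:
    policy_no: str
    line_of_business: str
    annual_premium: float
    iot_device_id: Optional[str] = None     # link to the raw telemetry stream
    iot_risk_score: Optional[float] = None  # scoring result fed back from analytics

# A telematics motor policy carrying its IoT linkage alongside classic attributes.
telematics_policy = Policy("P-2002", "motor", 380.0,
                           iot_device_id="OBD-77F3", iot_risk_score=0.23)
print(telematics_policy)
```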

Cost optimization through claim prediction and improved fraud detection

IT optimization concepts around claims inevitably lead to the topic of data:

Claims prediction using predictive modeling is only as good as the underlying database. If historical claims data cannot be evaluated correctly, the analytical models will not yield accurate scoring values. Data quality is therefore an essential prerequisite.
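A minimal sketch of such a model, assuming scikit-learn is available; the features and the tiny inline dataset are stand-ins for a cleansed historical claims database, which in practice is the hard part.

```python
from sklearn.ensemble import GradientBoostingRegressor

# Feature rows: [building_age_years, sum_insured_kEUR, prior_claims]
X = [[10, 250, 0], [35, 180, 2], [5, 400, 0], [50, 120, 3], [20, 300, 1]]
y = [0.0, 1.8, 0.2, 3.5, 0.9]  # historical claim amounts in kEUR

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Predicted claim amount for a new risk; only as reliable as the data above.
print(model.predict([[30, 200, 1]]))
```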

Fraud detection using analytical methods requires the unambiguous identification of all involved persons as well as all claims objects over a possibly long claims history. This calls for data structures in the underlying data sources (for example, a data warehouse) that allow this identification in the data model and in the data contents, which is a demanding requirement for data management.
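The sketch below illustrates the identification requirement: claims are linked via shared, already-deduplicated party and object keys so that clusters across a longer history become visible. The data and keys are invented for the example.

```python
from collections import defaultdict

claims = [
    {"claim_id": "C1", "party_id": 7, "vehicle_vin": "VIN-A"},
    {"claim_id": "C2", "party_id": 9, "vehicle_vin": "VIN-A"},
    {"claim_id": "C3", "party_id": 7, "vehicle_vin": "VIN-B"},
]

# Index claims by every identifying key they share.
links = defaultdict(set)
for c in claims:
    links[("party", c["party_id"])].add(c["claim_id"])
    links[("vehicle", c["vehicle_vin"])].add(c["claim_id"])

# Any key appearing in more than one claim is a candidate for closer analysis.
for key, claim_ids in links.items():
    if len(claim_ids) > 1:
        print(key, sorted(claim_ids))
```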

Modernization of IT due to new regulations

The increasing regulatory and legal requirements for insurers compel companies to fundamentally renew their data processes. As a rule, the requirements cannot be met by simply expanding or adapting software programs and IT infrastructures, especially ones that have grown piecemeal over decades and are often outdated. A fundamental modernization of the data management concepts is required:

Solvency II requires that the underlying rules and all source information (data fields and source systems) be documented at all times and for every information component of the extensive reports. The resulting effort to change and extend proprietary developments is immense, and the requirements are generally not met by existing IT solutions. Therefore, new data management solutions are necessary that provide a comprehensive impact analysis, ideally automated and supported by a metadata solution and a business glossary.
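A highly simplified sketch of the lineage idea behind that documentation: if every report field is mapped to its source fields, impact analysis becomes a lookup rather than code archaeology. The report and field names below are invented.

```python
# Invented mapping from report components to their source fields.
LINEAGE = {
    "scr_market_risk": ["dwh.assets.market_value", "dwh.assets.rating"],
    "technical_provisions": ["dwh.policies.reserve", "dwh.claims.open_amount"],
}

def impacted_reports(source_field: str) -> list:
    """Which report components are affected if this source field changes?"""
    return [report for report, sources in LINEAGE.items() if source_field in sources]

print(impacted_reports("dwh.assets.rating"))  # -> ['scr_market_risk']
```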

With the EU General Data Protection Regulation, insurance companies must ensure that personal data in their information systems is no longer displayed to all users in clear text. The necessary anonymization or pseudonymization routines require either complex adaptations to existing ETL programs or the use of data management solutions designed for this purpose, which can perform this task semi-automatically for any data source. In many cases, it is also a particular challenge simply to recognize where personal data is stored at all in the sprawling data landscape of an insurance company. Here, too, an intelligent data management solution can help a lot.
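As a minimal sketch of a pseudonymization routine, the snippet below replaces personal identifiers with a keyed hash so that records remain linkable for analytics without exposing clear text. Key management, reversibility where legally required, and the discovery of affected fields are deliberately left out; a production solution must cover them.

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-key"  # assumption: supplied by a key vault

def pseudonymize(value: str) -> str:
    """Keyed hash so the same input always maps to the same pseudonym."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"name": "Hans Mueller", "birthdate": "1970-03-01", "policy_no": "P-1001"}
protected = {key: pseudonymize(val) if key in ("name", "birthdate") else val
             for key, val in record.items()}
print(protected)
```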

Summary

It is clear that implementing digitalization strategies places ever more demanding requirements on data management processes and the underlying data structures. This is why it is becoming increasingly important for insurance companies to put new, powerful and flexible data management concepts in place.

Learn more about the ability of SAS to help the insurance industry meet these digitalization requirements through data management solutions and optimized data architectures.
