Is it important for non-life insurance companies that underwrite major industrial and commercial risks individually (individual underwriting, UW) to employ experienced underwriters? YES, most of them will respond - and rightly so.
But how, in times of increasing digitization and use of data, do companies ensure that this experience is systematized, maintained and passed on to new generations of underwriters? How do they ensure consistent monitoring of rate development and good cooperation with the company's analysis and pricing resources? There are many important questions to ask yourself - and in this article we will suggest some answers.
Some time ago, a large group of industrial underwriters from the same company met at a network meeting. The agenda was wide and open, and the chief underwriter for the Swedish part of the business talked about the last two years of success in the Swedish corporate market. An interested underwriter from another part of the company asked: "What is the reason for your great success in the market?" The Swedish chief underwriter hesitated for a moment, then exclaimed: "There are three things: experience, experience, experience." Three months after this meeting, that chief underwriter and his two most experienced colleagues had been hired by a competitor.
Elsewhere in Scandinavia, the management group of a Corporate Department received input from the individual UW units that property rates on international business were "on their way down". When management asked how much the rates in the portfolio had decreased during the last year, there were no consistent data points available from the individual UW units, so it was not possible to create a data-supported picture of the rate development of the risks in the portfolio. It therefore remained uncertain how much the rate development in the market had affected new business and the portfolio.
What can we learn from these stories? At the very least, that immediate results, qualifications and experience in the UW area are not necessarily a guarantee of future success. A UW organization has future-proofed its market success exactly to the extent that it has invested in strengthening and developing its technical underwriting. Neither more nor less.
And what is meant by technical underwriting? It is any structured collection of the underwriters' experience and data that can support a foundation for future UW decisions: data in the form of a systematic and transparent record of the underwriters' decisions, and data for use as decision support for the underwriter going forward. All of this, linked to a common picture of portfolio understanding and performance, and supporting a re-evaluation of the starting point - the input rates.
Why is technical underwriting important? There are two main reasons.
The first reason is this: if underwriters do not have a satisfactory level of technical underwriting, they lack the foundation they need to keep UW discipline at a high level. The alternative to good technical underwriting quickly becomes "market prices", determined on the basis of whatever market information is available from brokers and customers. Lacking the technical foundation, underwriters can, especially in a soft market, become unwitting "victims" of a bottomless, downward spiral, where prices sink lower and lower and undermine the foundation for profitable growth.
The second reason is that, without technical underwriting, a company can become a magnet for anti-selection. A simple and classic example: imagine an insurance company that does not differentiate prices between houses with thatched roofs and houses with tile roofs, even though thatched roofs carry a greater likelihood of fire damage.
Customers with thatched roofs will be more likely to buy insurance from this particular company (rather than from competitors who take the higher risk into account), and this will result in a negative UW result. The company may respond by increasing prices for all properties, which only accelerates the negative selection: customers with "ordinary" risks will tend to move their insurance to another company, and the insurer is left with the "bad" risks. This is a vicious circle which can ultimately lead to the collapse of the insurance company. In contrast, companies with a relatively strong technical foundation for their UW processes will protect themselves from anti-selection.
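The arithmetic of this vicious circle can be sketched with invented numbers: one flat premium for two risk groups, after which the better risks leave for competitors. All figures below are illustrative assumptions, not data from any company.

```python
# Illustrative numbers only: two house types with different expected fire
# claims, both sold at the same flat premium.

def loss_ratio(premium, segments):
    """Expected portfolio loss ratio; segments = [(count, expected_claims)]."""
    total_claims = sum(n * claims for n, claims in segments)
    total_premium = premium * sum(n for n, _ in segments)
    return total_claims / total_premium

flat_premium = 500.0
tile = (900, 300.0)        # 900 tile-roof houses, 300 EUR expected claims each
thatched = (100, 1500.0)   # 100 thatched-roof houses, 1500 EUR expected claims each

before = loss_ratio(flat_premium, [tile, thatched])          # 84%

# Anti-selection: many "ordinary" tile-roof risks move to competitors who
# price the difference, while the thatched-roof risks stay.
after = loss_ratio(flat_premium, [(400, 300.0), thatched])   # 108%
```

With the good risks gone, the same flat premium produces a loss-making loss ratio - the vicious circle described above in miniature.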
Understand the input rates
Input rates are the basis of the UW process and are usually derived from actuarial calculations, without the underwriters being involved as such. This does not necessarily mean that the calculation has been made by an actuary, but there is an expectation that it is based on a mathematical model, and the person performing it should have a certain mathematical background. In other words, the input rate is the premium you would arrive at if you were to price the risk "all over again" and justify each individual price element.
In general, the input premium will be delivered by a rating model that calculates a premium from the available risk factors. The calculation is typically divided into components, and two characteristics of the tariff premium are particularly important:

- The input premium model uses the company's own historical claims data and combines it with relevant market data.
- The input premium is a best estimate and covers all elements of the price.
In order to set the correct price, it is important that the underwriter has insight into how the input premium is constructed and understands the components of the premium and their relative proportions for the particular product being priced.
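As a sketch of what such a rating model might look like, the following multiplicative tariff applies factor relativities to a base rate. The base rate, the factors and all relativities are invented for illustration and are not an actual company tariff.

```python
# Minimal multiplicative rating model: a base rate per sum insured,
# adjusted by risk-factor relativities. All values are assumptions.

BASE_RATE = 0.8  # premium per 1,000 EUR sum insured (assumed)

RELATIVITIES = {
    "construction": {"concrete": 0.9, "brick": 1.0, "wood": 1.4},
    "sprinklers":   {"yes": 0.85, "no": 1.0},
    "occupancy":    {"office": 0.8, "warehouse": 1.1, "production": 1.3},
}

def input_premium(sum_insured, risk_factors):
    """Tariff (input) premium from base rate and factor relativities."""
    premium = BASE_RATE * sum_insured / 1000.0
    for factor, level in risk_factors.items():
        premium *= RELATIVITIES[factor][level]
    return premium

p = input_premium(50_000_000, {"construction": "brick",
                               "sprinklers": "yes",
                               "occupancy": "warehouse"})   # 37,400 EUR
```

In practice the relativities would come from the actuarial models described above; the point here is only the structure of the calculation that the underwriter should have insight into.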
Improve input rates
Can the underwriters help improve input rates? Yes, to a great extent! Unfortunately, in many companies pricing actuaries and underwriters are isolated in their own watertight silos. There should be a regular exchange of data and views between the two parties, and time and resources for them to meet and discuss: What is the large-claim loading in the input rates? What is the target claims ratio? Are there new variables that should be measured? Does the claims picture show a new trend?
As the underwriters use input rates in practice, they will eventually get a clear sense of whether the rates are in line with the risk, or whether a general adjustment is needed for them to provide a reasonable basis for further UW adjustment. The latter is often the case when input rates are loaded with a general surcharge to fund "discounts". Input rates should be aligned as closely as possible with the specific risk.
The technical price

Now we arrive at the actual "nerve", or the "engine room", of the underwriting process. The market places high demands on underwriters to be sharp, competitive and accurate - and precisely in this part of the process it is important to be sharp, precise, and not leave anything to chance.
The technical price is usually based on the input price and is an important element, especially in large corporate insurance.
What, then, is the technical rate? It is actually the input premium, adjusted for specific individual risk factors that the underwriter considers. By nature, these must be factors that are not already included in the calculation of the input premium / tariff.
The underwriter is responsible for setting the technical price. The adjustments to the input price - which give the technical price - can move it both below and above the input premium. When assessing a risk, the underwriter must establish objective criteria for the risk assessment.
Objective Criteria: Work Descriptions / Routines, Construction, Production Processes, Damage Prevention, Claims history, Geography, Individual Endorsements, Exposure, Deductible Size, Risk Type, Maintenance, Risk Management.
The underwriter's assessment of the actual size of the adjustment is often subjective, but it must rest on an objective foundation. The more support given in the form of models or tools, the more you avoid UW adjustments being affected by bias and ending up as a black box that no one really has the energy to open.
There is a pitfall in the preparation of technical prices: the underwriter may adjust for parameters that are already included in the tariff. In special situations there may be a reasonable justification for overriding existing tariff parameters, and it is therefore important that the underwriter documents all UW adjustments, so that the adjustments made are known and traceable. It is also important to have a consistent approach, so that all underwriters treat the same risk features according to the same rules - for example through a uniform exchange of experience and/or logging and data collection of underwriting decisions. Finally, a UW tool with predefined UW adjustment options is an advantage as support for the process.
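A hypothetical helper can illustrate this documented, consistent approach: named UW adjustments are applied on top of the input premium and every step is logged. The adjustment names, sizes and the input premium are all illustrative assumptions.

```python
# Hypothetical UW tool: apply documented adjustments on top of the input
# premium and log every step, so the technical price is transparent and
# every adjustment is known and traceable.

def technical_premium(input_premium, adjustments):
    """adjustments = [(reason, percentage)]; returns (premium, audit log)."""
    premium = input_premium
    log = []
    for reason, pct in adjustments:
        premium *= 1.0 + pct
        log.append((reason, pct, round(premium, 2)))
    return premium, log

# Example adjustments for factors NOT already in the tariff (assumed values).
adjustments = [
    ("superior maintenance programme", -0.05),
    ("exposed location, no tariff variable", +0.10),
]
tech, audit = technical_premium(37_400.0, adjustments)
```

The audit log is what makes the technical price reproducible later - both for the individual case and for portfolio-level analysis of how underwriters actually adjust.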
How much can you rely on claims history?
The big question the underwriter must ask is how much weight can be put on a concrete claims history in the individual case. In recent years, as the tariff has been more or less abandoned as a key tool, more and more emphasis has been placed on a customer's or a portfolio's individual claims history. There are, however, important tools to help underwriters handle the inherent uncertainty in the data being analyzed. Historical data is used to predict what can be expected in the future, but the available data is fundamentally shaped by chance, because the process that generates insurance claims is random. Both the number of claims and their size can therefore vary widely from one period to another, and one cannot expect the future to be the same as the past.
The following general problem occurs very often when an underwriter investigates average claims or average claims ratios over a number of years: the data shows, for example, four years of very good claims history - but a fifth year with a very high claims ratio. The average over the five years may come close to the target claims ratio, but what unfortunately often happens is that someone looking at the result will argue that the last year is exceptionally bad and should be ignored in the calculation. This is a big mistake! When there are only a few data points, this pattern of fluctuation is exactly what one should expect to see: most years will look much better than the average and much better than the individual bad years. That does not mean we should ignore the bad years - it means our data volume is relatively small.
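A quick simulation illustrates this pattern: with random claim counts and heavy-tailed claim sizes, the typical year looks better than the long-run average. All parameters below are illustrative assumptions, not calibrated to any portfolio.

```python
# Simulating yearly claims totals: Poisson claim counts, lognormal
# (heavy-tailed) claim severities. Parameters are assumptions.

import math
import random

def poisson(lam, rng):
    """Poisson random draw (Knuth's algorithm)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def one_year(rng, freq=3.0, mu=10.0, sigma=1.5):
    """Total claims for one year."""
    return sum(rng.lognormvariate(mu, sigma) for _ in range(poisson(freq, rng)))

rng = random.Random(7)
years = [one_year(rng) for _ in range(1000)]
mean = sum(years) / len(years)
median = sorted(years)[len(years) // 2]

# The typical (median) year comes out clearly better than the long-run
# mean - exactly the pattern described above. The occasional very bad
# year is not an outlier to be discarded; it is what pays for the mean.
```

This is why discarding the one bad year makes the remaining average systematically too optimistic.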
If, for example, 1.5 million EUR was paid in claims last year, one might expect the same amount to be paid again this year for the same group of policies. However, the accuracy of that estimate depends on the fluctuations (variability) in the underlying claims. Using credibility theory, one can estimate the randomness built into the data and calculate a numerical weight to attach to the claims history.
When the data is uncertain or sparse, it is important to supplement the calculation with additional information, for example in the assessment of average claims.
The basic formula for calculating credibility-weighted estimates is:

Estimate = Z * [Observation] + (1 - Z) * [Other information]

where Z is always between 0 and 1.
If our body of data is so large that we can give it full weight in making our estimate, we set Z = 1. If the data is not fully credible, Z will be somewhere between 0 and 1: the more credible the data (small fluctuations, many data points), the closer Z approaches 1. If Z is very low, we must rely mainly on our input rates or technical rates to determine the premium.
- Observed data from a smaller dataset can help predict future claims expectations, but there is an element of chance. The smaller the dataset, the more likely the result is misleading and looks better than the true claims average.
- Results from larger datasets have far greater statistical reliability, but the underlying data may have characteristics different from the actual subgroup we are investigating.
- We often need to combine (blend) these two in order to make the best possible use of available data.
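This blending can be sketched with the classical limited-fluctuation ("square-root rule") approach to the formula above. The full-credibility standard of 1,082 expected claims is the textbook value for a +/-5% margin at 90% confidence; the observed and prior claims ratios below are assumptions for illustration.

```python
import math

def credibility_weight(n_claims, n_full=1082):
    """Square-root rule: Z = sqrt(n / n_full), capped at full credibility.
    1,082 claims is the classical standard for +/-5% at 90% confidence."""
    return min(1.0, math.sqrt(n_claims / n_full))

def credibility_estimate(observed, prior, n_claims, n_full=1082):
    """Estimate = Z * observed + (1 - Z) * prior."""
    z = credibility_weight(n_claims, n_full)
    return z * observed + (1.0 - z) * prior

# Assumed figures: the customer's own claims ratio is 45% based on 100
# claims, blended with a portfolio / input-rate claims ratio of 65%.
estimate = credibility_estimate(observed=0.45, prior=0.65, n_claims=100)
# Z is about 0.30, so the estimate lands near 0.59 - much closer to the
# prior than to the customer's own, only partly credible, history.
```

With only 100 claims, the customer's good history moves the estimate, but does not dominate it - which is exactly the protection against over-reacting to a small dataset.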
Premium allocation

When the final technical premium has been determined, it is time for a simple "test": how should the gross premium be broken down into the various premium elements, and does this allocation hold up to reassessment? Here, one should reflect the specific risk against the usual portfolio allocation percentages for the various premium elements:
- Small claims / frequency claims
- Major claims
- Catastrophe claims
- Claims handling costs
- Administration costs
- Reinsurance
- Profit / Capital allocation
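As an illustration of such a breakdown, the sketch below splits a gross premium across typical premium elements. The allocation percentages are purely illustrative assumptions - every company and product has its own split.

```python
# Assumed portfolio allocation percentages for the premium elements
# discussed in this section - every company has its own split.
ALLOCATION = {
    "small claims / frequency claims": 0.40,
    "major claims":                    0.20,
    "catastrophe claims":              0.05,
    "claims handling costs":           0.05,
    "administration costs":            0.15,
    "reinsurance":                     0.05,
    "profit / capital allocation":     0.10,
}

def allocate(gross_premium, allocation=ALLOCATION):
    """Break the gross premium down on the premium elements."""
    assert abs(sum(allocation.values()) - 1.0) < 1e-9, "shares must sum to 100%"
    return {element: gross_premium * share for element, share in allocation.items()}

split = allocate(100_000.0)
# The small-claims amount (40,000 EUR here) can then be compared with the
# customer's observed average of claims below 1 million EUR.
```

Comparing each allocated amount with the corresponding observed history is the "does the money fit" test described below.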
Small claims / frequency claims

Looking at the expected specific allocation as an amount, we can compare it with the actual claims history to get an idea of whether "the money fits" or not - for example, claims below 1 million EUR over the last 10 years. The more exposure (policy years, premium, claims) the customer represents, the more weight can be placed on the individual claims data.
Major claims

These are the major claims that affect the policy, for example claims above 1 million EUR. To get a picture of this, a comparison must be made with the portfolio: where should major claims normally be placed on similar risks? If data is not available for the specific risk, one should consider whether the risk warrants a larger or smaller large-claim allocation than the portfolio average (is there major-claim potential or not?). A good general claims history cannot normally justify a discount on the major-claim element of the premium.
Catastrophe claims

Here it is often worth taking advantage of the reinsurers' models for calculating major weather events, based on the company's specific risk exposure.
Claims handling costs
There are costs associated with all claims, whether you use your own claims handlers or external ones. These are the directly allocable claims-related costs, e.g. costs for claims processing, assessors, auditors, etc. Here it is possible to assess whether the specific case will carry a higher or lower cost load than an average case.
Administration costs

This covers a large number of expenses, including salaries for underwriters, managers and staff functions, as well as marketing, travel expenses and the like. Remember to include both fixed costs and variable (selling) costs. Any bonus or fee to the customer must also be added under this element. It is worth assessing whether the actual customer's sales or administration costs are above or below average.
Reinsurance

Reinsurance should always be considered as a net expense, which must of course be financed through the premium. Only the average net reinsurance cost should be included, not the total gross reinsurance cost.
Profit / Capital allocation

The capital cost is the amount needed to give the allocated capital the return required by the company. This is also called the profit.
In conclusion, it should be emphasized that while premium allocation is an important tool in premium determination, it is important not to adjust for items already assessed in the technical pricing. If the claims history, for example, has already been part of the assessment of the technical premium, it does not help to adjust the small-claims amount again when assessing the premium against the premium allocation.
As the attentive reader may have noticed, this article does not deal with "market discounts" at all. They are not part of the premium allocation, as there is really no room for this cost element in our premium. The more technical underwriting that is done, and the better it works, the less need there is for concepts and discussions about market discounts. In the end, it may come down to a discussion of "going below" the walk-away premium in certain political cases or renewals, and of whether individual cases should contribute only marginally to fixed costs, or whether they should generate the same profit as others.
Monitoring of rate adequacy - an important key figure
There is often great focus on measuring UW results from month to month - and a portfolio can easily look profitable for a while, until the claims suddenly roll in and the picture changes completely in a short time. A portfolio exposed to large claims is by nature volatile (depending on industry), and portfolio quality should not be "measured" on the UW result alone.
Even though actuaries take care of the input premium and underwriters take care of the technical premium, neither is necessarily the ultimate truth. The actual premium is the ultimate truth! This is the premium that ends up in the policy, and it may be "historical" - shaped over time by, for example, segment discounts, group agreement adjustments and market adjustments.
If we want to monitor the true health and profitability of a policy or portfolio, it makes sense to create a measure of rate "adequacy". The closer the ratio Actual Price / Technical Price is to 1.0, the better the profitability of the individual policy or portfolio appears to be.
The ratio between the actual price and the technical price is an important key figure that any underwriter should follow on a regular basis.
The more consistently and transparently the UW process runs from the input premium through the technical price to the actual premium, the better the quality obtained in measuring rate adequacy. Rate adequacy provides great value at three levels: profitability of the individual case, of the portfolio, and of the rating over time (e.g. from one year to another). If rate adequacy develops from 0.92 to 0.98 from one year to the next, the underlying profitability has improved by around 6.5%, and profitability can be expected to strengthen over time.
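A minimal sketch of this monitoring, with assumed policy figures: the ratio of actual to technical premium per policy, the premium-weighted ratio for the portfolio, and the year-over-year development from 0.92 to 0.98 mentioned above.

```python
# Rate adequacy = actual premium / technical premium. The policy figures
# below are assumed for illustration.

def rate_adequacy(actual_premium, technical_premium):
    return actual_premium / technical_premium

def portfolio_adequacy(policies):
    """Premium-weighted adequacy; policies = [(actual, technical)]."""
    total_actual = sum(actual for actual, _ in policies)
    total_technical = sum(technical for _, technical in policies)
    return total_actual / total_technical

portfolio = [(95_000.0, 100_000.0), (240_000.0, 250_000.0), (48_000.0, 60_000.0)]
adequacy = portfolio_adequacy(portfolio)        # about 0.93

# Year-over-year development from 0.92 to 0.98:
improvement = 0.98 / 0.92 - 1.0                 # about 6.5% underlying gain
```

Weighting by premium (rather than averaging the per-policy ratios) ensures that the large cases dominate the portfolio figure, which matches how they dominate the result.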
It is vital for both insurance companies and their employees to build greater knowledge of underwriting and to focus on this important part of the business: to create better tools and to better understand the processes that drive successful development for both the company and the employees.
Time flies, and it can be difficult to keep up with the many tasks and projects - let alone do something extra for increased quality and systematics in the underwriting processes. It is also often a challenge to "lift" new colleagues to the level of those with many years of experience. It is therefore our hope that this contribution may help colleagues around the industry who seek overview, insight and tools to develop their UW toolkit and themselves as underwriters.
It is up to you, the reader, to decide where your focus in developing technical underwriting should lie: developing new input rates? A UW tool that collects rationale and data? Understanding what a claims history can tell you about a risk? Or a tool for premium allocation in individual cases?
One thing is certain: it is no longer an option to let time stand still and do as we have always done. Our competitors will most likely take the new tools and methods into use over time. Good luck to all good colleagues out there!