
Does Getting Cyber Insurance Have to be so Painful?

A short summary of the history of cyber insurance underwriting: what it is, how it got so complicated, and what can be done about it.

Over the past several years, it’s become increasingly difficult to apply for and renew cyber insurance policies. The market for cyber insurance is more volatile than it’s ever been, due to an increase in both the number of claims being filed and the variety of cyber threats that policyholders face. In response to this increased risk, cyber insurers have become more stringent in their underwriting procedures, which has in turn forced policyholders to devote more hours and resources to the approval process – all while remaining uncertain of the outcome.

It’s possible that the policy underwriting process will be streamlined in the coming years, as the cyber insurance sector continues to mature and the pace of change slows. But in the meantime, it’s crucial for organizations to fully grasp the cyber insurance underwriting process and look for ways to minimize the administrative burden on their teams.

A brief-but-fascinating history of cyber insurance underwriting

Cyber insurance is a relatively recent segment of the insurance market. While life and property insurance are each over 250 years old – and even auto insurance has existed for over 125 years – the first recognized ‘internet security liability policy’ was written by AIG in 1997. In fact, the recency of this segment is part of why cyber insurance underwriting can be so complex. In an industry driven by historical data analysis and predicting forward trends, this lack of history is an impediment to modeling future risk expectations.

In the early days, loss projections were based almost entirely on gut instincts and trial and error. Very quickly, that evolved into macro views, where claims expectations were based on overall market losses applied across a pool of insureds. But as cyber events became more common and threat actors more sophisticated, three things began to happen:

  1. Claims exceeded projections, which drove up loss ratios and reduced profitability
  2. Insurers observed that the risk of loss was concentrated among a subset of policyholders, rather than spread evenly among them
  3. Insurers became concerned about systemic or correlation risk, where a loss on one policy increased the likelihood of claims against other policies

In the face of these new challenges, insurers needed to add a micro component to risk determination. Carriers had to fully analyze the risk posture of each individual policy applicant – and could no longer rely solely on an understanding of overall market risk.

Where we are now: The current state of underwriting

Thus began the development of today’s underwriting process. Applications became longer and more complex, requiring additional questionnaires and supplements. Then came detailed conversations, interviews, and site visits. On top of these changes, carriers began utilizing “outside-in” scans of an applicant’s environment, often provided by a third party. And organizations were soon required to meet specific threshold conditions – such as the deployment of multifactor authentication (MFA) and endpoint detection and response (EDR) capabilities – before an insurer would approve a policy.

While these underwriting enhancements added some level of individualized risk assessment, several obvious shortfalls remain. Though IT estates are in a constant state of flux throughout the policy period, the underwriting review occurs at a single point in time. And even with the best of intentions, the individuals answering questions and completing forms may not always have complete enough information to respond accurately. This is compounded by questions structured to allow easy aggregation and analysis, which can preclude or obscure nuanced responses. And while a response may be accurate for the bulk of an estate, it’s usually the weakest link that allows a breach to occur.

At the same time, as insurers collected more data and refined the sophistication of their cyber risk analysis, they adjusted contract terms. This included the obvious increase in policy premiums to reflect greater loss ratios and claims volatility. Further scrutiny was also placed on policy exclusions, particularly around “Act of War” clauses.

In some cases, overall risk acceptance also declined, leading to reduced individual policy limits – and even non-renewals. This last change drove the search for alternative sources of risk absorption, such as deeper reinsurance markets or the capital markets, via Insurance-Linked Securities (ILS), sometimes referred to as catastrophe bonds.

Today, underwriting is a lengthy, manually driven, time-consuming process of data gathering and due diligence. And because of the recent volatility in pricing, policy terms, and limits, the relationships between carriers and policyholders can sometimes be strained. During the underwriting process, each may view the other with some level of suspicion.

Where we want to go: Desired evolution

In our view, the first (and most important) change that needs to come broadly to the market and underwriting process is a mutual view of partnership between insurers and their customers. Each side needs to agree that risk reduction is in their mutual interest – and that robust risk analysis resulting in fair pricing and terms is the desired outcome. It’s in the interest of all market participants for this form of risk transference to be available for those who choose to use it. That requires both acceptable returns to those accepting the risk (the insurers) and acceptable terms to those transferring their risk (the insured).

If this spirit of partnership can be developed, we see the next significant step in enhancing the underwriting process as the sharing of electronically gathered metrics from inside the firewall regarding the cyber posture of the insured entity. Unlike manually completed questionnaires, electronically gathered data can provide a more accurate snapshot of the environment and greatly reduce the time and effort required to convey a clear picture of it. The targeted metrics must be appropriate to the insurer’s risk analysis and provided at an appropriate level of abstraction (i.e., the proportion of an environment displaying a certain characteristic, rather than individual device or host data), as in the sketch below.
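To make that level of abstraction concrete, here is a minimal sketch of how per-device posture data might be reduced to environment-level proportions before it ever leaves the organization. The posture fields, metric names, and 30-day patch SLA are illustrative assumptions for this example only; they don’t represent any insurer’s or vendor’s actual schema.

```python
# Illustrative sketch only: the posture fields, metric names, and 30-day patch SLA
# below are hypothetical assumptions, not any insurer's or vendor's actual schema.
from dataclasses import dataclass


@dataclass
class Endpoint:
    """One device's posture, as it might be collected inside the firewall."""
    edr_installed: bool
    mfa_enforced: bool
    days_since_last_patch: int


def aggregate_posture(endpoints: list[Endpoint], patch_sla_days: int = 30) -> dict:
    """Reduce per-device records to environment-level proportions.

    Only these aggregate ratios would be shared with the insurer;
    individual host data never leaves the organization.
    """
    total = len(endpoints)
    if total == 0:
        return {"endpoint_count": 0}
    return {
        "endpoint_count": total,
        "pct_with_edr": round(100 * sum(e.edr_installed for e in endpoints) / total, 1),
        "pct_with_mfa": round(100 * sum(e.mfa_enforced for e in endpoints) / total, 1),
        "pct_patched_within_sla": round(
            100 * sum(e.days_since_last_patch <= patch_sla_days for e in endpoints) / total, 1
        ),
    }


if __name__ == "__main__":
    fleet = [
        Endpoint(edr_installed=True, mfa_enforced=True, days_since_last_patch=12),
        Endpoint(edr_installed=True, mfa_enforced=False, days_since_last_patch=45),
        Endpoint(edr_installed=False, mfa_enforced=True, days_since_last_patch=7),
    ]
    print(aggregate_posture(fleet))
    # -> {'endpoint_count': 3, 'pct_with_edr': 66.7, 'pct_with_mfa': 66.7,
    #     'pct_patched_within_sla': 66.7}
```

The design point worth noting is that only aggregate ratios are shared; individual host details stay inside the firewall, keeping the disclosure proportional to what the insurer actually needs for risk analysis.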

Ideally, having received the metrics from the applicant, the insurer would respond with details around key findings and prioritized remediation advice. Upon remediation, the applicant could resubmit the metrics for final determination of policy terms. And with a streamlined process removing many of the arduous and time-consuming steps, we can envision a time when data monitoring and remediation happen far more regularly than on an annual basis – serving to further reduce risk, claims, and pricing.

The challenges we must face together

The growing pains encountered by insurers in the early days of the market led to some unfortunate outcomes. As loss ratios soared amid the increased incidence of attacks – particularly, though not exclusively, ransomware – profitability suffered. In response, carriers reduced their risk exposure and increased pricing. The resulting volatility in the cost and availability of cyber insurance created distrust among buyers in the market.

In the current environment, a strategy that attempts to enhance data collection from insureds may prove challenging. As the market settles and risk limits are relaxed somewhat, competition among insurers heightens. This means an insurer attempting to enhance its method of data gathering risks losing business to competitors that make no such demands.

This is a classic “chicken-and-egg” scenario, compounded by buyers’ skepticism of the market. In the face of this, insurers must drive a message of partnership with their customers. The case for better data sharing must be wrapped in a message of overall risk reduction, shared expertise (in both prevention and incident response), and a more efficient underwriting process. Part of this is the notion that electronic data sharing doesn’t aim to collect new or different information from what is already gathered through paper-based processes. Rather, it’s a more efficient, more accurate, and more timely way to gather that data.

Certainly, the overall aim is to drive a better understanding of the risk of loss and tailor policy terms accordingly. But applicants must be persuaded that the process will lead to better outcomes – not penalties for disclosed shortcomings. If they can be convinced that insurers want to work with them toward remediation and risk reduction, perhaps a willingness to participate will follow. A further consideration in this highly regulated product segment is the role that regulators, industry groups, and possibly brokers can play. Such alignment would push toward a common outcome that improves the market for all participants.


To learn more about cyber insurance and strategies for ensuring that your policy pays when it should, check out this eBook: Tanium for Cyber Insurance.

Mark Millender

Mark Millender, Senior Advisor, Global Executive Engagement, joined Tanium in 2020 following a successful career in banking spanning more than 20 years. Most recently Head of Diversified Industrials at Lloyds Bank, Mark was previously a Managing Director at The Royal Bank of Scotland and began his banking career with 11 years at Bear Stearns.
