Flood Risk Modeling Challenges Re/insurers Should Consider

Federico Waisman, senior vice president and head of analytics for Ariel Re, discusses flood risk modeling factors re/insurers must consider.

By Federico Waisman, Ph.D.

This article, republished with permission, originally appeared on pages 28–31 of the March 2019 issue of Claims magazine. ©2019 ALM Media Properties, LLC. Further duplication without permission is prohibited. All rights reserved.

The first generation of U.S. flood models was introduced into the market only last year. Though barely up and running, their developers already face serious challenges stemming from a lack of good data. Good data is essential to accurate probabilistic modeling: it is what enables a well-constructed and well-calibrated model to deliver results that match experience over the years.

Unfortunately for the re/insurance sector, even for mature perils such as Florida windstorm, the data is never entirely adequate. Worse still, in the U.S., flood data is almost non-existent. Making the job even tougher is that a model works only when all other things remain equal. If the severity or frequency of events is changing, modelers may feel their chances of hitting the mark are dwindling.

Weather and climate change

It is easy to presume from hurricanes Harvey and Florence – two of the ten wettest storms ever to make landfall in the U.S. – that climate change is making U.S. hurricanes wetter. Yet what happened in the past may not be a useful guide to the future. Inland flooding due to hurricanes may indeed be growing more severe due to climate change. But, then again, maybe not. The science does not provide a definitive answer.

First, let’s look at some of the evidence that points to an impact. In a November 2018 article in Nature, Patricola and Wehner found that “relative to pre-industrial conditions, climate change so far has enhanced the average and extreme rainfall of hurricanes Katrina, Irma and Maria” and that “future anthropogenic warming would robustly increase the wind speed and rainfall of 11 of 13 intense tropical cyclones.”

The scientists, from the Climate and Ecosystem Sciences Division at Berkeley Lab, calculate that recent hurricanes dropped 10 to 30 percent more rain than similar storms in pre-industrial times; and they predict the rising trend will continue. That may be the case – other academic papers draw similar conclusions.

Another piece of evidence is the fact that the world's oceans are getting warmer (ocean heat fuels tropical cyclones), and so is the atmosphere. Warmer air also holds more moisture: each additional 1°C of warming yields roughly a 7 percent increase in moisture-holding capacity.
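The roughly 7-percent-per-degree figure compounds with each additional degree. As an illustrative sketch (the 7 percent rate is the approximation quoted above, not an exact physical constant), the cumulative effect of several degrees of warming can be computed as:

```python
def moisture_capacity_increase(delta_t_celsius):
    """Fractional increase in the atmosphere's water-vapour
    capacity, assuming ~7% more capacity per 1 degC of warming,
    compounded (a rough Clausius-Clapeyron-style scaling)."""
    return 1.07 ** delta_t_celsius - 1.0

# One degree of warming: ~7% more moisture capacity.
print(f"{moisture_capacity_increase(1):.1%}")   # ~7.0%
# Three degrees: the increase compounds to ~22.5%, not 21%.
print(f"{moisture_capacity_increase(3):.1%}")
```

The compounding matters: under this approximation, three degrees of warming raises capacity by about 22.5 percent rather than a simple 3 × 7 = 21 percent.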

In another Nature article last year, James Kossin showed that the translation (forward) speed of cyclones fell 10% globally between 1949 and 2016, and 16% for North American hurricanes. The longer a hurricane remains over land, the higher the total precipitation.
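The link between forward speed and rainfall can be made explicit with a first-order, back-of-envelope calculation (an assumption for illustration, not from the article): if the rain rate over a fixed point stays constant, total local rainfall scales with the time the storm spends overhead, i.e. inversely with forward speed.

```python
def rainfall_increase_from_slowdown(slowdown_fraction):
    """Fractional increase in rainfall over a fixed point when a
    storm's forward speed drops by `slowdown_fraction`, assuming a
    constant rain rate: residence time scales as 1 / (1 - f)."""
    return 1.0 / (1.0 - slowdown_fraction) - 1.0

# A 10% global slowdown implies ~11% more rain at a given point.
print(f"{rainfall_increase_from_slowdown(0.10):.1%}")
# A 16% slowdown (North American hurricanes) implies ~19% more.
print(f"{rainfall_increase_from_slowdown(0.16):.1%}")
```

Real storms also vary in intensity and size, so this is only a lower-bound intuition for why slower storms tend to be wetter locally.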

Meanwhile, atmospheric blocks – areas of high pressure that cause hurricanes to remain stationary for long periods and therefore dump more rain – are on the increase, possibly driven by climate change.

All of the evidence seems to show that climate change and global warming might be producing wetter cyclones; however, the signal may also reflect a mix of interannual variability and long-term natural cycles.

Maybe wetter. Maybe not.

Harvey was indeed the wettest hurricane on record for the continental U.S., but the third-wettest, Hurricane Easy, dates all the way back to 1950. The second-wettest struck in 1978, and the ninth in 1960. So, we have to criss-cross backwards and forwards through the decades to rank these storms.

Before 1960, measurement was relatively rudimentary and much less widespread, so including pre-war events in the 'wettest-ever' table is problematic. Meanwhile, recent evidence does not constitute a proverbial smoking gun. Florence, eighth on the list, and Harvey form the third pair of consecutive-year storms in the top 10, following 1978 Amelia–1979 Claudette and 1997 Danny–1998 Georges. Roughly two decades separate each pair of extreme wet hurricanes in the top 10 (the top eight, in fact). Again, is this driven by climate change and global warming, or just interannual variability?

Factors other than climate change also drive flooding from hurricane precipitation, and they impact insurance claims. Urbanization is one. Serious historical U.S. flood events date back to the 1920s, but they are now seldom used to calibrate models: cities were a lot smaller then compared to modern metropolises, and the flood defenses in place today were not as widespread. Since then, green areas and natural features have been replaced by concrete and asphalt.

These hard surfaces absorb much less of the total precipitation, increasing the run-off waters that cause flood damage. Constructed drainage helps, but it is not as effective as natural features. Run-off over a smooth surface is also much faster than over grass or forest. More run-off at higher velocities equals more damage.
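Hydrologists often express this with the rational method, Q = C·i·A, where C is a dimensionless run-off coefficient that is high for pavement and low for vegetation. A minimal sketch, using illustrative coefficient and storm values that are assumptions rather than figures from the article:

```python
def peak_runoff_cfs(runoff_coeff, intensity_in_per_hr, area_acres):
    """Rational method Q = C * i * A in its classic US-units form,
    where 1 in/hr of rain over 1 acre produces roughly 1 cubic
    foot per second (cfs) of peak run-off."""
    return runoff_coeff * intensity_in_per_hr * area_acres

# Same 2 in/hr storm over the same 50-acre catchment:
paved = peak_runoff_cfs(0.9, 2.0, 50)   # asphalt, C ~ 0.9
grassy = peak_runoff_cfs(0.2, 2.0, 50)  # grassland, C ~ 0.2
print(paved, grassy)  # 90.0 vs 20.0 cfs
```

Under these assumed coefficients, paving over a grassy catchment multiplies peak run-off from the same storm by roughly 4.5x, which is the mechanism behind the urbanization effect described above.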

As stated earlier, last year saw the introduction of the first generation of U.S. flood models. It is not clear to what extent developers accounted for urbanization and increasing wetness during their initial development, but these changes should be considered as model evolution continues. Indeed, recent events have set model revisions in motion.

Of the five U.S. flood vendor models available in 2018 – developed by AIR, KatRisk, Impact Forecasting, RMS and CoreLogic – only KatRisk and RMS included embedded hurricane risks. CoreLogic has since added hurricane precipitation to its flood modeling tool, alongside an overhaul of the underlying assumptions it deploys. Precipitation from hurricanes is not a source of loss considered by the other two models. KatRisk, meanwhile, has already begun to review and update its model. For all the modelers, Hurricane Harvey has proved a new source of information – as have Florence and Michael.

The need for that storm experience cannot be overstated. The single greatest challenge to accurate calculation of the probable impact of floods, apart from changing weather patterns, is access to a sufficient volume of claims data. Inches of water are not equivalent to actual losses. Models need to take into account what is included in and excluded from policy wordings, consider claims adjustment preferences and general market practices, and incorporate those findings into their measure of how actual events affected covered risks.
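The step from water depth to insured loss is typically handled with a depth-damage curve plus policy terms. A minimal sketch of that idea, with an entirely hypothetical curve and a simple deductible (both are illustrative assumptions, not any vendor's actual calibration):

```python
# Hypothetical stepwise depth-damage curve: flood depth in feet
# above the first floor -> fraction of insured value damaged.
DEPTH_DAMAGE = [(0, 0.00), (1, 0.15), (3, 0.35), (6, 0.60)]

def damage_ratio(depth_ft):
    """Return the damage fraction for the deepest threshold reached."""
    ratio = 0.0
    for threshold, r in DEPTH_DAMAGE:
        if depth_ft >= threshold:
            ratio = r
    return ratio

def modelled_loss(depth_ft, insured_value, deductible):
    """Ground-up damage minus a flat deductible, floored at zero.
    Real policy wordings, exclusions and adjustment practices make
    this mapping far more complex, which is why claims data matters."""
    gross = damage_ratio(depth_ft) * insured_value
    return max(gross - deductible, 0.0)

# Four feet of water in a $500,000 property with a $10,000 deductible.
print(modelled_loss(4, 500_000, 10_000))  # 165000.0
```

Claims data is what lets modelers fit curves like `DEPTH_DAMAGE` to observed payouts rather than guessing them.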

Unfortunately, very little claims data for events prior to 2017 is available. The 2018 hurricanes provide more, and will support model recalibration. Entities including the National Flood Insurance Program and the Federal Emergency Management Agency hold data on residential property damage dating back to the 1960s, but without a Freedom of Information Act request this information has not been made publicly available in a form granular enough to help with the modelers' task. Some claims data for commercial risks exists in the direct and facultative market, but it is not easily available to model developers.

The clamor for data

The factor that has limited the availability of relevant data is that floods have not been modeled widely by re/insurers, so these professionals have had no incentive to collect it. That has changed. Now that U.S. flood models exist in abundance, demands for more and better flood-related data will multiply.

Brokers and carriers will begin to model U.S. flood risk, especially in large commercial classes such as Direct & Facultative (D&F) Property. Today, most D&F books do not gather flood-specific data for modeling, even though good-quality data is provided for wind and earthquake risk. Flood risk factors such as ground- and first-floor elevation, and the distribution of insured values by height (including basements), are not significant for underwriting those other hazards.

Even geo-location has been limited to a simple street address for most D&F insurers, which does not help flood modelers very much. A power plant, hotel or research facility may be set back significantly from the road and sited next to a river. Flood requires a high level of geo-location resolution for accurate modeling, since one or two feet of elevation can make all the difference. Demand for this enhanced risk data will increase rapidly, driven in part by reinsurers spooked by recent flood events. Insurers will have to begin to populate data fields previously left empty.

The demand for data will rise as flood models are released and improved. Both Ambiental Risk Analytics and JBA Risk Management have U.S. tools in development, so the market could soon have a choice of seven U.S. flood models. When earthquake models were first introduced in the early 1990s, it took several years to accumulate the data necessary to get them right. We can expect the refinement of flood modeling tools and techniques to be similarly drawn out, notwithstanding any real or perceived long-term changes to the hazard caused by climate change, urbanization or anything else that might cause floodwaters to rise.

Watch the world’s first comparison of U.S. inland flood risk modeling at argolimited.com/flood-model-showcase.