By Alex Wright
This article was republished with permission from Risk & Insurance.
Greater validation of inland flooding data is required to improve the accuracy of U.S. flood models.
That was one of the key conclusions from presenters at the first event of its kind comparing U.S. inland flood risk models, hosted by Ariel Re at Lloyd’s of London in November. The event, led by Dr. Federico Waisman, senior vice president, head of analytics, Ariel Re, showcased the U.S. flood models of four leading vendors: AIR, CoreLogic, KatRisk and Impact Forecasting.
RMS withdrew from the presentation because its model was not ready in time.
Compared to their earthquake and hurricane counterparts, flood models for U.S. risks are still in their relative infancy, relying largely on limited National Flood Insurance Program (NFIP) data. But in the wake of the devastation of thousands of homes and businesses caused by the effects of Hurricanes Harvey, Irma and Maria, the need for better flood modeling has arguably never been greater.
“Having more validation data would always be helpful,” said KatRisk’s chief technology officer and co-founder, Stefan Eppert.
“While in the U.S. we have got very good hazard data and excellent organization of data, on the loss side it would be nice to have generally agreed standards.”
Cagdas Kafali, senior vice president, research and modeling, AIR Worldwide, said there also needs to be more focus on commercial data sets.
“There is some residential data available, but it’s also going to be important to validate these models’ vulnerability when it comes to the commercial risks,” he said. “The problem is that there is a lot of engineering assumptions in the absence of actual claims data.
“It’s also very difficult to get to peril-specific and coverage-specific claims when it comes to multi-location policies with sublimits. Hopefully that data will be available in the future and that will help to enhance the models.”
Aon Benfield Impact Forecasting’s head of research and development, Siamak Daneshvaran, said that a bigger problem is capturing the different types of flooding risk.
“Flood risk is spread all over the country,” he said.
“It’s not only in river bank areas; you can have flood in pluvial regions that might be outside of the flood plain maps that FEMA [Federal Emergency Management Agency] is producing.
“Therefore, the models need to laser in and generate more events to define the correct flood plain maps. Also, to capture events like Hurricane Harvey, where there was a large loss in downtown Houston, we really need to understand pluvial processes, drainage and all of those issues.”
In terms of available claims data, all four models draw on NFIP data to varying degrees, the panel noted.
CoreLogic and KatRisk both use a combination of NFIP and company claims data, while Impact Forecasting supplements that with Aon’s own data to calibrate its model.
AIR’s Kafali, however, warned that when using NFIP data, NFIP policy limits should not be treated as replacement values.