How Artificial Intelligence, the Internet of Things and a Stream of Data Help Insurers and Others Assess Risk

In Austin, a panel hosted by Argo Digital discussed the proliferation of data, the rise of artificial intelligence, and how they can help assess risk and improve business.

Risk assessment has always been data-driven. Today there’s more data than ever, a constant stream generated by everything from the Internet of Things to drones and mobile tech. And all this data can improve the way we assess risk.

That was one of the key points discussed by a panel in Austin, Texas, on March 12. “How Data and Machine Learning/AI Affect Risk Transfer in the 21st Century” was hosted by Argo Digital and moderated by Jason Abbruzzese, business reporter at Mashable.

Data that’s more ‘intimate’

Speaking to attendees at Half Step, a venue in downtown Austin, Andy Breen, senior vice president for Argo Digital, pointed to a nearby set of stairs.

Just a few years ago, Breen said, assessing a slip-and-fall risk would have meant reviewing fixed data points, such as a business's employee count and revenue.

But, he asked, “is that really the best way to assess the risk of whether or not someone’s going to slip and fall on that staircase over there? I don’t think so. Now what we’re doing is deploying things like sensors and drones and other pieces of data and IoT types of things so we can actually get much more intimate types of data.”
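
As a loose illustration of what that more "intimate" data might look like (a hypothetical sketch, not a description of Argo's actual systems), a few staircase sensor readings could feed a simple composite risk score. The sensor names, weights, and thresholds below are all invented for the example:

```python
# Hypothetical sketch: scoring slip-and-fall risk from streamed IoT readings.
# Sensor fields, weights, and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class StairReading:
    foot_traffic_per_hour: float  # from a motion sensor
    surface_moisture: float       # 0.0 (dry) to 1.0 (soaked)
    ambient_light_lux: float      # from a light sensor

def slip_and_fall_score(r: StairReading) -> float:
    """Blend sensor signals into a 0-1 risk score; all weights are assumptions."""
    traffic = min(r.foot_traffic_per_hour / 200.0, 1.0)     # saturate at 200/hour
    darkness = max(0.0, 1.0 - r.ambient_light_lux / 300.0)  # dimmer stairs score higher
    return min(1.0, 0.4 * r.surface_moisture + 0.35 * traffic + 0.25 * darkness)

print(slip_and_fall_score(StairReading(120, 0.6, 90)))  # roughly 0.62: elevated risk
```

The point of a setup like this is that the score moves with live conditions, rather than sitting fixed in an annual filing the way headcount and revenue figures do.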

It’s what panelist Andrew Bocskocsky, co-founder and CEO of Grata Data, referred to as “qualitative and descriptive” data rather than just quantitative. “A lot of the data that has been leveraged in the past has been quantitative information,” he said, “looking at numbers and financial statistics. But when you can leverage a whole host of new types of data that are in written form, that unleashes a whole new domain that we can explore.”
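
A minimal sketch of what mining that written-form data could look like, with invented keywords and an invented inspection note rather than any panelist's actual tooling:

```python
# Hypothetical sketch: pulling risk signals out of qualitative, written-form
# data, here a free-text site-inspection note. Terms and note are invented.
RISK_TERMS = {"leak", "worn", "cracked", "cluttered", "dim"}

def text_risk_flags(note: str) -> set:
    """Return the risk-related words found in a free-text inspection note."""
    words = {w.strip(".,;:").lower() for w in note.split()}
    return RISK_TERMS & words

note = "Stairwell carpet is worn near the landing; lighting is dim after 6pm."
print(text_risk_flags(note))  # {'worn', 'dim'}, in some order
```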

Yet in gathering ever more data, quality is key.

“More data definitely is good,” said panel member Sambit Sahu, an adjunct professor of computer science at Columbia University who was involved with the IBM Watson program. But, he cautioned, “there is always garbage in/garbage out.” In other words, you have to start with clean, unambiguous data. “If the data is ambiguous and not clean, then of course, it is going to lead to more confusion.”
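
A toy example of that garbage-in/garbage-out filter, with assumed field names, might reject ambiguous records before they ever reach a model:

```python
# Hypothetical sketch of Sahu's "garbage in/garbage out" point: reject dirty
# or ambiguous records up front. Field names and records are assumptions.
def is_clean(record: dict) -> bool:
    """Basic sanity checks only; a real pipeline would validate much more."""
    try:
        employees = int(record["employees"])
        revenue = float(record["annual_revenue"])
    except (KeyError, TypeError, ValueError):
        return False  # missing or malformed values are the "garbage"
    return employees >= 0 and revenue >= 0

records = [
    {"employees": 42, "annual_revenue": 3.1e6},
    {"employees": "unknown", "annual_revenue": 3.1e6},  # ambiguous, so reject it
]
print([is_clean(r) for r in records])  # [True, False]
```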

Forging better relationships

The audience, which ranged from insurance leaders to startup entrepreneurs, listened closely when Breen and Sahu both cited the same statistic: 90 percent of the world’s data has been created in the past five years. Yet with new tools, they continued, it’s becoming possible to analyze all of this data in a meaningful way.

That’s especially true in assessing risk for businesses. “It changes the conversation,” Breen said, and it means an insurer can do more than just provide coverage; an insurer can say, “I can actually help advise you and make you a better business and help your bottom line.”

People will still play a pivotal role

Although human biases have the potential to undermine data quality, don’t expect machine learning to replace human expertise anytime soon.

“I think the combination of the two is what’s powerful,” said Breen, recalling a time when a portion of one of his books of business was misclassified. Yes, people make mistakes, but “the machine catches the errors. We were able to catch the human errors with the algorithms.” And even as machines move beyond purely quantitative analysis, they still benefit from a human touch: “You still have to go in there and do the qualitative analysis,” Breen said. “That’s where the human part is quite good.”
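
A rough sketch of that kind of algorithmic error-catching (the policy data and threshold are invented for illustration): flag any policy whose premium is a statistical outlier among the other policies in its assigned class, then route it to a person for review.

```python
# Hypothetical sketch of algorithms catching human misclassification, in the
# spirit of Breen's anecdote. Classes, premiums, and the z-score cutoff are
# invented; premiums are assumed distinct so a record can exclude itself.
from statistics import mean, stdev

policies = [
    ("restaurant", 5200), ("restaurant", 4800), ("restaurant", 5500),
    ("restaurant", 5100), ("office", 1200), ("office", 1350),
    ("office", 1100), ("office", 5300),  # likely keyed into the wrong class
]

for cls, premium in policies:
    peers = [p for c, p in policies if c == cls and p != premium]
    if len(peers) < 3:
        continue  # too few peers to judge against
    z = (premium - mean(peers)) / stdev(peers)
    if abs(z) > 3:  # crude outlier rule; a human reviews anything flagged
        print(f"review: {cls} policy at ${premium} (z = {z:.1f})")
```

Run as written, only the $5,300 “office” policy is flagged; the qualitative judgment about what to do with it stays with a person, as Breen suggests.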
