AI Expanding Underwriting Capabilities - Argo Group

Underwriting involves processing massive amounts of data, but artificial intelligence can make that process more accurate and efficient. Argo recently partnered with DeepCurrent, an AI startup that uses cloud-based neural networks to automate data entry.

Andy Breen, SVP of Digital at Argo Group, sat down with DeepCurrent co-founder and CEO Joel Luxenberg to discuss the companies’ collaboration. Listen to their conversation to hear more about the impact of artificial intelligence on the insurance industry.

Narrator: Welcome to The Future of Insurance, the podcast that looks at technology, innovation and the evolution of the insurance industry. The Future of Insurance is presented by Argo Group, a specialty insurance company that helps businesses stay in business.

Gordon Bass: Today we’re talking about artificial intelligence and what it means for the insurance industry. Joining me today from New York is Andy Breen, senior vice president of digital at Argo Group, and from Los Angeles, Joel Luxenberg, who’s co-founder and CEO of DeepCurrent, an artificial intelligence startup that uses cloud-based neural networks to automate data entry. Argo and DeepCurrent have recently partnered to work on what Andy calls the original data business, because underwriting requires gathering a huge amount of data to calculate the probability of a loss.

Andy, what’s the challenge of all this data for insurers?

Andy Breen: Historically that was, “Here’s an application. Please fill this in or give me these pieces of information, and I’ll go away and run some models or do some kind of assessment.” The insurance industry has never really agreed on a standard format. There are a few things out there, but really, things come in as a pile of documents – and at least we’ve moved from paper documents and faxes to electronic documents. But we’re still getting a lot of things that are basically PDFs, or the equivalent, to try to assess risk on.

As you can imagine, that’s a fairly laborious process for humans – taking all of that and putting it into an appropriate model or system to assess. Errors arise from that, because the people doing it might type something in wrong, or simply make mistakes doing this all day.

Gordon Bass: This is where DeepCurrent comes in. Joel, define artificial intelligence and talk about how you’re using it to address this massive influx of data.

Joel Luxenberg: As far as we’re concerned, in a computer science realm, artificial intelligence is any system we can build that can see the data or perceive the environment coming in, and then make a decision based on some training, some learning. We look at it as a three-step process here at DeepCurrent. First, we augment the process of data entry, so the actual clerks and professionals work with our platform to improve their efficiency and accuracy, like Andy was talking about. The next part is the actual machine learning: the more that’s processed in our system, the more accurately and intelligently our neural networks learn that particular document and its specific anomalies, and the better they can predict and correct information that comes in.

The last piece is the eventual feature decay and autonomous processing, which is where we’re all going with this: when the neural networks have hit some sort of critical mass, we can go from supervised learning into true deep learning. That will eventually allow us to decay the features, remove those screens and have autonomous processing in the backend using intelligent systems.

Gordon Bass: Tell us, what’s unique about the way that DeepCurrent approaches artificial intelligence?

Joel Luxenberg: We take an engineering approach, as opposed to a purely academic approach, to solving these problems. Our goal, as I said, is to be that bridge to automation. We provide a platform with tools to augment the current process, with a supervised learning process where the domain experts – the people Argo Group currently employs to do these things – actually use our tools in place of theirs and process information in shorter timeframes, with more robust datasets.

We use neural networks, which are loosely modeled on how the biological brain works: in layman’s terms, the data points are turned into neurons that can light up and represent the information that needs to be identified within the dataset – the documents we get. Then we apply a machine learning process. We actually do both supervised and unsupervised learning, so that everything they do in our system can be applied to their business.
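To make the supervised-learning idea Joel describes concrete, here is a minimal toy sketch: a small neural network (one hidden layer, NumPy only) learns to label data points from examples. The features, labels and network size are invented for illustration; a real document-extraction model would be far larger and trained on labeled document fields, not synthetic numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 2-d feature vectors, label 1 when the features sum past 1.
X = rng.uniform(0, 1, size=(200, 2))
y = (X.sum(axis=1) > 1.0).astype(float).reshape(-1, 1)

# One hidden layer of 8 "neurons" with sigmoid activations.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(2000):                        # plain gradient descent
    h = sigmoid(X @ W1 + b1)                 # hidden neurons "light up"
    p = sigmoid(h @ W2 + b2)                 # predicted probability per example
    grad_p = (p - y) / len(X)                # cross-entropy output gradient
    grad_h = grad_p @ W2.T * h * (1 - h)     # backpropagate to the hidden layer
    W2 -= lr * (h.T @ grad_p); b2 -= lr * grad_p.sum(axis=0)
    W1 -= lr * (X.T @ grad_h); b1 -= lr * grad_h.sum(axis=0)

accuracy = float(((p > 0.5) == y).mean())    # fraction of labels now predicted correctly
```

The point of the sketch is the supervised loop itself: labeled examples in, weight adjustments out, accuracy improving with each pass, which is the same shape of process whether the "labels" are toy numbers or corrected document fields.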

Gordon Bass: Andy, there’s a lot of startups in the AI space. Why did Argo choose to partner specifically with DeepCurrent?

Andy Breen: Artificial intelligence, big data and the like are obviously the buzzwords of the day, and a myriad of startups – and certainly large companies as well – are heavily investing in this area. It is often very difficult to sort through who has a real solution versus vaporware and some pretty slides, and who’s actually working on this.

One of the things that attracted us to DeepCurrent was, as Joel mentioned, that they’re taking a very practical engineering approach. It’s not something that is just academic, interesting work that produces white papers, right? It’s actually in the unsexy realm of what we do – extracting meaningful information out of unstructured documents – but it is so fundamental to our business, as Joel alluded to. Having all that metadata available matters not only for the immediate use of automating, speeding up and reducing errors in how we tackle a submission or a claim today. So much of our meaningful data, whether it’s internal data or external data we can pull in, sits in what I call unstructured textual data sources: things in narrative form, or in some form that is just not an easily queryable database.

Being able to extract that and use it for other types of downstream analyses or predictions is really, really important for us. Joel and the team, I think, are also practical in the sense that they didn’t come to us and say, “Hey, we’ve got the solution fully baked. Let’s just get some of your data trained up and we’re good to go.” They said, “Let’s work on this together,” and that was really attractive to us because we have our own data scientists who are looking at related and similar problems. We wanted to exchange that information and work together on this, because this is very much the early dawn of artificial intelligence.

We have a lot more that we don’t know than we do know, so we’re going to figure this out together – putting some of the best minds on it, sharing information and collaborating. You’re going to walk down one path that you think is really promising, hit a dead end, and then need to try five other things. We understand that and we’re willing and able to work that way, and Joel and the team at DeepCurrent were too, so it was attractive not just from a technological standpoint but also from a collaboration standpoint.

Then from a technical standpoint, from what I know about artificial intelligence approaches, they have a very smart way of working, which is to say, “Hey, we’re trying a whole bunch of different neural net and other techniques, and for a given document one technique might work better than another, so we’re going to score them, say this one works best here, and apply it there.” We get documents, as I said, in a huge range of forms, formats and variants – there might even be handwritten material in there. To claim that one single technique is going to solve it all, to me, just means you probably don’t understand the problem that deeply, or haven’t gone beyond scratching the surface. I knew that Joel and team had gone a lot deeper because of their approach.
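The per-document scoring Andy describes can be sketched very simply: run several candidate extractors over the same document, have each report a confidence, and keep the winner. The technique names, confidence heuristics and document fields below are all hypothetical stand-ins, not DeepCurrent's actual system.

```python
def extract_with_rules(doc):
    # Stand-in for a rule-based extractor: confident on clean digital text.
    return {"value": doc.get("policy_id"), "confidence": 0.9 if doc.get("clean") else 0.2}

def extract_with_ocr_net(doc):
    # Stand-in for a neural OCR model: better on scanned or handwritten input.
    return {"value": doc.get("policy_id"), "confidence": 0.5 if doc.get("clean") else 0.8}

TECHNIQUES = {"rules": extract_with_rules, "ocr_net": extract_with_ocr_net}

def best_extraction(doc):
    """Score every technique on this document and return the highest-confidence result."""
    results = {name: fn(doc) for name, fn in TECHNIQUES.items()}
    winner = max(results, key=lambda name: results[name]["confidence"])
    return winner, results[winner]

print(best_extraction({"policy_id": "AR-1234", "clean": True}))   # the rule-based extractor wins
print(best_extraction({"policy_id": "AR-5678", "clean": False}))  # the OCR model wins
```

The design point is that no single extractor is declared the universal answer: each document picks the technique that scores best on it, which is exactly the property Andy highlights.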

Gordon Bass: Joel, in a company like Argo, what’s the relationship between the real people who are currently doing the work and artificial intelligence?

Joel Luxenberg: We view the humans as the domain experts. While artificial intelligence – and even non-AI approaches like RPA and basic machine learning techniques – have been able to get organizations like Argo Group 70 to 80 percent of the way to autonomous processing, those human beings are still required to bridge the gap. That’s where we see a unique opportunity: to practically apply these techniques and learn from that human behavior, so we can get to a true cognitive system that is able to make decisions without disrupting any organizational workflow.

Gordon Bass: Andy, what’s the state of artificial intelligence today and what do you see ahead?

Andy Breen: We’re a long way away from what is called artificial general intelligence, where machines have all the capabilities of humans, including sentiment and emotion, higher-order analysis and thinking, those types of things. We’re still working on the micro-problems, right, as we talked about: the fairly valuable but unsexy work of extracting data out of documents. Right now it’s really inefficient, one way or another, for humans to be doing this. Instead of having a bunch of people re-keying information from a document into a system – which, as we talked about, is costly and also leads to a lot of errors – these repetitive tasks can really be done a lot better by machines. But the machines can’t just be rule-based, because these documents can be, as I mentioned, highly unstructured; they have to be probability-based. That’s where the challenge comes in: with these black-box systems, we don’t know what’s going to come out of them. They’re not predictable in all cases, so you have to spend a lot of time training them and getting them up to speed.

The second part is how you actually inform a human when the system says, “Well, I got this piece of information, but I’m not 100 percent confident it’s correct. I need you to supplement it or tell me what it is so I can learn from that.” We’re still figuring out what the experience and the interface are, because I think it’s kind of unprecedented in computing and user experience design: how do you indicate to a user that the system has a confidence on a scale of zero to 100, and what that means for their action? You take humans from just typing at keyboards and entering information to being decision-makers – which is where we should be positioned relative to these systems today – while conveying the information properly so they can make the right decisions.
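The handoff Andy describes has a simple core shape: accept fields the model is confident about, and route low-confidence fields to a human whose correction becomes a new training example. The threshold, field names and review function below are assumptions for illustration, not Argo's or DeepCurrent's actual interface.

```python
REVIEW_THRESHOLD = 0.85  # assumed cutoff; real systems would tune this per field

def ask_human(field_name, suggestion):
    # Placeholder for a real review UI; here the "human" simply accepts the suggestion.
    return suggestion

def route_field(field_name, predicted_value, confidence, training_log):
    """Accept high-confidence predictions; send the rest to a reviewer and log the outcome."""
    if confidence >= REVIEW_THRESHOLD:
        return predicted_value                 # accepted automatically, no human needed
    corrected = ask_human(field_name, predicted_value)
    # The (prediction, correction) pair is kept so the model can learn from the reviewer.
    training_log.append((field_name, predicted_value, corrected))
    return corrected

log = []
auto = route_field("policy_id", "AR-1234", 0.97, log)      # above threshold: accepted
reviewed = route_field("insured_name", "Acme Marine LLC", 0.62, log)  # below: reviewed
```

The interesting design question Andy raises lives in `ask_human`: how the interface conveys "62 percent confident" so the reviewer acts as a decision-maker rather than a re-keyer.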

Gordon Bass: Andy, Joel, thanks so much for talking today, and we look forward to checking back with both of you in a year or so to see how things have evolved.

Narrator: You’ve been listening to The Future of Insurance from Argo Group. To learn how your business can leverage technology to transfer risk, go to
