
Today, Insurtechs Root Insurance and Mediaalpha went public. Root priced its IPO above the original estimate of $27, which will net it about $725 million, plus another $500 million from two private investors, for a total of $1.25B. WOW! They now have a market capitalization of $7B. However, the market was not kind to them: as I am writing this, they are trading below their IPO price.

Mediaalpha, which some of you may not have heard of, is a technology platform partially owned by White Mountains Insurance that enables agents and carriers to bid on in-market insurance shoppers, using predictive analytics and relationships with websites aimed at insurance buyers. The easiest way I can describe them is as an alternative to buying Google Ads to receive insurance applications. Unlike Root, they are profitable and are likely to raise $176M at a market capitalization of $1.2B. The market responded much more positively to them: they are up nearly 50%.

So what is the common theme that I see between these two very different companies? Analytics. Both companies extol their analytics skills as the reason why they will outwit the competition. Today, the term analytics is synonymous with:

  • Artificial Intelligence (AI),
  • Machine Learning (ML),
  • and Natural Language Processing (NLP).

So what are they, and where are they applicable in the insurance value chain? Which insurtechs use these techniques? How can you ensure that they are used appropriately and do not raise regulatory concerns?

Regulators are concerned about AI

Why the urgency? Because the benefits of AI come with new and complex risks. It turns out that many of these analytical techniques are biased. For example, AI-driven facial recognition technologies misidentify nonwhite and female faces at higher rates, and AI-driven hiring software has reinforced gender and racial prejudice. Some vendors, such as IBM, even stopped selling facial recognition technology last June.

Governments and regulators have begun to take note. The National Association of Insurance Commissioners (NAIC) has formed a special committee focused on race and insurance, and has adopted guiding principles stating that AI should be fair, ethical, accountable, and safe.

Insurers and rating and advisory organizations “should be responsible for the creation, implementation, and impact of any AI system, even if the impact is unintended,” according to the NAIC’s Innovation and Technology Task Force.

Are you ready for AI?

A recent article in the Harvard Business Review suggests that companies should set up an ethics committee to oversee their AI initiatives.

“Establishing this level of ethical governance is critical to helping executives mitigate downside risks, because addressing AI bias can be extremely complex. Data scientists and software engineers have biases just like everyone else, and when they allow these biases to creep into algorithms or the data sets used to train them — however unintentionally — it can leave those subjected to the AI feeling like they have been treated unfairly. But eliminating bias to make fair decisions is not a straightforward equation.”

A separate article in the MIT Sloan Management Review suggests that boards need a plan to oversee AI. Why? Because there are many current statutes that already apply to AI.

Recently released Federal Trade Commission guidelines acknowledge that some automated decisions are already governed by existing laws, such as the Fair Credit Reporting Act of 1970 and the Equal Credit Opportunity Act of 1974. Moreover, in 2017, the FTC and the Department of Justice jointly submitted a statement to the Organization for Economic Cooperation and Development analyzing the application of antitrust laws to algorithms and concluding that existing anti-collusion rules are sufficient to prosecute abuses.

Have you, as an executive, or your board started discussing how you might already be using AI, knowingly or unknowingly?

Feel free to schedule a call with me to discuss where some potential blind spots might exist.

CB Insights AI 100

CB Insights tracks their AI 100. You can see the companies in the picture below. Quite a few of these companies are insurtechs or offer services to carriers.

[Image: CB Insights AI Top 100]

AI applications in insurance

Faster claims processing, improved fraud detection, machine-learning claims assessment, more precise underwriting and pricing, and targeted customer acquisition and service are just some of the advances that AI brings.

Here is a selection of three key areas where carriers and brokers are using artificial intelligence to reduce costs and increase value.

  1. Behavioral Policy Pricing: IoT sensors provide personalized data to pricing platforms, allowing safer drivers to pay less for car insurance and homeowners with smart-home sensors to pay less for property insurance.
  2. Faster Claims Settlement: Online tools and mobile phones enable virtual claims adjusting, including the use of aerial photographs, making it more efficient to settle and pay claims after an accident while reducing the likelihood of fraud.
  3. Customer Experience & Acquisition Targeting: AI enables a seamless, automated shopping experience, for example through chatbots that can draw on a customer’s geographic and social data for personalized interactions.
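To make the first item concrete, here is a minimal sketch of what behavioral policy pricing can look like: a base premium adjusted by discounts earned from telematics (IoT) driving data. The function name, factors, and thresholds are all illustrative assumptions, not any carrier's actual rating algorithm.

```python
# Hypothetical behavioral-pricing sketch: adjust a base premium using
# simple driving-behavior signals from a telematics device.
# All factors and thresholds below are made up for illustration.

def behavioral_premium(base_premium: float,
                       hard_brakes_per_100mi: float,
                       night_driving_pct: float) -> float:
    """Return a premium adjusted by illustrative driving-behavior discounts."""
    factor = 1.0
    if hard_brakes_per_100mi < 1.0:   # smooth braking earns a discount
        factor -= 0.10
    if night_driving_pct < 0.05:      # little late-night driving
        factor -= 0.05
    # Cap the total discount so behavior never dominates the rate
    return round(base_premium * max(factor, 0.80), 2)

# A smooth, daytime driver on a $1,200 policy:
print(behavioral_premium(1200.00, 0.5, 0.02))  # → 1020.0
```

In practice the signals would come from a scored telematics feed rather than two raw inputs, but the structure is the same: behavior maps to rating factors, with caps set by the filed rating plan.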

A sample of a few Insurtechs using AI

I have highlighted a few examples of insurtechs that many carriers are partnering with. This list is not exhaustive. I am not claiming that any of these Insurtechs are creating bias, but I suggest that you, as a user of their services, have a responsibility to understand whether they bring bias into your operations.

  • Snapsheet uses AI to analyze photos to estimate damage and repair costs. Could those estimates be skewed because the algorithm cannot account for geographic or racial differences in vehicle condition, in who drives the car, or in which repair facilities are used?
  • Betterview uses Computer Vision to evaluate roofs, determine property characteristics and estimate the building size. Could Computer Vision insert bias based on the type of neighborhood?
  • CarpeData uses AI to scour the web and classify information for claims or business classifications. Will classes of minority businesses be under-represented because they are less present on social media?
  • Jacada uses AI to deliver better customer experiences. Will the algorithm favor men over women and offer better service, recommendations, or results?
  • Mediaalpha uses predictive modeling and AI algorithms to assess customer interactions and predict purchasing intentions based on age, income and gender. Could the algorithms introduce prejudice?
  • Bold Penguin bought the AI startup RiskGenius primarily for its analytical prowess. Will AI introduce bias into the analysis of documents?
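Questions like these can be tested, not just asked. One common starting point is the "four-fifths rule" (disparate impact ratio): compare favorable-outcome rates across groups and flag ratios below 0.8. The sketch below uses made-up decisions and group labels purely for illustration; it is not how any of the vendors above audit their models.

```python
# Minimal disparate-impact check: ratio of favorable-outcome rates
# between groups (min rate / max rate). Data is fabricated for illustration.

def disparate_impact(decisions, groups, favorable=1):
    """Return min/max ratio of favorable-outcome rates across groups."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(1 for d in outcomes if d == favorable) / len(outcomes)
    return min(rates.values()) / max(rates.values())

# Toy approval decisions for two groups of five applicants each:
decisions = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(decisions, groups)
print(round(ratio, 2))   # group B approved at 40% vs. group A at 60% → 0.67
print(ratio >= 0.8)      # fails the four-fifths threshold → False
```

A failing ratio does not prove discrimination; it tells you where to look. Asking a vendor whether they run checks like this, and on which protected attributes, is a reasonable due-diligence question.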

What you should do

At a minimum, I recommend the following:

  1. If you are not currently using or evaluating AI technologies, you should do so. There are many benefits that they can bring to your organization.
  2. Choose use cases based on your specific needs and desires. Just because a chatbot can handle conversations doesn’t mean your organization needs a chatbot.
  3. As you evaluate these new technologies, include the potential for model or algorithm bias in your evaluation criteria.

Do not hesitate to reserve time with me to discuss this or any other topic.

About Insurtech Advisors
Insurtech Advisors assists regional insurance carriers and agencies in finding and partnering with Insurtechs, enabling you to thrive and continue to meet the needs of your members and independent agents. We work closely with your team to identify opportunities and aspirations, and then personally curate and introduce you to the best Insurtechs to pilot.
