The Lemonade Lesson

Posted by Q McCallum on 2021-05-31

Lemonade’s recent media spotlight is a cautionary tale for any company using ML/AI.

A young insurance startup named Lemonade recently attracted some media attention because of its AI. To be clear, this wasn’t the positive sort of attention.

(I caught the early part of this on Twitter, as it unfolded. If you missed that, this Vox piece provides the full story thus far.)

Amid the jeers on social media, it’s easy to think that this is just a story about one company. It’s so much more. This is really a cautionary tale for any company that uses ML/AI. How many of these companies are just one slip-up away from taking Lemonade’s place in the hot seat? And do they even realize it?

It started well …

Lemonade, like so many companies these days, employs AI in an attempt to automate decision-making. This makes perfect sense for a business. When you want the scale of computer-based automation, but the decisions are too fuzzy for fixed business rules, a predictive model is usually the way to go.
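As a rough illustration of that trade-off, here’s a sketch (in Python) that contrasts a fixed business rule with a model-style score for a fuzzy decision, such as flagging a claim for extra review. Everything in it (the features, weights, and thresholds) is invented for the example; a real model would be trained on historical data.

    import math

    def rule_based_flag(claim_amount: float, prior_claims: int) -> bool:
        # Fixed business rules: easy to automate, but brittle when the
        # decision is fuzzy and the edge cases pile up.
        return claim_amount > 10_000 or prior_claims > 3

    def model_based_flag(claim_amount: float, prior_claims: int) -> bool:
        # Stand-in for a trained model: a logistic score over the same inputs.
        # The weights are made up for illustration; a real model would learn
        # them from historical claims data.
        z = 0.0002 * claim_amount + 0.5 * prior_claims - 3.0
        score = 1 / (1 + math.exp(-z))
        return score > 0.8

    print(rule_based_flag(9_500, 4))   # True: a hard rule trips
    print(model_based_flag(9_500, 1))  # False: the model scores it below the threshold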

That said, in the spirit of No Free Lunches, a company that adopts an AI-based solution also takes on exposure to certain related risks. New tools and risk are a package deal; you can’t get the former without the latter.

When AI companies think about risk, they tend to focus on technical matters such as model risk. This is the idea that the model will be wrong some amount of the time. If the model is wrong often enough, or if it’s wrong in certain ways, a company can lose money and damage its reputation. Well-run companies therefore develop safeguards (say, regular checks of model outputs and safety switches) to reduce their exposure to model risk.
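To make those safeguards concrete, here’s a minimal sketch (in Python) of a routine output check paired with a safety switch that falls back to human review. The metrics, thresholds, and function names are hypothetical; this isn’t Lemonade’s system or anyone else’s.

    from dataclasses import dataclass

    @dataclass
    class ModelHealth:
        approval_rate: float    # share of recent claims the model auto-approves
        mean_confidence: float  # average confidence over recent predictions

    # Guardrails chosen purely for illustration; real values would come from
    # historical baselines and the business's risk tolerance.
    APPROVAL_RATE_RANGE = (0.40, 0.90)
    MIN_MEAN_CONFIDENCE = 0.70

    def model_is_healthy(health: ModelHealth) -> bool:
        # Regular check: have recent model outputs drifted outside expected ranges?
        low, high = APPROVAL_RATE_RANGE
        return (low <= health.approval_rate <= high
                and health.mean_confidence >= MIN_MEAN_CONFIDENCE)

    def route_claim(claim_id: int, health: ModelHealth) -> str:
        # Safety switch: if the model looks unhealthy, route claims to a person.
        return "model_decision" if model_is_healthy(health) else "human_review"

    # A drop in confidence trips the switch and sends claims to human review.
    print(route_claim(123, ModelHealth(approval_rate=0.55, mean_confidence=0.45)))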

So far, so good, right?

How to get into hot water with AI

It may be tempting for those companies to stop there. But doing so would mean missing the lesson every AI company should take from the Lemonade story:

Lemonade’s negative publicity doesn’t stem from a technical problem with a model. Instead, Lemonade is taking heat because of the public’s understanding of how the company does (and does not) employ AI.

Based on what I’ve seen, I don’t exactly blame people for raising an eyebrow. Quoting the Vox article mentioned above:

Over a series of seven tweets, Lemonade claimed that it gathers more than 1,600 “data points” about its users — “100X more data than traditional insurance carriers,” the company claimed.

[…]

Lemonade then provided an example of how its AI “carefully analyzes” videos that it asks customers making claims to send in “for signs of fraud,” including “non-verbal cues.”

These days people are increasingly aware of how much data companies collect on them, and how often that data gets (mis)used. I’m not sure a company should brag about how much data it collects, especially when that data is used in sensitive situations.

Staying out of trouble

What’s the lesson to be learned here? What can other AI-based companies do to avoid this kind of unwanted PR storm?

I expect some people will head to the legal department, to tighten up the privacy policy and terms of service (TOS). Those legal agreements may stop people from suing you in a court of law, yes. But they won’t provide much help in the Court of Public Opinion.

For that, you need to think beyond your model to understand the full spectrum of potential AI problems. You can start with a risk assessment to spot the non-technical issues with how you’re using AI. For example:

  • Has your company been open and honest about how you’re using AI?
  • Would any of that be considered “creepy” or otherwise inappropriate?
  • What if people figured out which features your models use, and where you got your training data? Would you still sleep easy?

Having identified those risks, you then have to figure out what to do about them. You may decide that the potential downside is too great, so you change course. Maybe you decide that the potential upside is worth it, so you stay the course. Your PR and legal teams may counsel you to tone down any mention of AI in the public press. (Careful with this option: if those same statements make it into investor materials, you can expect that they’ll eventually become public.) Or, you might go the other way and mitigate some risk by being more transparent with your customers about the specifics of the data you collect, and how you use AI in your company.

Which path should you take? I can’t say. There’s no one-size-fits-all right answer on how to handle the risks you identify. The core lesson, though, is that if you want to avoid a PR firestorm, you don’t want your risks to surprise you. You’re much better off proactively identifying those risks and deciding how to handle them.

(For more details, check out my post on model risk from my series on AI lessons learned from algorithmic trading, as well as my series on a risk-based approach to data ethics. If you’re looking for something beyond these blog posts, I can also help your company evaluate its AI-related risk as part of an ML/AI assessment.)