Reducing Risk in Building ML/AI Models
2021-11-01 | tags: AI risk

Best practices to balance the risk and reward of building predictive models.

There are no guarantees of success in training an ML/AI model. Even if you do all of the "right" things, it's possible to invest time, effort, and money in the R&D phase and wind up with a dud.

Still, despite all the risks, there are plenty of rewards waiting for you if you're able to build a model that performs well. You don't want to leave that win on the table, so what do you do?

While the costs associated with building a model are effectively unbounded, you can proactively narrow the scope of that spend. In this post I'll offer some thoughts on how to reduce your exposure to the R&D-related risks of predictive models.

(All of this assumes that you have already managed the strategic, ethical, and reputation risks associated with using a model. I'll explore those in a future post.)

Perform a cost/benefit analysis

Before tasking your data scientists with building a model, take the time to work through a key question:

What, specifically, do we want this model to do?

Put the technology aside for a moment, and focus on your desired outcome. "I want to predict next year's sales." "I want to classify thousands of documents per day." And so on. Explore how such a model could help your business. If it were to work, would the model bring in more money or reduce losses somewhere? Would it make your business more efficient?

How much time and money would you be willing to invest for a chance at having that model? Make note of that figure and use it to proactively draw boundaries around the project's budget.

Compare this to the alternatives

Now that you have your desired end-state in mind, ask how else you could get there.

Perhaps you can buy the information, prepackaged, from someone else? Better still, has someone else already built this model, and do they provide SaaS-like access to it via API? (In this case, you'll want to weigh the pros and cons of sending your data to a third party. But using a third-party service is still an option to consider.)

Can you approximate this through other, simpler means? For text data, maybe you can get far using word counts. For numeric data, how well does a linear regression perform? You can sometimes shoehorn a classification problem into a set of fixed business rules, expressed in software. These approaches have the added benefits of being easier to implement and easier to inspect, due to their transparency.
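
To make that concrete, here is a minimal sketch of a word-count baseline for a document classification task, written with scikit-learn. The documents, labels, and category names are hypothetical placeholders, not a real dataset.

```python
# A minimal word-count baseline: bag-of-words counts feeding a plain
# linear classifier. Cheap to build and easy to inspect.
# (All documents and labels below are made-up placeholders.)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_docs = ["invoice overdue please pay", "team meeting notes attached",
              "invoice paid in full", "weekly status report"]
train_labels = ["finance", "ops", "finance", "ops"]
test_docs = ["final invoice reminder", "meeting agenda for monday"]
test_labels = ["finance", "ops"]

baseline = make_pipeline(CountVectorizer(), LogisticRegression())
baseline.fit(train_docs, train_labels)
print("Held-out accuracy:", baseline.score(test_docs, test_labels))
```

If a simple pipeline like this gets you most of the way to your goal, the case for funding a heavier custom model gets much harder to make.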

Once you've sorted out your alternatives, you can compare the costs to that of building a custom ML/AI model.

Define "good enough"

If you've decided to build your own model, keep in mind that every model is wrong some of the time. Even if it performs well during its R&D testing, it may still encounter trouble when operating in the wild.

The next question you'll want to explore, then, is:

How good is good enough?

If a classifier is correct 75% of the time, will that suffice? Before you answer, remember that not all wrong answers are equal. If the 25% of predictions the model gets wrong are the most important predictions, is the model really 75% correct? Or is it closer to 0% correct?

Similarly, in a regression problem, would it hurt you if the model's prediction were off by 5%? 10%? Depending on how much money is on the line, maybe even 2% is too far.
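
As a toy illustration of both points, the sketch below weights classification errors by a hypothetical business cost instead of counting them equally, and checks how often a regression's predictions land within a chosen tolerance. The costs, the 5% tolerance, and all the numbers are invented for the example.

```python
import numpy as np

# Classification: each prediction carries a (hypothetical) cost when wrong.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 0, 0, 1])
cost_of_error = np.array([1, 1, 50, 1, 1, 50, 1, 1])  # two high-stakes cases

plain_accuracy = (y_true == y_pred).mean()
weighted_hit_rate = (cost_of_error * (y_true == y_pred)).sum() / cost_of_error.sum()
print(f"Plain accuracy:     {plain_accuracy:.0%}")      # 75%
print(f"Cost-weighted rate: {weighted_hit_rate:.0%}")   # roughly 6% -- the misses were the ones that mattered

# Regression: how often do predictions stay within a chosen tolerance?
actual = np.array([100.0, 200.0, 150.0])
predicted = np.array([104.0, 185.0, 152.0])
tolerance = 0.05  # 5% relative error
within = np.abs(predicted - actual) / actual <= tolerance
print(f"Within {tolerance:.0%} of actual: {within.mean():.0%}")
```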

These are the kinds of details you want to sort out and document ahead of time. You can use them to evaluate the model during R&D and decide whether to invest more money or to stop early.

Prioritize your techniques

Building a model may require that your data scientists try a variety of algorithms, techniques, and tuning parameters. You certainly don't have infinite time and budget, so they'll have to prioritize the different approaches, optimizing for both "low required effort" and "high expected model performance."

I usually suggest starting with the less-fancy techniques. They are fast to try and low-cost. Further, a simple technique can serve as a baseline against which to judge the performance and cost of other approaches: "Sure, the neural network performs appreciably better than the linear regression, but it also comes with additional costs for training, maintenance, and deployment."
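
Here's a rough sketch of that kind of baseline comparison, assuming scikit-learn, synthetic data, and a small neural network standing in for the fancier technique. In practice you'd weigh the scores against the extra training, maintenance, and deployment costs.

```python
# Compare a simple baseline against a fancier model on the same folds.
# (Synthetic data and a small MLP are stand-ins for illustration.)
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

candidates = {
    "linear regression (baseline)": LinearRegression(),
    "neural network": make_pipeline(StandardScaler(),
                                    MLPRegressor(hidden_layer_sizes=(64,),
                                                 max_iter=2000, random_state=0)),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```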

As a bonus, starting with a simple approach gives you the opportunity to learn more about your data. That will come in handy if and when you need to break out the heavy-duty gear later on.

Prioritizing techniques is a mix of art and science. So long as your data scientists know the data very well, and they have experience with the techniques, they'll be able to chart a course.

Try the sprint approach

Another way to control spending on model R&D is to break up the effort into a series of sprints. Set a duration -- say, "two weeks" -- to try one of the techniques from your prioritized list. At the end of each two-week sprint you can compare the results to previous efforts.

Besides adding structure to the R&D process, the sprint format prevents you from going on autopilot by creating logical stopping points. These are opportunities to take a wider view of the modeling effort as a whole, revisit or modify direction, and evaluate progress. And from a budget perspective, the stopping points are the time to decide whether to continue spending.

Adding structure to reduce risk

There is no guarantee of success when developing an ML/AI model. Nor is there a clear stopping point to the R&D effort. It's very much a matter of "you'll know when you know." It's therefore up to you to manage the process before it burns a hole in your budget.

The tips I've offered here will help you structure and control your spend, thereby reducing the risk of building a model while still leaving you open to the potential upside.
