Setting Expectations for ML/AI Projects

Posted by Q McCallum on 2020-09-17

Pete Skomoroch sparked an interesting discussion on Twitter when he asked:

People running AI/ML teams: what advice do you have on successfully managing up to CXOs and sideways to product/eng/legal/marketing?

In my response I explained:

It’s about setting expectations. I always explain to a consulting prospect:

1/ what’s really possible with ML/AI these days

2/ some ways things may deviate from their desired end-goal

They get a deeper sense of the TCO involved, which leads to realistic decisions.

I jokingly call this my “anti-sales” part of the conversation, because it reduces the chances of a deal closing. This is the opposite of Western business culture, wherein we’re supposed to paper over problems with a smile and pretend that a positive attitude can erase risks. (My friend Dawn Xiana Moon describes this as “toxic positivity” and it can weigh heavily on the balance sheet. I’ll explore that some other time.)

I emphasize “sales” because I work as a consultant in the ML/AI space. My work centers on helping companies start, assess, and re-start their ML/AI efforts. I’m well aware that ML/AI can be an expensive outing for a company – even if things go according to plan, much more so if they don’t – so I leverage my industry experience to take some of that downside risk off the table.

That goes back to the Twitter exchange I had with Pete, and the steps I take when talking with a prospective client:

1/ Explain what’s really possible with ML/AI these days

Companies hear a lot of ML/AI marketing, and that influences their expectations. When they reach me, I’m often the first data professional they’ve spoken with, so I spend part of those early discussions brushing away the hype.

We start by reviewing their business model to make sure we’re on the same page about what this company truly does, what challenges it faces, and where it wants to be in the future. We then talk about their ideal outcome so we have a clear picture of what they’d like to achieve. Based on that, I can explain what ML/AI can really do to help them meet that goal.

All of this contributes to a new, shared understanding which helps the prospect to make an informed decision on whether and how to proceed. Sometimes I can point them to a different solution than what they had in mind. It’s possible that they can meet their needs without using ML/AI at all.

2/ Explain ways things may deviate from their desired end-goal

By this point we’ve carved out their intended path. The next step is to explore possible deviations from that path so they can understand the downside risks.

For example, let’s say a company has asked for my help in researching an ML/AI model. Here are just a few of the problems we may encounter:

  • The training data is incomplete, or it lacks the features with the right “signal strength” to make suitable predictions. This means the model performs so poorly in testing that it never makes it to production.
  • The training data (from the past) doesn’t align with the data the model sees in the field (from the future), which means performance suffers in ways we didn’t see during model development.
  • As a special twist on the previous point, there’s a hidden feature: a feature we used coincidentally correlates with a key driver that is not explicitly present in the training data. The model performs very well during the test phase, but is completely confused in production, because that coincidental correlation only holds up sporadically and we’re unaware of the hidden driver’s existence. (A small synthetic sketch of this failure mode follows this list.)
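To make that last point concrete, here is a minimal synthetic sketch in Python (using numpy and scikit-learn). Everything in it — the data, the “hidden” and “proxy” feature names, the noise levels — is an illustrative assumption, not drawn from any real engagement; it simply shows how a model can ace its test phase and still collapse in production when a coincidental correlation breaks.

```python
# Sketch of the "hidden feature" failure mode, on made-up data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=42)
n = 5000

# Training era: the outcome is driven by a factor we never collect ("hidden"),
# and an observed feature ("proxy") happens to track that hidden factor.
hidden_train = rng.binomial(1, 0.5, size=n)
y_all = hidden_train                       # outcome equals the hidden driver, for simplicity
proxy_train = hidden_train + rng.normal(0, 0.2, size=n)

X_train, X_test, y_train, y_test = train_test_split(
    proxy_train.reshape(-1, 1), y_all, test_size=0.3, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)
print(f"Held-out test accuracy: {model.score(X_test, y_test):.2f}")   # looks excellent

# Production era: the outcome is still driven by the hidden factor,
# but the proxy no longer correlates with it.
hidden_prod = rng.binomial(1, 0.5, size=n)
y_prod = hidden_prod
proxy_prod = rng.normal(0.5, 0.6, size=n)  # unrelated to the hidden driver now

print(f"'Production' accuracy:  {model.score(proxy_prod.reshape(-1, 1), y_prod):.2f}")  # ~coin flip
```

In a real engagement the proxy is rarely this obvious. The point is that nothing in the test-phase numbers warns you the correlation was coincidental; you only find out once the model is in the field.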

We can make a best guess on whether any of these will happen. But we won’t know for sure until we try. That means a company might pay me to attempt to build a performant model, with no guarantee that it pans out. That’s a gamble. And that’s precisely why I want companies to make a fully-informed decision before we begin working together. Gambling is fine, so long as you understand the odds.

“Isn’t this a way to lose money?”

Sometimes prospects scale back their efforts, or even decline to move forward, once they understand the broader picture. That’s fine by me. I’m happy that I’ve helped someone to make a good business decision.

Other prospects, having a full understanding of the risks, still want to push forward on the chance they reap the reward. It means I’m working with a client who is fully aware of what lies ahead. And that is exactly what I want.

Once in a rare while I meet with someone who pretends the risks are only remotely possible, or even impossible. That’s when I decline to move forward. As you can tell by now, what I don’t want is a consulting engagement that’s full of preventable surprises.

(And for the consultants reading this piece: this isn’t an engagement you want, either. You signed up to help someone try to improve their business. Why are you signing up to fight them over money?)

“Isn’t this giving away a secret?”

You now see why I’m so open with a prospective client about how things may go awry. You may ask whether this honesty is some kind of consulting “secret sauce.” Why am I just giving it away?

Well, I give away a lot. Most of this blog explains the realities of ML/AI. Previous posts have explored the risks inherent in model deployment and ML/AI business models, practical approaches to developing a data strategy, how to hire data scientists, why a company should start with BI before diving into ML/AI, and understanding the total cost of deploying an ML/AI model.

Frankly, what I’m giving away are best practices, which I hope more people in this field adopt.

Getting the full picture

I prefer that my clients know what they’re getting into before we’ve inked a contract and started working together. And if we do hit a speed bump along the way, I want it to be within the realm of what they’ve already heard. I want everyone involved to be ready.

If you’re on the sell side of the relationship, consider what it would mean to let a prospect in on how things might not work out.

And if you’re on the buy side, the next time you speak with a consultant (or vendor, or anyone else with whom you’d sign a deal) … ask them for examples of what could go wrong. Their answers – or lack of answers – will tell you a lot.


Looking for an honest review of your company’s ML/AI efforts to-date? I provide ML/AI assessments for just this purpose. Suitable for general peace of mind as well as M&A scenarios.