Does your company have a deep bench of ML/AI projects to try? You can look to the stock market for guidance on how to group and prioritize them.
Let’s say you wanted to buy some shares of a company’s stock. This is a situation where you go in with the hope of making money, but you also know it’s entirely possible that you could lose some or all of your investment.
What’s your first step? You’d probably do some amount of homework on the company before rushing to a broker. Maybe even research other companies in that same sector. You’d develop a clearer picture of whether and how much to invest, which would in turn help you avoid a loss.
What holds for a financial instrument also holds for an ML/AI project: your money, time, effort, and reputation are on the line if it doesn’t work out. Doing your homework will guide you on how and when to take action.
Reviewing your projects
The first step is to review each project during the planning phase. Is this something with a stable, predictable outcome? New BI reports, a dashboard, or an update to an existing ML/AI model would fall into that category. A new predictive modeling effort, by comparison, would be more experimental. As would trying out an algorithm your team hasn’t used before.
Digging deeper, you can compare the project’s expected cost to its intended outcome. What’s the payoff if it works? How much effort will it take to get it off the ground? Given your team and your financial resources, how does your expected time-to-market align with the window of opportunity?
When calculating the cost, you’ll also want to explore the various dimensions of risk. What are the chances that the project won’t reach successful completion? And how will you detect that when it’s underway? A project failure could weigh on your team’s morale in addition to your budget, so keep that in mind.
What about the endogenous risk factors, which stem from the project itself? For example: what’s the cost of the model being wrong? (This technical matter can quickly evolve into a PR issue. Just ask Twitter about that ML/AI model it built to crop photos.)
Exogenous risk factors also weigh in here. What if the world changes in a way that causes the model to misbehave in production? What if you lose access to your primary upstream data source?
(For more details on calculating an ML/AI project’s cost, check out my piece on TCM: Total Cost of Model.)
Having reviewed your upcoming ML/AI projects, you should now have a better understanding of cost, benefit, and potential for success for each one.
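To make the review concrete, you can reduce each project to a few rough numbers. Here’s a minimal sketch in Python; the project names, payoffs, costs, and success probabilities are all hypothetical placeholders you’d replace with your own estimates:

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    payoff: float     # estimated benefit if the project succeeds
    cost: float       # estimated total cost of the effort
    p_success: float  # rough probability it reaches successful completion

    def expected_value(self) -> float:
        # Expected payoff minus cost: a crude first-pass ranking signal,
        # not a substitute for judgment about morale and reputation.
        return self.p_success * self.payoff - self.cost

# Illustrative numbers only.
projects = [
    Project("BI dashboard refresh", payoff=50, cost=20, p_success=0.9),
    Project("New predictive model", payoff=200, cost=80, p_success=0.4),
]

for p in sorted(projects, key=lambda p: p.expected_value(), reverse=True):
    print(f"{p.name}: expected value = {p.expected_value():.1f}")
```

A stable project with a modest payoff can outrank a flashy experiment once you discount the payoff by its odds of success, which is exactly the comparison the review is meant to surface.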
Taking the portfolio approach
Going back to our stock example, it’s unlikely that you’d buy shares of just one company’s stock. You’d do your homework on several companies, across a variety of fields, to develop a portfolio of investments.
You can similarly devise a portfolio of ML/AI projects. This would be a group of projects, some subset of your road map and wish list, that you’d run at the same time. Developing the portfolio involves sorting out which projects fit together, based on your constraints of money, time, team capacity, and reputation.
Defining the portfolio needn’t be a complicated affair. This can be as simple as moving sticky notes around a board (one sticky note per project) to see how different mixes of projects would size up.
Just like a portfolio of stocks, you want to balance that portfolio according to risk. Some projects are less likely to work out than others. A portfolio that’s weighted heavily on those riskier projects therefore faces a stronger chance of multiple failures. Besides the financial loss, this puts your entire ML/AI operation at risk: company leadership may no longer wish to fund it if they see it as a drain on the budget.
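One quick way to see whether a candidate mix is failure-heavy is to sum up the expected failures across the portfolio. A small sketch, with assumed success probabilities standing in for the ones your own reviews would produce:

```python
# Illustrative candidate portfolio: project -> estimated probability of success.
portfolio = {
    "Dashboard update": 0.9,
    "Model retrain": 0.85,
    "Novel NLP experiment": 0.35,
    "New vendor-data model": 0.5,
}

# Expected number of failures across the whole portfolio.
expected_failures = sum(1 - p for p in portfolio.values())

# Fraction of projects that are individually risky (threshold is arbitrary).
risky_share = sum(1 for p in portfolio.values() if p < 0.6) / len(portfolio)

print(f"Expected failures: {expected_failures:.2f}")
print(f"Share of risky projects: {risky_share:.0%}")
```

If the expected-failure count climbs past what your budget and your standing with leadership can absorb, that’s a signal to swap a risky sticky note for a tamer one.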
Then again, a portfolio overloaded with stable, predictable projects is not employing ML/AI to its full potential. You need to take a chance once in a while to have a shot at a big win.
Another element of the balance is how much reputational capital you can afford to spend. When you have a lot of money on hand, several successful projects behind you, and a leadership team that’s hungry for ML/AI, you can build a riskier portfolio. This is the time to experiment, because the wave of your recent successes will carry you through newer projects that don’t pan out. When you’ve had a rough year, though, your project portfolio should be more tame in order to rebuild those reserves of trust.
Watch for shared risks
The projects’ shared, systemic risks also influence portfolio construction. When several projects share the same underlying problems, that increases the chances that they’ll fail at the same time and for the same reason. Several projects that are individually tame and predictable may therefore become high-risk when run concurrently.
Going back to our stock market analogy, let’s say you’ve built a portfolio of airline stocks. Is this a balanced portfolio? Writ small, yes: you’re protected against problems in a single airline (a corporate scandal or labor strike) because that won’t impact the other companies’ stock prices. Writ large, no: you’re very much exposed to wider problems that affect multiple airlines or even the entire sector. A single spike in fuel costs, malfunctions of a given type of aircraft, or a pandemic that sidelines travel will cause all of those share prices to drop at once, thereby wrecking your stock portfolio.
In the ML/AI world, shared risks include: relying on source data that comes from a single external vendor; working on multiple problems for the same business unit; or using a toolset that is new to your team (such as neural networks) on several concurrent projects.
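Spotting that kind of concentration can be as simple as tagging each project with its dependencies and counting how many projects each tag touches. A sketch with hypothetical projects and dependency tags:

```python
from collections import defaultdict

# Hypothetical dependency tags per project -- substitute your own.
dependencies = {
    "Churn model": {"vendor_data", "neural_nets"},
    "Pricing model": {"vendor_data"},
    "Forecast revamp": {"vendor_data", "neural_nets"},
    "BI dashboard": {"warehouse"},
}

# Invert the mapping: dependency -> projects exposed to it.
exposure = defaultdict(list)
for project, deps in dependencies.items():
    for dep in deps:
        exposure[dep].append(project)

# Flag any dependency shared by more than half the portfolio.
threshold = len(dependencies) / 2
for dep, projs in sorted(exposure.items()):
    if len(projs) > threshold:
        print(f"Shared risk: {dep!r} underpins {projs}")
```

Here a single vendor feeding three of four projects would get flagged; the “more than half” threshold is an arbitrary choice you’d tune to your own risk tolerance.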
Position yourself for a win
You hope that all of the ML/AI projects you undertake will turn out well. They can also fail, which will cost you time, money, morale, and reputation. You can shield yourself from some of that risk by reviewing individual projects early on and then running a balanced portfolio of projects that have different costs and failure modes.
All of this relies on you being able to properly evaluate each project for its pitfalls as well as its potential. You must be honest with yourself, work in a company culture that supports people raising potential problems, and have a good eye for what can go wrong. Combined, these will protect you from downside loss from ML/AI work while still leaving you open to achieve the upside gain.