Executive discomfort: AI and uncertainty
2025-09-04 | tags: risk AI data literacy

(Photo by Laurin Steffens on Unsplash)

Part of my consulting practice involves bringing executives up to speed on AI – what the term really means, what they should expect during a project, and what kinds of risks and opportunities are involved.

Through this work I've found one thing that reliably makes executives tense: AI's inherent uncertainty.

They're taken aback when I explain that the model they're planning might not work. Or that there's no way to 100% protect their chatbot against misuse. Or that their AI-driven product feature could backfire. They've spent the past couple of decades in a world of custom software that encodes firm business rules, so the idea of a system that is more probabilistic than deterministic throws them for a loop.

Why does AI inject so much uncertainty into a product? And how can you handle it?

To start, let's clear up some terminology:

The big question mark

The core idea of uncertainty is that the future is a question mark. It hasn't happened yet (otherwise it would be the past) so we can't say for sure what's coming.

That leads to two points about uncertainty:

1/ The future holds a number of possible outcomes. One decision, one point in time, could branch off in multiple directions. Somewhere in there is your desired outcome. But it's not the only path. One way to think of uncertainty is that there's always another possible outcome you hadn't considered.

2/ You cannot control the future. You can influence it. But in the end, your vote is one of many.

A CxO's job is to make decisions, and every decision is a bet on a future outcome. Injecting uncertainty into their decision calculus – replacing constants with variables, so to speak – makes that job a lot harder.

This goes double in Western business culture, where a can-do attitude (and maybe extra budget allocation) can sometimes push a lagging project over the finish line.

That explains uncertainty in general. Now let's consider how this applies to AI:

A machine's cloudy crystal ball

AI brings a particular flavor of uncertainty to product ideation, design, and development. Every such product centers on the model; yet you never know how well a model will perform until you've tested it.

Even then, those test results only reflect... well... the test. It's entirely possible that a model will pass internal tests with flying colors and later fall over when deployed to the real world.

The harsh reality of AI is that you can do everything "right" as far as choosing your source of training data, collecting it, and preparing it for analysis. You can hire the brightest minds and give them the best tools. You can pour tons of time and effort into an AI project, putting your reputation on the line in the process. You can do all of that, and the modeling project may still fail.
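One way to make "test results only reflect the test" concrete: a test-set score is a statistical estimate, not a guarantee. The sketch below (illustrative data, not a real model) bootstraps a confidence interval around a hypothetical 92% test accuracy, showing the plausible range of performance rather than a single number.

```python
import random
import statistics

random.seed(0)

# Hypothetical per-example results from an internal test set:
# 1 = model got it right, 0 = it didn't. (Illustrative, not real data.)
test_results = [1] * 92 + [0] * 8  # 92% accuracy on 100 test cases

def bootstrap_ci(results, n_resamples=2000, alpha=0.05):
    """Resample the test set with replacement to estimate a confidence
    interval for the metric -- the 'test score' is an estimate, not a fact."""
    scores = []
    for _ in range(n_resamples):
        sample = random.choices(results, k=len(results))
        scores.append(statistics.mean(sample))
    scores.sort()
    lo = scores[int(alpha / 2 * n_resamples)]
    hi = scores[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

lo, hi = bootstrap_ci(test_results)
print(f"Point estimate: {statistics.mean(test_results):.2f}")
print(f"95% interval: roughly {lo:.2f} to {hi:.2f}")
```

Even on the test set itself, the "true" accuracy could plausibly sit anywhere in that interval. And that's before the real world serves up data the test set never covered.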

In turn, that uncertainty warps the product roadmap: An executive can't (safely) announce the Hot New AI-Based Product Feature until the data team has spent weeks, if not months, just trying to get it to work. And once the thing works, everyone's still on pins and needles because it could suffer a meltdown. In public. Without warning. As competitors watch from the sidelines.

(Case in point: Apple eating humble pie as it postpones AI-related Siri updates. Here's some coverage in Bloomberg and The Guardian if you missed that story.)

The path forward

Once you've decided to incorporate AI into your products, you are subject to all of AI's uncertainty. And the risk exposures that come with it.

Granted, you could plow ahead while pretending that AI is a sure thing. You could do that. Sheer luck will sometimes carry you to success! But you don't want to rely on luck. So what else can you do to leave yourself open to AI's gains, while reducing the chances that you get burned?

Accept. Then act.

First you have to accept that AI injects uncertainty into your product plans. No amount of hand-waving or good vibes can force a model to work properly.

Having done that, you can take action:

1/ Evaluate your projects. Suss out whether AI is even a good fit for what you're trying to do. And if so, try to gauge the chances of success along with the cost of failure.

2/ Set boundaries. Outline how much time, money, and effort you'll invest in a given project. And define the conditions under which you'll stop early.

3/ Prove it out, then announce it. You won't have to walk back a public statement that you never made in the first place.

4/ Keep an eye on it. Once the model is operating in the real world, establish monitors and failsafe switches to catch when things go awry.
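The "failsafe switch" in step 4/ can be as simple as a circuit breaker: track recent model outcomes in a sliding window, and route traffic to a non-AI fallback when the error rate crosses a threshold. A minimal sketch (class and parameter names are illustrative, not from any particular library):

```python
from collections import deque

class ModelCircuitBreaker:
    """Sketch of a failsafe switch: watch the model's recent error rate
    and trip to a fallback when it crosses a threshold."""

    def __init__(self, window_size=100, max_error_rate=0.2):
        self.window = deque(maxlen=window_size)  # recent success/failure flags
        self.max_error_rate = max_error_rate
        self.tripped = False

    def record(self, success: bool):
        """Log one model outcome; trip the breaker if the window is full
        and the error rate exceeds the threshold."""
        self.window.append(success)
        if len(self.window) == self.window.maxlen:
            error_rate = 1 - sum(self.window) / len(self.window)
            if error_rate > self.max_error_rate:
                self.tripped = True  # route traffic to a non-AI fallback

    def use_model(self) -> bool:
        return not self.tripped

breaker = ModelCircuitBreaker(window_size=10, max_error_rate=0.3)
for ok in [True] * 8 + [False] * 2:   # 20% errors: under the threshold
    breaker.record(ok)
print("model live:", breaker.use_model())  # still live

for ok in [False] * 4:                # a burst of failures trips the breaker
    breaker.record(ok)
print("model live:", breaker.use_model())  # fallback engaged
```

In practice you'd pair this with alerting and a human review step before re-enabling the model, but the core idea is the same: decide the shutoff conditions before the meltdown, not during it.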

Once again, there's no guarantee that your project will work out. But if you take these steps, you'll increase your chances of success and become more aware of what might go awry. That's your best bet for handling an uncertain future.

(Does your company need help navigating AI's uncertainty? I can help. Contact me to get started.)
