As a business leader, achieving success in AI will likely require that you learn some things and unlearn others.
I talk about the “learning” side a lot. That mostly involves understanding how AI works and what it can(not) do. You’ll need this knowledge in order to set realistic goals and communicate effectively with the data team.
I’ve only hinted at the “unlearning” side before, so I’ll make it explicit here: as an executive, stakeholder, or product owner, achieving success in AI will require that you let go of certainty.
To put this in perspective: you've probably asked your company's data scientists questions like these before:
- “How much training data will we need?”
- “How will the model perform in training?”
- “How will the new model perform on real-world data tomorrow? Next week? Next year?”
The honest, plain answer to all of those questions is: “We won’t know till we know.”
When data scientists give that answer, they’re not dodging your questions. They’re not being lazy. They’re simply telling you a hard fact about how AI works.
I get that this is a hard pill to swallow. Western business leadership culture – especially the American variety – is built on bold statements and can-do energy. We're taught to always say "yes" and then work like hell to make things happen.
And sometimes that works! We often see this with software projects. Development teams will put in overtime and create "technical debt" by implementing sub-standard solutions, all for the sake of meeting a deadline. This works because software development is a (mostly) deterministic pursuit.
AI is a different story. AI is probabilistic. You can't give definitive answers to most AI questions because they're questions about the future, and that kind of certainty exists only in the past tense: after you've trained the model, after you've deployed it to production, after you've watched it operate on real-world data. No amount of money, staffing, or upbeat mission statements can force a working model into existence.
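To see why the answers live in the past tense, consider a toy simulation (no ML library, hypothetical numbers): the "same" training procedure, run on different samples of data, produces different accuracy each time. You only learn the number by running the experiment.

```python
import random

def train_and_evaluate(seed: int) -> float:
    """Stand-in for a real train/evaluate cycle: the outcome depends on
    which data the model happens to see (here, simulated by the seed)."""
    rng = random.Random(seed)
    # Pretend each of 200 held-out predictions is right about 85% of the time.
    correct = sum(rng.random() < 0.85 for _ in range(200))
    return correct / 200

# Five "identical" projects, five different results.
accuracies = [train_and_evaluate(seed) for seed in range(5)]
print(accuracies)
print(f"spread between best and worst run: {max(accuracies) - min(accuracies):.3f}")
```

The point isn't the specific numbers, which are made up here; it's that before you run the loop, no one can tell you which number you'll get.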
This is the harsh truth about AI. You can't get the potential upside of automated decision-making without accepting the potential downside: the model might fail to work.
So what do you do? How do you move forward?
Well, the answer I gave above is incomplete. The full answer is:
“We won’t know till we know. In order to move forward, we’ll need to establish policies and procedures that allow us to test things and develop confidence in the model. Even then, once it’s in production, we’ll have to keep an eye on it because we may encounter unexpected conditions or corner cases.”
That's it. You can't make the uncertain certain, but you can define guard rails that steer you clear of the worst outcomes.
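For teams that want to make "keep an eye on it" concrete, a guard rail can start as simply as a pre-agreed threshold on a live quality metric. A minimal sketch, with hypothetical names and numbers:

```python
def check_guard_rail(live_accuracy: float,
                     minimum_acceptable: float = 0.80) -> str:
    """Recommend an action based on how the production model is doing.
    The 0.80 floor is an example value the team would agree on up front."""
    if live_accuracy >= minimum_acceptable:
        return "keep serving"
    return "alert the team and consider rolling back"

print(check_guard_rail(0.91))  # healthy model: "keep serving"
print(check_guard_rail(0.62))  # unexpected conditions: time to intervene
```

The value of a rule like this isn't sophistication; it's that the threshold and the response are decided before deployment, when heads are cool, not after something breaks.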