Business Stakeholders: Three Questions to Improve Your Communications With Data Scientists

Posted by Q McCallum on 2021-09-27

When talking with your company’s data scientists, does the conversation quickly bog down? Try these questions to keep things moving.

In a previous post, I offered some tips for data scientists to improve their communication skills. Specifically, I shared four key questions data scientists should be able to answer when interacting with their company’s business stakeholders and department heads.

It’s not fair to place the entire burden on data scientists to bridge that gap. You, the business stakeholder, also need to improve your communication skills when you interact with data scientists. And if you always have answers to these three questions, you’ll be in a good place:

  1. “What is ML/AI, really?”
  2. “What, specifically, is it that I want to accomplish?”
  3. “Why can’t you just give me a number?”

As a bonus: having answers to these questions will improve your discussions with ML/AI vendors. It’s harder to sell snake oil to a well-informed prospect.

1/ “What is ML/AI, really?”

I’ve seen this time and again: executives insist on using ML/AI in their company without understanding what it is and what it can really do for them. They’ve often based their definitions and expectations on very lightweight learning materials. They figure that having a rough idea is enough for them, and they leave the rest to their soon-to-be data science hires.

If you want your company to make the most of this new and powerful capability, it’s up to you to develop an understanding of the key pieces of ML/AI and how they interact. Lightweight articles about FAANG companies’ data successes probably won’t help you. Nor will vendor pitch decks.

Now, do you need to know enough to work as a data scientist yourself? While that would be useful, it’s hardly a requirement. At a high level, you should know:

  • The relevant technical terminology. You will hear the terms “training data,” “algorithm,” and “model” a lot, and you’ll need to know how they fit together; the short sketch after this list shows the relationship. (The press often uses those last two interchangeably, which causes even more confusion… but that’s a story for another day.)
  • The terms “Big Data,” “data science,” “machine learning,” and “AI” do not have clear, industry-standard, widely-accepted definitions. Beware vendor pitch materials.
  • The difference between hard and soft numbers in your business.
  • The lifecycle of an ML/AI model.
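To make that first bullet concrete, here’s a minimal sketch. The data is invented and scikit-learn’s LogisticRegression appears purely as an illustration; the point is only that an algorithm learns from training data and produces a model.

```python
# A minimal sketch, using invented data, of how the terms fit together:
# the algorithm learns from training data and produces a model.
from sklearn.linear_model import LogisticRegression

# Training data (hypothetical): features and known outcomes.
X_train = [[34, 1], [52, 0], [23, 1], [61, 0]]   # e.g., customer age, is_repeat_buyer
y_train = [0, 1, 0, 1]                           # e.g., did the customer churn?

algorithm = LogisticRegression()         # the algorithm: a recipe for learning from data
model = algorithm.fit(X_train, y_train)  # the model: the artifact the algorithm produces

print(model.predict([[45, 1]]))          # the model makes a prediction for a new customer
```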

Knowing that will help you to understand:

2/ “What, specifically, is it that I want to accomplish?”

The ML/AI world involves a lot of interesting tools and techniques: neural networks, A/B tests, recommendation systems, the whole lot. And, to be fair, a lot of this stuff is just plain fun to explore. Just imagine the possible applications…

That’s why it’s all too easy to lock in on a tool and lose sight of the bigger picture. Remember to stay focused: you currently face a certain business challenge, and you are looking into AI in your quest for a solution.

Resist the temptation to approach your data scientists and product teams with specific, technical solutions in mind. You’re much better off explaining the challenge you face and what you want to achieve.

In doing so, you’ll show your team how much you trust their knowledge and abilities. Even if you’ve been reading up on AI techniques, what are the chances that you’re in a better position to evaluate a solution than your data scientists?

Sometimes that answer is “very high, because I come from an AI background and I’ve been testing (not just reading about) different techniques and technologies in my off-time.” Fair enough. You’ll still want to present your preferred solution as an idea, and only after the team has had a chance to think through the problem themselves. You hired them for their brainpower, right? If you do all of the thinking, they’ll start to question their purpose and their value, which is usually the first step toward looking for a new job.

3/ “Why can’t you just give me a number?”

An ML/AI model is very good at returning a simple answer. So why does your data science team hesitate when you ask for a quick, specific answer to your question?

The reason is that the “simple answer” returned from a model or analysis is only part of the story. When your data scientists refuse to answer your question with a single number, they’re trying to tell you about some of the uncertainty inherent in a projection or prediction. Instead of cutting them off and repeating your question, give them the floor so they can explore the details with you.

Consider the case of predicting a number. This is called a regression, and the analysis or model returns a single number called a point estimate. This point estimate is what the model has determined to be the most likely answer. What you see on-screen is a simple “7.5” and that’s what you think you want … but what the model is really saying is: “7.5 is not the answer, but I think it’s in the range of possible answers.”

Your data scientists can provide more context on that range. Is it a narrow spread, which indicates that the model is fairly sure of itself? Or is it a very wide spread, in which case the point estimate is probably not reliable? And if it’s a wide range, why so? Maybe it’s due to insufficient data, or messy data, or something else entirely. Without that context, the point estimate doesn’t really tell you much and you shouldn’t rely on it.
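If it helps to see the idea in code, here’s a minimal sketch. The data and numbers are invented, and a plain numpy bootstrap stands in for whatever method your data science team would actually use to quantify uncertainty; the 95% cutoff is just an illustrative convention.

```python
# A minimal sketch, with invented data: a regression's point estimate
# plus the range of plausible answers around it.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: an outcome that depends roughly linearly on one input, plus noise.
x = rng.uniform(0, 10, size=200)
y = 0.7 * x + 1.0 + rng.normal(scale=1.5, size=200)

# Fit a simple linear model and predict at a new input value.
slope, intercept = np.polyfit(x, y, deg=1)
x_new = 9.3
point_estimate = slope * x_new + intercept   # the single number you see on-screen

# Bootstrap: refit on resampled data to see how wide the plausible range is.
predictions = []
for _ in range(2000):
    idx = rng.integers(0, len(x), size=len(x))
    s, b = np.polyfit(x[idx], y[idx], deg=1)
    predictions.append(s * x_new + b)
low, high = np.percentile(predictions, [2.5, 97.5])

print(f"point estimate: {point_estimate:.2f}")
print(f"95% range:      {low:.2f} to {high:.2f}")
# A narrow range: the model is fairly sure of itself.
# A wide range: the point estimate alone isn't something to rely on.
```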

(It’s also entirely possible that the model is completely wrong, and that the correct answer is not in the specified range. That kind of uncertainty is a harsh reality of ML/AI, but I won’t go into too much detail here.)

It’s important for you to understand a model’s result before you act on it.

Speaking of which, how will you act on that result? Even if the model provides a very narrow range around that point estimate – so, you’re reasonably confident that the point estimate is very close to the correct answer – it’s still just a number. The model does not understand your business context. Putting that number into context is your job.

As the person in charge, you cannot abdicate responsibility to the model. You hold the ultimate decision on what to do. Understanding and accepting the uncertainty inherent in any ML/AI exercise will help you when you make that decision.

It’s all communication

Even though ML/AI is a technical matter, most of the challenges in this field are rooted in person-to-person communication. When communication breaks down, frustration mounts and important messages get lost in the shuffle.

It’s not enough for one side to improve. As a field, we need company stakeholders and data scientists to learn to understand each other’s context. Remember: communication is a two-way street.