Ten traps for your company's AI transformation
2026-01-16 | tags: AI data literacy
A sign that reads 'Danger: steep terrain and cliffs beyond this point'. Photo by Greg Rosenke on Unsplash.

(Photo by Greg Rosenke on Unsplash)

For my first blog post of 2026, I'm sharing a list of ten traps you may encounter on your company's AI journey.

What do I mean by "traps"? These are ideas that seem reasonable on the surface, but ultimately lead to trouble. They're unintentional acts of self-sabotage that keep you from reaching your full AI potential. They probably drag on your balance sheet, too…

Read on to see how leaders accidentally fall into these traps, and what to do about it.

Trap 1: Skipping data fundamentals

FALLING INTO THE TRAP: Companies that are eager to get into genAI will skip over other data-related opportunities in BI/reporting, data science, and ML. They also skip much-needed data foundation work.

Going straight to AI means missing out on the wins that these other technologies can provide, and building a weaker foundation for your AI efforts.

WHAT TO DO ABOUT IT: While genAI makes all the headlines these days, BI and ML still hold plenty of promise. (And if you write this off with "we tried those and they all failed," then I have some bad news about how your genAI will turn out ...)

Getting into AI means first getting your house in order:

Once you've exhausted what those can do for you, _then_ it's time to explore genAI.

Trap 2: Skipping AI literacy

FALLING INTO THE TRAP: Execs insist on using AI, but they want to move forward on a surface-level understanding.

This seems to work out at first, but falls apart as expectations meet reality.

There's a lot of friction when the technical teams explain that the execs' ideas are not possible.

WHAT TO DO ABOUT IT: If a company is to be successful with AI, company leadership needs to develop AI literacy – knowledge of AI, appropriate for their level and role in the business. With this knowledge, they will be prepared to:

Without this knowledge, it's too easy to get an incorrect picture of what "good" and "success" look like. And that's how you wind up very far from "good" and "success."

Trap 3: Winging it

FALLING INTO THE TRAP: The company is in such a hurry to "do AI" that they dive straight in on running projects and connecting AI to business processes.

This feels great at first, because there's the excitement and the buzz of activity. But over time you realize that you haven't really moved the needle. Or implementation of AI in one business unit is causing problems in another. Overall this burns time, money, effort, and morale.

WHAT TO DO ABOUT IT: You're better off developing plans:

At a high level, you need an AI strategy. (This isn't a mission statement or rallying cry. It's a specification of what AI projects the company will undertake, and why.)

From there, you'll need detailed plans for those projects, including impact analyses and risk assessments.

If you have a plan, you'll be able to tell how things are going and how close you are to your goal. You'll know what success looks like, which means you'll know what failure looks like … which also means you'll know when to change course and when to stop.

Trap 4: Over-delegating (going completely hands-off)

FALLING INTO THE TRAP: I've seen this since the early days of Big Data and data science. After the CEO issues the bold, sweeping declarations about adopting some new technology, they leave everything to a technology-specific hire.

They essentially say "OK we have our AI people" and walk away, waiting for the magic to happen.

Later on, they wonder why there's no progress.

WHAT TO DO ABOUT IT: No matter how much you trust the CDO/CAIO/Head of AI, you also want to play a role in how AI is used in your company. Making AI work is a joint effort.

This goes back to yesterday's point about developing an AI strategy. You, the CEO, have deep knowledge of the company's business model and challenges. The CDO/CAIO has deep knowledge of all things data and AI. When you put your heads together you can figure out what uses of AI actually make sense for the company.

This isn't just a one-time deal, either. You can't participate in the strategy exercise and walk away. If AI is to become an integral part of your company, you need to stay up to date on what's going on.

Trap 5: Under-delegating (going too hands-on)

FALLING INTO THE TRAP: While some execs aren't involved enough in the company's AI work, others are _too_ involved.

I've mostly seen this with leaders who come from a technical background. They have deep experience in writing code and deploying apps (good!) but that leads them to focus on technical minutiae way too early (not so good).

For example, sometimes a CTO will ask for my take on some specific tool or technique … but their company hasn't figured out how it will use AI. Which means it's not clear what technology is even needed.

WHAT TO DO ABOUT IT: It's great to learn about the technology you plan to deploy. Some people are tactile learners and need to build something from the ground-up in order to understand it. Or they need to get very deep into the specifics to develop their confidence. That's great! And if that's your style, I wouldn't discourage you from taking that route.

I would, however, propose that you focus on strategic matters first and _then_ explore the deeper technical aspects. As an executive, the company can't afford for you to get lost in the weeds.

Trap 6: Only thinking of the upside

FALLING INTO THE TRAP: Companies sometimes focus on AI's potential upside gains, like cutting headcount or improving productivity, without thinking about potential downsides.

This looks great on paper: the cost of the AI system is a fraction of the incumbent solution! What could possibly go wrong?

Later on these companies get bitten by issues like model errors, malicious actors, and random weirdness. The ensuing cleanup effort suffers because it happens under time pressure.

WHAT TO DO ABOUT IT: As you map out what might work out well, spend the time to also map out what might go wrong. Then figure out how you'll handle it.

Whatever's going to happen will happen. So why not prepare?

You can start by performing a risk assessment of your system, then testing the hell out of it. Red-team it. Try to break it. Feed it weird inputs and see what happens. And if you find a problem, you have to actually fix it.

The best example of this was in December, when Anthropic turned a newsroom of WSJ journalists loose on its AI-powered vending machine. The newsroom tricked the bot left, right, and center.

Thankfully, Anthropic installed the vending machine as a test. They specifically wanted the journalists to take a swing at it. So the bigger lesson from this exercise is that the red-teamers (the journalists) were not in the Anthropic org chart! They had carte blanche to be their most devious selves and to report problems, with zero fear of reprisals.

Trap 7: Only thinking of the downside

FALLING INTO THE TRAP: genAI has suffered some spectacular failures and the list keeps growing. Some execs therefore assume the entire AI space is snake oil. So they decide to sit on the sidelines and never even try to use the technology.

The problem? Since they never try, they never see what AI could do for them.

WHAT TO DO ABOUT IT: This is about AI's inherent risk/reward tradeoff: On the one hand, if your company never uses AI then it can't fall victim to an AI failure. On the other hand, if your company never uses AI, you'll never have a shot at the upsides AI has to offer!

Your best bet here is to take the time to explore your AI opportunities. I mean really explore them, where you develop AI literacy (as noted in Trap 2) and make a plan (from Trap 3).

It's possible that this exploration won't turn up any meaningful use cases. That's disappointing, but at least you know. You'll no longer have to _guess_ either AI could have helped.

Trap 8: Keeping your technical teams in the dark

FALLING INTO THE TRAP: Sometimes the exec team and/or product owners will:

  1. dream up an entire AI-backed product
  2. promise it to stakeholders or customers
  3. tell the AI team to build it

It's only at step 3 that the execs finally hear the bad news: their AI idea won't work. And now everyone has to scramble – the AI team assembles some subpar substitute, or the sales team dodges questions from prospects, or both.

WHAT TO DO ABOUT IT: If you only value your data scientists and ML engineers when they are cranking out models … you're doing it wrong.

These people are your company's data experts! If you want to succeed in data science/ML/AI, you want to:

This is how you catch AI-related problems _before_ you release the product, and when you still have a chance to course-correct.

Trap 9: Expecting software-like predictability

FALLING INTO THE TRAP: Companies with a lot of experience building software lull themselves into a false sense of security with AI.

"AI is just code," they say. "And we're great with code!"

They further assume that they can predict project deadlines, and then they further assume that the model will behave as expected.

They encounter nasty surprises when the model exhibits poor performance, or when the chatbot goes off the rails in a public setting. To make matters worse, these companies are now in a scramble to make things work because they announced these AI-powered features long before anything was ready.

WHAT TO DO ABOUT IT: If your company runs a top-notch software development shop, you're accustomed to being able to announce features before they've been built and otherwise make long-range plans. That's because you know your team, you know what they can deliver, and you trust them to be able to deliver by a certain date.

This same behavior will cause you no end of headache with AI.

You never know how a model will perform until you've built it, and you won't know how it'll react to changes in the real world until they happen. That means you can't promise a certain level of performance by a given date. And you can't make the model completely predictable.

A safer approach would be to:

As I often say: Never let the model run unattended!

Trap 10: Employing AI when it's not fit for the job

FALLING INTO THE TRAP: AI is often positioned as a substitute for human workers. Call centers and other customer service roles are at the top of AI's list, with some consultancy and advisory work running a close second.

This leads some executives to make bold declarations about AI replacing jobs, then learning the hard way that the AI wasn't nearly capable enough. Not only does the company suffer when the AI fails, it may also have to re-hire people to clean up after the bot's mess.

WHAT TO DO ABOUT IT: Remember: AI doesn't replace _jobs._ At best, it replaces _tasks._

When you're looking for AI use cases, then, asking "can AI replace this team?" is the wrong question.

The right question requires that you think on a more granular level: "why don't we ask this team what are their most dull, repetitive tasks? We can then determine which would be most suitable for automation. That will free up the people to tackle work that actually requires their industry experience."

… and so many more

There are far more traps out there, to be sure. Some of them are industry-specific, or otherwise niche. These ten should cover the most common traps you'll find when bringing AI into a company.

None of these are quick, magic-fairy-dust matters. They'll take hard work and discipline to address. But it's the kind of work that pays dividends – you can expect a smoother AI transformation, reduced downside risk exposure, and increased upside opportunity.

For more details, I encourage you to check out my latest book: Twin Wolves: Balancing risk and reward to make the most of AI. This is a short, executive-level guide on approaching AI with a mindset of risk-taking and risk management.

Complex Machinery 053: Amateur hour

The latest issue of Complex Machinery: The genAI hype wave has become a multi-trillion-dollar open mic night.

Complex Machinery 054: The cracks begin to show

The latest issue of Complex Machinery: genAI companies have been stress-testing their products in the real world. It doesn't always turn out well.