Before, not after
2025-11-19

Have you seen this article from The Register? "AI-enabled toys teach kids about matches, knives, kink"

One point caught my eye:

The company behind one such toy says that it will now perform a "comprehensive internal safety audit" and "[work] with third-party experts to verify existing and new safety features in its AI toys."

The full excerpt:

“FoloToy has decided to temporarily suspend sales of the affected product and begin a comprehensive internal safety audit,” the company’s marketing director Hugo Wu told us in an email. “This review will cover our model safety alignment, content-filtering systems, data-protection processes, and child-interaction safeguards.”

Wu added that FoloToy will be working with third-party experts to verify existing and new safety features in its AI toys.

“We appreciate researchers pointing out potential risks,” Wu added. “It helps us improve.”

On the one hand: yes, you want expert eyes on your product and you want to review your products for potential problems. Absolutely.

On the other hand: while I applaud FoloToy for owning up to this, I'll also note that the incident was entirely preventable.

What's the takeaway lesson for your company? Simple:

You want to perform this due diligence before your product is released to the public, and before its flaws put your company in the spotlight.

Ideally, these checks would be embedded in every step of your product lifecycle: planning, R&D, deployment, and beyond.

It's easy to embed genAI into your product – you just need an API key and some code! – but it takes experience and expertise to do it well.

Would you like to navigate the risks in your proposed AI use cases and AI-backed products? Reach out. I can help.

For more guidance on avoiding AI-related mishaps, I recommend my latest book: Twin Wolves: Balancing risk and reward to make the most of AI.

This is a slim, leadership-level guide to approaching AI. It explains AI's inherent risk/reward tradeoff and how to optimize that tradeoff in your company's products – uncovering more upside gain while protecting yourself from downside loss.

The pre-project due diligence I mentioned above?

That's the first rule in the chapter on R&D: "Perform a review of every AI project before it starts."

One false step…

Live demos are a tightrope act

Going industrial

Anthropic's new partnership with Sweden's IFS