What you don't know
2025-09-03

This article notes an inverse correlation between AI knowledge and AI experimentation.

This aligns with my experience as a longtime AI practitioner. Those of us who spent the pre-LLM years building predictive ML models and GANs see genAI in a different light:

We have a clearer view of what is and isn't possible with this technology. As such, we can filter out many unsuitable use cases with pen and paper – no need to build a proof-of-concept only to find that it won't work. We can also identify and navigate around the pointy parts of the suitable use cases, improving the chances that a given project works out.

Flip this around and you see one reason why so many corporate genAI adoptions fail: widespread (often haphazard) experimentation is a slow, grinding, expensive, and risky way to determine where AI will help you and where it won't. If you really like your sledgehammer, you're less likely to look for a door.

To be clear: the slower, tougher road is fine for personal exploration! Some people are experiential learners, and that's great. But in a business context, there's much more at stake. If you want to protect your investment and your reputation, you're better off developing AI literacy and working with an experienced professional.

(I just happen to be one of those experienced AI professionals. Reach out if your company needs help sorting out use cases.)

A different kind of thirst trap

A reminder to build safeguards for your AI-based system.

Changing the subject

Meta chatbots will now avoid discussing certain topics with teens.