I thoroughly enjoyed Kashmir Hill's New York Times piece on a "decision holiday" driven by genAI bots:
My favorite part: one bot acknowledged its limitations and bowed out. Other AI providers, please take note!
The lesson: genAI models don't know when they're operating out of their depth. Smart bot-makers build guardrails to protect the model (and their share price!) from its own naïveté.
(I covered this and more last year in "Risk Management for AI Chatbots".)
(Also, this all dovetails nicely with Friday's Complex Machinery newsletter: Copilot refuses to answer election-related queries, and that is brilliant.)