Bad bot interactions
2025-06-13

This is an article by NYT's Kashmir Hill on genAI bots leading people down dangerous paths. There's a lot to unpack here for companies that develop genAI-based products, so I'll focus on three here:

1/ Always check the bot's outputs.

Remember that ML/AI models -- from so-called "classical" ML to today's genAI systems -- have a very limited understanding of the world. The word "understanding" even feels like a stretch, when you consider that it's really about patterns a machine surfaced in a pile of training data.

Models can and will emit problematic outputs now and then. And that's fine. What gets you in trouble is when you put those outputs in the public eye without checking them for errors or other problems.

2/ A complication with item #1 is when the machine is interacting directly with the end-user.

In those cases the end-user is the only human in the loop, and sometimes that end-user is not the best person to evaluate the outputs.

3/ When you red-team your app, think beyond the short-term, joyriding bad actor. Consider the longer-term interactions with someone who is in a vulnerable state.

These groups represent different kinds of trouble and therefore merit different approaches.

Joyriding bad actors? Your security team knows what to do.

Vulnerable end-users? Your product team needs to act.

(In both cases: legal, PR, and risk management need to be involved.)

AI strategy: Asking the right questions

How to approach your company's AI plans

GenAI-based search fails again

Your periodic reminder that genAI chatbots are a poor substitute for search