A very human problem
2025-07-22

A story has been making the rounds, so you may already be aware of it. In case you need a catch-up: a company was experimenting with AI agents and, well, one AI agent reportedly deleted a production database.

A lot of people will point fingers at genAI agents here. While I'm hardly an AI cheerleader, this doesn't strike me as an AI problem. More of a very human problem.

That problem? A lack of proper risk controls.

Automated systems – whether based on AI, software, or machinery – do not know when they are operating out of their depth.

You may think you're "conversing" with an AI bot, but it doesn't really understand you, because it's not a sentient being. And in this case, the bot can't truly acknowledge its error. Deep down, it's a set of patterns, expressed as mountains of linear algebra. That's it.

As such, it's up to you to create the conditions in which the bot can operate safely.

If you want to avoid similar incidents in your company, you don't have to give up on AI. You need to understand and tune your risk/reward tradeoff. To capture the upside while limiting your downside exposure, you must:

1/ Develop the right use cases. Use AI where it's appropriate.

2/ Monitor and contain your AI systems. Build in padding and human oversight.

and those both stem from:

3/ Develop AI literacy. So you understand what's a suitable task for AI, and how to protect against what might go wrong.
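To make point 2 concrete, here's a minimal sketch of one such risk control: a gate that blocks an AI agent's destructive database commands unless a human has explicitly signed off. All names here (`is_destructive`, `run_agent_command`) are hypothetical, and a real system would parse statements properly rather than matching keywords.

```python
# Hypothetical guardrail: destructive SQL from an agent requires human approval.
DESTRUCTIVE_KEYWORDS = ("drop", "delete", "truncate", "alter")

def is_destructive(sql: str) -> bool:
    """Crude check on the leading keyword; a real system should parse the SQL."""
    words = sql.strip().split()
    return bool(words) and words[0].lower() in DESTRUCTIVE_KEYWORDS

def run_agent_command(sql: str, human_approved: bool = False) -> str:
    """Execute non-destructive statements freely; gate destructive ones."""
    if is_destructive(sql) and not human_approved:
        return f"BLOCKED (needs human approval): {sql}"
    return f"EXECUTED: {sql}"
```

The point isn't this particular check; it's that the containment lives outside the bot, in plain code you control, so the bot's mistakes stay recoverable.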

And if you need help with any of those, reach out.
