AI as a rogue trader
2026-02-23

(Photo by Joe Dudeck on Unsplash)

On 23 February 1995, Barings Bank uncovered a mass of fraudulent trades. The resulting £800M ($1.3B) in losses drove it into bankruptcy three days later.

This is a story of a rogue trader and failed risk controls. It also bears a warning about agentic AI.

The trader, Nick Leeson, had racked up bad trades over a three-year period. Proper risk controls and reporting will usually catch bad trades long before they've metastasized into an existential threat. But Barings wasn't set up that way.

Instead, Leeson was able to cover his losses by falsifying records and shuffling money around. This only came to light when the January 1995 Kobe earthquake shifted markets against his positions.

Reflecting recently on the Barings disaster, I realized:

genAI agents are poised to create the same threat as rogue traders.

How so? Because, in many cases, genAI agents will be granted broad authority under insufficient oversight.

Under those conditions they'll have ample scope to create large problems that are only uncovered by accident, and only when it's too late.

How can you narrow your downside risk exposure? Preventative measures offer some protection: strong risk controls, narrowly scoped authority, and continuous monitoring, to name a few.
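To make those measures concrete, here is a toy sketch of what they might look like in code: an agent wrapper enforcing an action allowlist (narrow authority), a hard spend cap (risk control), and an audit trail (monitoring). The class, action names, and limits are all hypothetical, not any real agent framework:

```python
from dataclasses import dataclass, field

class ScopeViolation(Exception):
    """Raised when an agent action falls outside its granted authority."""

@dataclass
class GuardedAgent:
    # Hypothetical guardrail wrapper around an AI agent's tool calls.
    allowed_actions: set        # narrow scope of authority
    spend_limit: float          # hard risk limit
    spent: float = 0.0
    audit_log: list = field(default_factory=list)  # monitoring

    def act(self, action: str, cost: float = 0.0) -> None:
        # 1. Narrow authority: refuse anything not on the allowlist.
        if action not in self.allowed_actions:
            self.audit_log.append(("BLOCKED", action, cost))
            raise ScopeViolation(f"{action!r} is outside this agent's authority")
        # 2. Risk control: refuse actions that would breach the budget.
        if self.spent + cost > self.spend_limit:
            self.audit_log.append(("BLOCKED", action, cost))
            raise ScopeViolation(f"{action!r} would exceed the spend limit")
        # 3. Monitoring: every permitted action leaves an audit record.
        self.spent += cost
        self.audit_log.append(("OK", action, cost))
```

The key property is that blocked actions still land in the audit log, so a reviewer can spot an agent probing the edges of its authority — the early-warning signal Barings never had.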

Many companies are too excited about genAI and will skip these basic safety precautions. Hopefully yours will prove the exception!