Forty-two years ago today, Stanislav Petrov saved the world by questioning his nuclear early-warning system's report of incoming US missiles and declining to follow his instructions, which could have set a Soviet counter-attack against the US in motion.
Years later, he explained why he questioned his system.
Loosely translated: "By definition, the computer is an idiot. You never know what will (cause it to) trigger a launch."
(Source: Freelance Bureau (FLB), 2004/04/27, "Тот, который не нажал" — "The one who didn't press")
As we deploy AI systems with increasing authority and autonomy, we'd do well to learn from Petrov's actions. We must:
1/ keep humans in the loop
2/ ensure those humans can question what the system tells them
3/ grant them the authority to override the system
(Credit where it's due: while reviewing my upcoming book (update: now released!), Brett Bernstein jogged my memory on the Stanislav Petrov story. I came across the article and quote while checking my work. Many thanks, Brett!)