What you see here is the last week's worth of links and quips I have shared on LinkedIn, from Monday through Sunday.
For now I'll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.
Some UK stores are adopting facial recognition technology to deter shoplifting. Among other things, I see this as a matter of both AI ethics and AI metrics:
"‘We’ll just keep an eye on her’: Inside Britain’s retail centres where facial recognition cameras now spy on shoplifters" (The Guardian)
We're already familiar with the AI ethics part: facial recognition technology has a poor track record.
Now, what about the AI metrics part? It's in this excerpt here:
[Facewatch founder Simon] Gordon argues that Facewatch has few downsides, stating that images of innocent shoppers are kept on the system for 14 days – “less than the 30 days of CCTV” – and that the current accuracy of its camera technology stands at 99.85%.
Misidentifications are rare, he added, and even then the implications are minor. “You’re not being thrown in prison – there’s no miscarriage of justice,” said Gordon.
Even if the metric of "99.85%" accuracy holds true – and I have my doubts – that number is still too low.
You might ask: "Wait, too low? Isn't 99.85% very close to 100%?"
It depends.
Metrics matter when it comes to AI. In particular, metrics matter in context.
And in this context, we need to consider the cost of a model's errors: at 99.85% accuracy, for every one million shoppers, 1,500 will be misidentified by this system.
Facewatch may convince itself and its customers that this number is acceptable. But for any person who is mistakenly removed from a store, or simply tailed by store security as they browse, 99.85% is too low.
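That 1,500-per-million figure is just back-of-the-envelope arithmetic. A minimal sketch, assuming the quoted "99.85% accuracy" means 0.15% of all scanned shoppers are misidentified (a simplification; real-world error rates would split into false positives and false negatives):

```python
# Back-of-the-envelope check on the quoted accuracy figure.
# Assumption: "99.85% accuracy" implies a 0.15% misidentification
# rate across all shoppers scanned.
accuracy = 0.9985
shoppers = 1_000_000

misidentified = round(shoppers * (1 - accuracy))
print(misidentified)  # 1500 misidentified shoppers per million
```

Scale matters here: an error rate that sounds tiny as a percentage turns into a large absolute number of affected people once a system is deployed across millions of store visits.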
How long would it take you to lose $440M? For one company, the answer was "just under an hour."
Eleven years ago today, market maker Knight Capital suffered a meltdown. This is a story of operational risk, a little bad luck, and the ways seemingly small issues can turn into big problems.
The lessons here apply to every company building AI, software, or other automated solutions:
"The origins of an incident: Knight Capital"
https://qethanm.cc/2023/08/01/the-origins-of-an-incident-knight-capital/
"Shall we watch sports? Or something about data analysis?"
"Yes."
"The Excel World Championship esport is coming back to ESPN this week" (The Verge)
(Triple points for the Dodgeball reference, by the by.)