(Photo by Piotr Chrobot on Unsplash)
Your periodic reminder:
An ML/AI model has no idea when it is operating out of its depth. It has no way of saying "this is new to me and I don't know what to do." It doesn't have nearly enough worldly context to adapt to an unfamiliar situation.
That doesn't mean the model is "bad" – it means that it has a narrow field of operation. It's up to you to build the padding and protection around the model, so it is safer to use in business systems and products.
That means you need to:
1/ understand where the model might go awry
2/ determine the impact of those situations
3/ define ways to detect problems, and then disconnect the model in the event of an emergency (a rough sketch of what that can look like follows below)
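To make those three steps concrete, here's a minimal sketch in Python of what that padding might look like. Everything in it is illustrative rather than prescriptive: the GuardedModel name, the per-feature bounds check, and the anomaly threshold are all assumptions I'm making for the example. The point is simply that the model lives behind a wrapper that flags unfamiliar inputs, limits their impact, and can take the model offline when trouble piles up.

```python
from dataclasses import dataclass

# Hypothetical guardrail wrapper -- names and thresholds are illustrative.
# The idea: (1) flag inputs the model likely hasn't seen before,
# (2) fall back to a safe default when that happens, and
# (3) trip a "circuit breaker" that disconnects the model entirely
#     once too many anomalies accumulate.

@dataclass
class GuardedModel:
    model: object            # any object exposing a .predict(features) method
    feature_bounds: dict     # {feature_name: (min_seen, max_seen)} from training data
    fallback: object = None  # safe default to return when the model is offline
    max_anomalies: int = 10  # trip the breaker after this many out-of-range inputs
    anomaly_count: int = 0
    tripped: bool = False

    def _in_bounds(self, features: dict) -> bool:
        # Crude out-of-distribution check: every feature must fall inside
        # the range observed during training; a missing feature counts as
        # out of bounds. Real systems would use richer drift and outlier
        # detection, but the principle is the same.
        for name, (lo, hi) in self.feature_bounds.items():
            value = features.get(name)
            if value is None or not (lo <= value <= hi):
                return False
        return True

    def predict(self, features: dict):
        if self.tripped:
            # Emergency stop: the model has been disconnected.
            return self.fallback

        if not self._in_bounds(features):
            self.anomaly_count += 1
            if self.anomaly_count >= self.max_anomalies:
                self.tripped = True   # step 3: disconnect the model
            return self.fallback      # step 2: limit the impact of a bad call

        return self.model.predict(features)


# Example use (hypothetical model and feature names):
# guarded = GuardedModel(model=my_model,
#                        feature_bounds={"order_value": (0.0, 5000.0)},
#                        fallback="route_to_human")
# guarded.predict({"order_value": 250.0})  # normal path
# guarded.predict({"order_value": 9e9})    # out of range: fallback, counts toward the breaker
```

In a real system you'd swap the bounds check for proper drift detection and route the fallback to whatever your business considers the safe default, but the shape (monitor, contain, disconnect) stays the same.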
If you take these steps, you'll stay open to all that the model has to offer while also protecting your business from mishaps. This is a key element of AI risk management.
(For more details, check out a post I wrote a couple of years ago. It's from a series on lessons the ML/AI field can learn from algorithmic trading: Data Lessons from the World of Algorithmic Trading (part 6): "Monitor Your Models".)