Risk and complexity go hand in hand: complex systems are fertile ground for small, innocuous issues to collide and grow into larger incidents. We live in a world of complex systems, so we can't simply avoid them. But adding needless complexity is a great way to take on exposure to downside risk.
To understand why, here's an excerpt from an O'Reilly Radar piece I wrote a couple of years ago, "Structural Evolutions in Data." It's still my favorite way to describe complex systems.
What makes a complex system troublesome isn’t the sheer number of connections. It’s not even that many of those connections are invisible because a person can’t see the entire system at once. The problem is that those hidden connections only become visible during a malfunction: a failure in Component B affects not only neighboring Components A and C, but also triggers disruptions in T and R. R’s issue is small on its own, but it has just led to an outsized impact in Φ and Σ.
(And if you just asked “wait, how did Greek letters get mixed up in this?” then … you get the point.)
AI's next name change?
Looking back on something I wrote four years ago.
Which way will it go?
People ask whether the recent news signals the end of the genAI wave.