It's that time again: I'm sharing a list of pieces I really enjoyed writing over the past twelve months.
I did a lot of writing this year. At least, it feels like I wrote a lot more than I usually do. And that made it difficult to choose my favorites.
The list below is a mix of articles, books, blog posts, and newsletter issues.
In no particular order:
Both finance and the data field (data science, ML/AI, genAI) involve analyzing data for fun and profit. As finance has a long head start, it holds plenty of lessons that other data-related fields can borrow.
One slice of finance history is particularly relevant for companies adopting AI: Wall Street's shift from open-outcry (pit) trading to computerized operations.
I'd written about that topic here and there over the years, but this piece gave me the opportunity to put it all in one place. "Taming the Delightful Chaos" ran on O'Reilly Radar and also in my newsletter, Complex Machinery.
In early November I released my latest book: Twin Wolves: Balancing risk and reward to make the most of AI. This is a slim, executive-level read on how to approach AI (both ML/AI and genAI) in your company, while steering clear of needless trouble.
The secret ingredient is, as usual, risk: seeing your company's AI transformation through the lens of risk-taking and risk management. Think of it as your unfair advantage in the sea of AI hype.
For more details you can check the release announcement on LinkedIn or head over to the book's website.
Every issue of my main newsletter, Complex Machinery, ends with a section called "In Other News." It's a quick list of articles I found interesting or useful, but didn't include in any of the main segments.
In November I spun "In Other News" out of Complex Machinery and into its own newsletter. It's as simple as it comes: a bulleted list of one-liner descriptions with links to the source articles. That's it. Perfect for someone on the go who wants a weekly, curated basket of interesting reads.
In Other News runs on a slightly wider remit than Complex Machinery's "risk, AI, and related matters." Get it in your inbox Wednesday mornings.
https://inothernews.complex-machinery.com/
I wrote this newsletter issue in January 2025 as a brief explainer of a disconnect in the genAI space: companies that sell genAI think in terms of future payoff, while their customers need present-day functionality. That difference in perspective leads to a lot of friction.
At the time I didn't realize how often I would cite this piece. The future/present split has been a constant theme in genAI and I don't see that changing anytime soon.
https://newsletter.complex-machinery.com/archive/027-a-difference-of-time/
This was one of those blog posts that had been on my to-do list for a while. Nine years, in fact. A social media post by Frank Wiles jogged my memory, so I dusted off my outline and finally wrote it all down.
For a technical matter, this was a surprisingly non-technical (or, at least, non-code) read. This post is all about how to think through a synthetic data effort.
It was also interesting to see how the high-level concepts had held up to the test of time. The arrival of genAI added a new tool to the toolbox, but didn't add any new direction to my old outline.
https://qethanm.cc/2025/12/02/creating-synthetic-data/
"Risk" is one of those terms that has a lot of different meanings. The definition changes based on the context of the discussion.
I talk about risk a lot, and I'd originally intended this as a short explainer piece I could point to as needed. It then blossomed into a list of definitions, many of which were sourced from my favorite risk authors.
https://qethanm.cc/2025/07/31/risk-uncertainty-and-risk-management/
If your company collects data – and let's face it, that's pretty much every company these days – you'd do well to document the data you collect, how you store it, and how you delete it.
This post goes into more detail on the what, why, and how of developing a data retention policy. In short: get ready to spend some quality time with various department heads, and with your legal counsel.
https://qethanm.cc/2025/10/10/developing-a-data-retention-policy/
So many companies are diving into AI these days, and plenty of them get in the way of their own success. As I note in the article's subtitle: "The Things You Don't Know Will Definitely Hurt You."
That's where I see parallels to the Dot-Com software development boom. This article explores the hard-learned lessons of that era, and how they apply to today's AI-eager companies.
Bonus: there are lessons here specifically for well-run software shops. As it turns out, those groups are positioned for particularly painful mistakes in their AI adoption precisely because they do application development so well.
https://www.oreilly.com/radar/congratulations-you-are-now-an-ai-company/
Believe it or not, I actually released two books in 2025.
A few years ago I wrote an extended blog post series on lessons ML/AI teams could learn from algorithmic ("algo," "electronic," "quant") trading. A friend encouraged me to package those up into a single read. I thought it was a great idea but I had to shelve the project due to time constraints.
In 2025 I finally got around to cleaning up and updating the material, then formatting it for release. The result was Black Box Tactics: Use algorithmic trading techniques to improve your AI.
I was very pleased with the result! The one catch? Black Box Tactics landed as I was in the run-up to releasing Twin Wolves, so marketing efforts for the former took a back seat. As evidenced by the fact that most of you hadn't heard of the book till now …
https://readblackboxtactics.com/
People often ask me two questions: "Is genAI a bubble?" and "What happens when it bursts?"
Technically we can't answer the first question. While genAI is showing all kinds of warning signs, you can only declare a bubble after the fact. (Till that point, it's indistinguishable from an extended bull run.)
I explored the second question in this issue of my newsletter. Using past bubbles as my guide, and with a nod to Russia's Wild Nineties ("Лихие девяностые"), I walked through what kinds of artifacts a genAI crash might leave behind and how they might be used.
https://newsletter.complex-machinery.com/archive/047-whats-left-after-it-all-falls-apart/
Both Complex Machinery and In Other News will keep going. There's still time to subscribe so new issues land in your inbox.
I also have a number of articles in the works, plus some blog drafts (including a few that I wrote but didn't release this year), and some new projects. Stay tuned.
Restaurant markets: Google Maps as a middleman of the dining scene.
When lack of AI is a strength: a popular advert was made without AI.