Weekly recap: 2023-07-02

Posted by Q McCallum on 2023-07-02

What you see here is the past week’s worth of links and quips I shared on LinkedIn, Monday through Sunday.

For now I’ll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.

2023/06/26: Borrowing ideas from finance

Are you trying to surface use cases for software- or AI-based automation? See what ideas you can borrow from the financial sector.

My usual go-to is the 1990s shift from open-outcry/pit/floor trading to electronic trading.

For a different flavor, check out this Odd Lots episode on the 18th-century Bank of England and London Stock Exchange:

“Odd Lots: This Is How Finance and Banking Worked Before Computers” (Bloomberg)

All of the human labor, interactions, and paper made for a slow-moving and inefficient (in more than one sense of the term) experience. Consider that the BOE ran 24/7 back then. It was only open to the public during normal business hours, though! The staff spent the rest of the time on back-office matters like updating ledgers and maintaining accounts.

The introduction of basic electronic record-keeping eliminated much of the paper and human labor. Computers brought about faster settlement times, better record-keeping, and greater market transparency. This was a solid use case for automation that we take for granted today.

The take-away:

Tech-driven automation works very well on tasks that are all three: dull, repetitive, and predictable.

  • Hard, predefined business rules? Software is your friend.
  • Soft rules based on pattern-matching? Go for AI.
  • Requires a lot of experience and nuance? This is the perfect spot for people.

2023/06/27: Risk management for AI chatbots

AI chatbots can unlock efficiencies in companies and open up revenue streams. They also bring new twists to age-old problems. Risk and reward go hand-in-hand.

My latest O’Reilly Radar piece covers how to handle the downside risk of deploying AI chatbots so you can leave yourself open to the upside gain:

Risk Management for AI Chatbots

(Many thanks to Chris Butler and Michael S. Manley for their help improving early drafts of this article.)

2023/06/28: The AI bubble?

Image of a colorful bubble on a dark background.

(Photo by Lanju Fotografie on Unsplash)

These days I hear a lot of complaints about AI hype and a growing AI bubble. As a long-time practitioner, here’s my take:

  • Is there a lot of hype? Yes.
  • Is AI in a bubble? It’s sure showing some signs of that.
  • Is AI the only case of hype-laden corporate FOMO the world has ever seen? Not a chance.

A brief look at financial history will reveal bubbles aplenty, dating back centuries. (The term “bubble” even comes from the South Sea mania of the 1700s.) This isn’t an AI problem; it’s a human nature problem. People love to believe in magic, and hype-driven bandwagons sure look like magic to most.

So what can you do about today’s AI madness?

Knowledge is the answer. Nothing pushes away hype quite like facts. It’s just too hard to hang unrealistic hopes on something when you understand what it can and cannot do.

In the AI space, we call this knowledge “data literacy.”

So how do you – the executives, stakeholders, and product owners of the world – develop your data literacy? How do you improve your company’s chances of developing AI products that actually work and are genuinely useful?

There are plenty of sources out there. I’m partial to my blog’s “data literacy” and “risk” tags.

My site’s not paywalled, so you can read the 70+ articles right now. Have at it!

2023/06/29: Process over outcome

An artist's hands, holding a pen, over a drawing of a flowchart.

(Photo by Kelly Sikkema on Unsplash)

A common phrase in the investment world is to value “process over outcome.” In other words: you can’t guarantee a win every time, but you can position yourself to make the most of the wins when they come along.

**The same holds true for AI.**

No amount of money, hope, and enthusiasm will force an AI project to work out.

Your best bet is to build a solid foundation (data infrastructure, data collection, access policies) and formalize the entire project lifecycle (from ideation, to training, to deployment).

You still won’t win every time. But if you develop the process and exercise the discipline to stick with it, you will:

  • do a better job rejecting the ideas most likely to fail
  • iterate faster on the ideas that are likely to pan out
  • course-correct on good-ideas-about-to-go-bad

This will shorten your time to market and improve efficiency in how you spend your AI budget.

For some ideas on process in AI projects, check out my old blog post “The Lifecycle of an ML/AI Model.”

2023/06/30: Automation eats work for lunch

Automation eats work. It’s a story as old as time. This latest round of AI is leading a lot of people to ask how it will impact the workplace.

In the short run: some people will prosper, some will make lateral shifts, some will feel pain.

In the long run, though? People will take on new work.

“Rise of the robots raises a big question: what will workers do?” (The Guardian)