Weekly recap: 2024-01-07

Posted by Q McCallum on 2024-01-07

Here’s the past week’s worth of links and quips I shared on LinkedIn, from Monday through Sunday.

For now I’ll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.

2024/01/01: Data, lies, and watching the machines

I see two big takeaways here, as far as data:

1/ Data can be used to lie, yes. Data can also be used to surface the truth. It’s mostly a matter of who has access to that data and how much of an audience/reach they possess.

2/ To repeat my usual point about AI safety: “never let the machines run unattended.” Not even – or perhaps, “especially” – when the machine is a motor vehicle that claims to have “Autopilot.”

“Elon Musk’s Big Lie About Tesla Is Finally Exposed” (Rolling Stone)

The only thing Tesla can do by software is constantly bombard drivers with warnings to remind them of the truth they have obscured for so long: you are actually in control here, pay attention, the system will not keep you safe.

2024/01/02: Making the most of AI in 2024

Welcome to 2024! AI is still here. It still holds the power to transform your business model and your products.

As such, it still represents a strong risk/reward tradeoff for your company.

If you’d like to make the most of AI – reduce your exposure to the downsides, while leaving your business open to the upsides – you can:

  • Follow me here on LinkedIn. I post a couple times a week on interesting news related to AI business models, AI strategy, and AI risks/safety. (There’s also the occasional post on finance and marketplaces.)
  • Follow my blog, where I occasionally write longer pieces on those same topics.
  • Retain my consulting services. I can help your company navigate this world of AI.

There’s no shortage of hype in AI but there’s also no shortage of opportunity. Use 2024 to make the most of what this technology has to offer.

2024/01/03: Learning what was below the surface

Longtime readers will remember that I’ve written about ad-supported streaming video before (O’Reilly Radar, August 2022; my blog, August 2023).

Last week I’d planned to write about Amazon Prime Video, as it had announced price increases and more ads. But I never got around to it. Then I stumbled across this gem of an article:

“It’s ‘shakeout’ time as losses of Netflix rivals top $5 billion” (Ars Technica)

A few years ago, when all of these companies leapt into streaming/VOD, I thought to myself: “Why don’t they find ways to partner with Netflix, instead of trying to compete? Netflix has tons of experience in this area. Running a streaming platform isn’t just about throwing video content on some servers and charging subscriptions.”

As those services flourished, I figured I had missed something. “I mean, they do have tons of original content and an existing audience. And I’m a fan of skipping the middleman whenever possible. I guess Netflix is really on the ropes here. Hmm.”

Turns out … maybe not? Per the Ars Technica article:

“For much of the past four years, the entertainment industry spent money like drunken sailors to fight the first salvos of the streaming wars,” analyst Michael Nathanson wrote in November. “Now, we are finally starting to feel the hangover and the weight of the unpaid bar bill.”

For companies that have been trying to compete with Netflix, Nathanson added, “the shakeout has begun.”

[…]

“Netflix has pulled away,” says John Martin, co-founder of Pugilist Capital and former chief executive of Turner Broadcasting. For its rivals, he said, the question is “how do you create a viable streaming service with a viable business model? Because they’re not working.”

It’s time to dust off the Warren Buffett line, that “Only when the tide goes out do you discover who’s been swimming naked.” It looks like the other VOD services forgot to pack bathing suits.

That brings me back to my original point, that maybe these groups should have tried to partner with Netflix. There’s a general lesson about business models here:

I know the “iceberg” analogy is a little worn at this point. But that’s because it is so widely applicable. It’s very rare that you can mimic an incumbent’s success by simply copying the parts you can see from the outside.

(Every data professional reading this is nodding their head… thinking of the times they were asked, “hey can we just copy what FaceGoog is doing?”)

2024/01/04: There’s no substitute for doing your homework

The role of a startup founder is to sell investors, customers, and employees on the company’s future state. And when you’re joining or investing in that hot startup, it’s too tempting to just follow along with the founder’s positive energy. “It’ll be fine, right? Right??”

In reality, there’s no substitute for doing the homework to make sure that everything is actually pointed in the right direction. And to hold people accountable for things that aren’t working.

Case in point:

“No Oversight: Inside a Boom-Time Start-Up Fraud and Its Unraveling” (New York Times)

While HeadSpin had raised $117 million from top tech investors […] it had no chief financial officer, had no human resources department and was never audited.

Mr. Lachwani used that lack of oversight to paint a rosier picture of HeadSpin’s growth. Even though its main investors knew the start-up’s financials were not accurate, according to Mr. Lachwani’s lawyers, they chose to invest anyway, eventually propelling HeadSpin to a $1.1 billion valuation in 2020. When the investors pushed Mr. Lachwani to add a chief financial officer and share more details about the company’s finances, he simply brushed them off.

While that article covers a mobile app startup, the lessons also apply to AI. How many companies out there claim to be doing AI, but aren’t really? How many have built AI-enabled products, but skipped out on industry best practices? How many are committing the AI equivalent of HeadSpin’s alleged practices?

If you’re interested in AI due diligence – either for your company, or for one you plan to invest in – please contact me.

An AI assessment is an opportunity to uncover lurking problems and missed opportunities while there’s still time to course-correct.

2024/01/05: There’s still work to do

If I were to tell you that a general-purpose LLM chatbot did a poor job diagnosing medical issues, you probably wouldn’t be surprised.

Still, it’s nice that someone got the numbers:

“ChatGPT bombs test on diagnosing kids’ medical cases with 83% error rate” (Ars Technica)

Does this mean that LLMs are useless? Far from it.

Does this provide more support for the idea of focused, domain-specific chatbots? Yes.

Do I think such a chatbot could replace a human specialist? Depends on the job, but, unlikely. I imagine such a system could support a person but is unlikely to replace them outright.

Sum total: Sorry, folks. Humans still have a lot of work to do. We don’t get a free, robot-supported vacation just yet.