Weekly recap: 2023-10-15
2023-10-15 | tags: weekly recap

What you see here is the past week's worth of links and quips I shared on LinkedIn, from Monday through Sunday.

For now I'll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.

2023/10/09: Not just self-checkout

This is an article about self-checkout systems, but the lessons equally apply to other areas of technology. Including AI.

"Retailers appear to be facing a self-checkout reckoning" (Insider)

Now, retailers including Costco, Walmart, and Kroger are rethinking some of their self-checkout strategies. Some are finding they still need employees to combat theft, assist with purchases, review IDs, and check receipts.

Sound familiar? We still need people to double-check outputs from generative AI, to intervene when a fraud-detection model has mistakenly flagged a legitimate purchase, and so on.

Something to keep in mind as your company explores AI-based solutions. More often than not, you'll still need some people involved to work alongside the machines.

2023/10/10: Keep up the pace

The world keeps changing, so a business needs to adapt in order to survive.

In recent memory, we have:

What is your business doing to evaluate and adapt to changing market conditions? How will you use your core strengths to extend into new markets, and what new skills will you have to develop?

2023/10/11: Three truths

Here are two general truths about data science/ML/AI:

1/ More data is better.

2/ Mixing datasets is how you surface the really interesting insights.

But there's also a third truth:

3/ Just because you have access to a dataset doesn't mean you should use it.

In the rush to demolish internal data silos (to satisfy points 1 and 2), companies often neglect to consider matters of data ethics and risk management (point 3).

A recent article has reminded me of these truths:

"Should Walmart be data-mining your Ozempic prescriptions?" (The Verge)

I see this as another case where data science can borrow a page from finance:

Banks establish firm boundaries between departments that shouldn't share information. For example, someone who works in M&A shouldn't share details with someone who works on the trading floor, because news about an upcoming merger might give the trader an unfair advantage over the wider market.

Do you follow similar practices for data flows in your company? For example, do data scientists have complete access to every dataset? Or do different departments limit the data that leaves their area? And how do you document those flows?
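The information-barrier idea above can be sketched in a few lines of code: each department keeps a per-dataset allowlist, and every access request is logged whether it succeeds or not. This is only a minimal illustration; the department and dataset names are hypothetical.

```python
# Minimal sketch of an information-barrier check for dataset access.
# Department and dataset names are hypothetical, purely for illustration.

# Each owning department declares which teams may read its datasets.
ACCESS_POLICY = {
    "pharmacy_prescriptions": {"pharmacy_analytics"},
    "store_sales": {"pharmacy_analytics", "marketing", "data_science"},
}

ACCESS_LOG = []  # audit trail: who asked for what, and the outcome


def can_access(team: str, dataset: str) -> bool:
    """Return True only if the dataset's owning department allows this team."""
    allowed = team in ACCESS_POLICY.get(dataset, set())
    ACCESS_LOG.append((team, dataset, allowed))  # document the flow either way
    return allowed


print(can_access("data_science", "store_sales"))             # True
print(can_access("data_science", "pharmacy_prescriptions"))  # False
```

Even a toy version like this answers the questions above by construction: data scientists don't get blanket access, departments control what leaves their area, and the log documents the flows.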

2023/10/13: A voice from the inside

Remember a couple of years ago, when an internal Facebook memo said the company's behavioral targeting was “almost all crap”?

I didn't find the statement itself shocking – like many experienced AI practitioners, I already had a hunch – but it was interesting to see the idea acknowledged in clear, unambiguous terms by someone with inside knowledge.

Welcome to the Bard version:

"Even Google Insiders Are Questioning Bard AI Chatbot’s Usefulness" (Bloomberg)

If you've seen enough of my posts here, you already know where I'm going next:

Laugh at Google if you must. But I expect a number of other companies are in the same boat. And if your CEO has decreed that you're going to shoehorn generative AI into all of your products and services, that number could include you!

So if you don't want to wind up in the news because your latest generative AI products don't work as advertised:

1/ Take the time to uncover actual business use cases for this technology.

2/ Give it a thorough test.

3/ Avoid publicizing your generative AI products until you've taken care of 1 and 2.
