Here’s the last week’s worth of links and quips I shared on LinkedIn, from Monday through Sunday.
For now I’ll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.
French grocery giant Carrefour is deploying generative AI to assist customers with recipes and shopping lists:
“Carrefour déploie ChatGPT sur son site d’e-commerce” (“Carrefour deploys ChatGPT on its e-commerce site”) (Les Echos)
By asking the chatbot “do you know any good jokes?”, the researchers got ChatGPT to generate 1,008 jokes. However, more than 90% were the same 25 jokes, the researchers found, with the remainder being variations. […] While the bot was able to explain its humor, “it cannot yet confidently create intentionally funny original content,” according to the paper by Sophie Jentzsch and Kristian Kersting, titled “ChatGPT is fun, but it is not funny! Humor is still challenging Large Language Models.”
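If you wanted to run that kind of repetition check yourself, here is a minimal sketch of the idea: tally the generated outputs and see what share the most frequent few account for. The function name and the sample jokes are my own illustration, not the researchers’ code or data.

```python
from collections import Counter

def repetition_share(jokes, top_n=25):
    """Fraction of all outputs accounted for by the top_n most frequent jokes."""
    counts = Counter(jokes)
    top = counts.most_common(top_n)
    return sum(count for _, count in top) / len(jokes)

# Hypothetical sample: most outputs repeat a handful of canned jokes.
sample = (
    ["why did the chicken cross the road?"] * 6
    + ["what do you call a fake noodle? an impasta."] * 3
    + ["a one-off joke that never repeats"]
)
print(repetition_share(sample, top_n=2))  # 0.9
```

In the paper’s terms, a `repetition_share` above 0.9 for the top 25 jokes out of 1,008 generations is what “more than 90% were the same 25 jokes” means.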
On the one hand: This adds up. Remember that a generative AI picks up on patterns and replays those when you ask it to create something. Most of what it generates reflects a twist on those patterns and, therefore, a twist on the training data.
On the other hand: Sticking to the same old jokes is (perhaps, unintentionally) the safest thing a generative model can do.
Good post here from Elvis Dieguez. Let’s all remember that the “growth at all costs” approach includes the potential for … all costs. Including the cost of product failure.
(Related to this article in The Atlantic: “The Instant Pot Failed Because It Was A Good Product.”)
I’ve mentioned before that automation – especially AI-based automation – is well-suited for tasks that people find dull, repetitive, and predictable.
That said, even if AI frees up people on the service side of an interaction, there are still human beings on the customer side. This automation can make for an unpleasant experience if we’re not careful.