What you see here is the last week’s worth of links and quips I have shared on LinkedIn, from Monday through Sunday.
For now I’ll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.
A little Monday morning motivation for you executives, stakeholders, and product owners:
Your company doesn’t have to build a generative AI bot.
Yes, you can build one.
And yes, you probably should check around to see what’s possible.
But if you do that homework, and realize that such an AI system won’t bring value … then … don’t build one!
Save yourself the time, effort, money, and potential damage to your brand’s reputation.
Focus on what will help your company move forward.
“Where did you get that training data?”
This is a question that you:
1/ want to sort out as early as possible, before you’ve built and deployed a model.
2/ don’t want to hear from your company’s legal department.
3/ definitely don’t want to hear from someone else’s legal department.
All of which leads us to this article:
Granted, this lawsuit may not happen. And who knows where the verdict (and any appeals) may go.
But this still provides a useful reminder: you really, really want to stick to training data that is expressly licensed for such use. Legal action just slows down your business and introduces unwanted uncertainty.
Given that, why not take this opportunity to review your company’s datasets? Are you part of the lucky crowd that only uses proprietary data that was generated from your systems and business processes? Or have you built models on data that may not be yours to use?
Bonus points if you’re able to trace data from its source, through any ETL pipelines, to a given model. If you had to remove specific records from a training data set, could you do it?
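That record-removal question can be sketched in a few lines. This is a minimal, illustrative example, assuming a hypothetical setup where every training row carries a `source_id` that traces back through your pipeline; the field names here are made up for the sketch, not any specific tool’s API:

```python
def remove_records(training_rows, source_ids_to_remove):
    """Drop rows whose source_id matches a takedown or audit request.

    Assumes each row is a dict carrying a 'source_id' field that was
    attached when the record entered the pipeline (hypothetical schema).
    """
    return [row for row in training_rows
            if row["source_id"] not in source_ids_to_remove]


# Example: three records, one with questionable provenance.
rows = [
    {"source_id": "crm-001", "text": "example A"},
    {"source_id": "scraped-042", "text": "example B"},  # not expressly licensed
    {"source_id": "crm-002", "text": "example C"},
]
cleaned = remove_records(rows, {"scraped-042"})
```

The point isn’t the filter itself, which is trivial; it’s that the filter is only possible if the `source_id` was attached at ingestion and preserved through every ETL step. If your pipeline strips provenance, you can’t answer a takedown request without retraining from scratch.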
Has your company deployed an LLM-based chatbot, similar to ChatGPT? Congratulations! And also …
… when’s the last time you checked the outputs? This bot serves as your company’s virtual brand ambassador; are you comfortable with everything it’s saying on your behalf?
By now we’re all used to generative AI models emitting nonsense (“hallucinating”). And as an executive or product owner, it may be easy to write this off as a purely technical concern – something for your company’s data scientists to worry about.
But I would caution you to reconsider. Within the last week, travel operator Tui and tech industry heavyweight Microsoft were both bitten by their generative AI bots. These lower-level “technical” problems led to unwanted, embarrassing PR attention.
The take-away lesson? #AIRiskManagement should begin long before a single line of code is written, long before a model is deployed. It starts in the planning phases, when you map out the ways things may go awry and sort out how to address them.
Companies have sold us the myth that everything needs to be “personalized.”
Personalization isn’t necessarily a bad thing. Personalization and recommendation engines can be quite helpful … when people want them. The key is to let people choose.
So it’s nice to see TikTok offering an off-switch. Even if they’re only doing it to satisfy the EU DSA:
“TikTok Is Opening a Parallel Dimension in Europe” (The Atlantic)
What will be a key turning point in AI adoption? When companies stop talking about it, and start putting it to good use.
For now, it’s still the other way around:
“The joke out there was that all you had to do last quarter was say ‘AI’ and your stock would pop immediately,” said Bryant VanCronkhite, a senior portfolio manager at Allspring Global Investments, the $550bn asset manager.
“Some companies are saying they’re doing AI when they’re really just trying to figure out the basics of automation. The pretenders will be shown up for that at some point,” he said.
(Side note: AI is automation. Machines automate human motion. Software automates firm, predefined business rules. AI automates decisions/opinions.)