What you see here is the last week’s worth of links and quips I have shared on LinkedIn, from Monday through Sunday.
For now I’ll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.
This is an article about self-checkout systems, but the lessons apply equally to other areas of technology. Including AI.
Now, retailers including Costco, Walmart, and Kroger are rethinking some of their self-checkout strategies. Some are finding they still need employees to combat theft, assist with purchases, review IDs, and check receipts.
Sound familiar? We still need people to double-check outputs from generative AI, to intervene when a fraud-detection model has mistakenly flagged a legitimate purchase, and so on.
Something to keep in mind as your company explores AI-based solutions. More often than not, you’ll still need some people involved to work alongside the machines.
The world keeps changing, so a business needs to adapt in order to survive.
In recent memory, we have:
- Rideshare service Uber, which had already expanded into food delivery, joins the package-return ecosystem: “Uber Introduces Package Return Service” (WSJ)
- French newspaper Le Monde uses AI to help it offer its articles in English, thereby growing its total addressable market (TAM): “« Le Monde » parie sur l’étranger pour stimuler sa croissance” (“Le Monde bets on foreign markets to drive its growth”) (Les Echos)
- Short-term home rental service Airbnb moves into longer-term rentals. How soon till they shift into full landlord-management tools? “Airbnb’s next focus appears to be long-term rentals” (Engadget)
What is your business doing to evaluate and adapt to changing market conditions? How will you use your core strengths to extend into new markets, and what new skills will you have to develop?
Here are two general truths about data science/ML/AI:
1/ More data is better.
2/ Mixing datasets is how you surface the really interesting insights.
But there’s also a third truth:
3/ Just because you have access to a dataset doesn’t mean you should use it.
In the rush to demolish internal data silos (to satisfy points 1 and 2), companies often neglect to consider matters of data ethics and risk management (point 3).
A recent article has reminded me of these truths:
I see this as another case where data science can borrow a page from finance:
Banks establish firm boundaries between departments that shouldn’t share information. For example, someone who works in M&A shouldn’t share details with someone who works on the trading floor, because news about an upcoming merger might give the trader an unfair advantage over the wider market.
Do you follow similar practices for data flows in your company? For example, do data scientists have complete access to every dataset? Or do different departments limit the data that leaves their area? And how do you document those flows?
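One lightweight way to start is an explicit allow-list policy: each department is granted specific datasets, everything else is denied by default, and every access decision is logged so the flows are documented. The sketch below is a minimal illustration of that idea; the department names, dataset names, and policy structure are all hypothetical, not a prescription for any particular platform.

```python
# Minimal sketch of per-department dataset access rules with an audit trail.
# All departments, datasets, and the policy itself are hypothetical examples.

ACCESS_POLICY = {
    "data_science": {"web_analytics", "product_catalog"},
    "fraud_team": {"transactions", "chargebacks"},
}


def can_access(department: str, dataset: str) -> bool:
    """Deny by default: allow only datasets explicitly granted to a department."""
    return dataset in ACCESS_POLICY.get(department, set())


def request_dataset(department: str, dataset: str, audit_log: list) -> bool:
    """Check the policy and record the decision, so data flows stay documented."""
    allowed = can_access(department, dataset)
    audit_log.append({"department": department, "dataset": dataset, "allowed": allowed})
    return allowed
```

The point of the audit log is the third truth above: even when access is technically possible, you want a record of who asked for what, so the ethics and risk questions can be reviewed rather than discovered after the fact.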
Remember a couple of years ago, when an internal Facebook memo said the company’s behavioral targeting was “almost all crap”?
I didn’t find the statement itself shocking – like many experienced AI practitioners, I already had a hunch – but it was interesting to see the idea acknowledged in clear, unambiguous terms by someone with inside knowledge.
Welcome to the Bard version:
If you’ve seen enough of my posts here, you already know where I’m going next:
Laugh at Google if you must. But I expect a number of other companies are in the same boat. And if your CEO has decreed that you’re going to shoehorn generative AI into all of your products and services, that number could include your company!
So if you don’t want to wind up in the news because your latest generative AI products don’t work as advertised:
1/ Take the time to uncover actual business use cases for this technology.
2/ Give it a thorough test.
3/ Avoid publicizing your generative AI products until you’ve taken care of 1 and 2.