What you see here is the last week’s worth of links and quips I have shared on LinkedIn, from Monday through Sunday.
For now I’ll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.
I’m still putting together those blog posts I promised on data hiring. In the meantime, I encourage you to check out Solomon Kahn’s work on the other side of hiring: getting hired!
His “Breaking Into Data” will help you sort out that first data job.
To learn more about Solomon and his work on Delivery Layer, check out the podcast interview we released a few weeks back.
There’s a lot of talk these days about AI replacing human labor. It reminds me of the early days of electronic trading, in the 1990s:
The move from floor/pit to electronic trading brought harsh change for some market participants. But machines could do the work faster; their mathematical precision supported decimalization, which allowed quote spreads to shrink; and having them pull market data from centralized systems improved price transparency. Moving from “people doing the trading” to “people building machines to do the trading” ultimately made sense.
What I see now with a lot of AI-based automation does not make sense.
And the fault is not in the AI itself. It’s in the way companies insist on using AI when it’s not a good fit, just so they can reap cost savings. (These are mostly phantom “savings”: they move the line items for staff salaries off the books while hiding the new line items for customer frustration, PR fallout, and lawsuits. But I digress…)
- Bild replacing editorial staff with AI, even though AI systems are known to cook up nonsense? Bad. (As reported in The Guardian and in Der Spiegel)
- Telling experienced nurses they’ll need to explain why they’ve overridden an AI system? Very, very bad. (As reported in WSJ)
The takeaway lesson:
When you’re sorting out how to employ AI systems in your company, look beyond the perceived “savings” of cutting staff headcount. Understand the cost of the machines being wrong, then ask yourself whether that’s still an improvement over people. You’ll probably find out that the best solution lies somewhere in the middle: deploying AI to assist those people.
When it comes to building AI products, here are three simple rules for developing competitive advantage:
1/ Your data is everything.
2/ The software you use to build the models will not set you apart.
3/ Competitors may still be able to build an equivalent model without your data.
(That’s the quick version. For the full details, I wrote about this a couple years ago.)
Why do I bring this up now? Because, according to this WSJ piece, a lot of newer AI startups don’t have any data. That means they don’t have an AI product. End of story.
“We’ve seen lots of pitches from companies who may well be pursuing a brilliant application of AI, but they don’t have access to data that will give them the ability to build a powerful application, let alone proprietary data that will help them have competitive moats in their business,” said Brad Svrluga, co-founder and general partner of venture-capital firm Primary Venture Partners.
Today, having the right data is more critical than ever for success. Now that building the actual models has become somewhat commoditized, the real value is in the data, said Paul Tyma, CTO-in-residence of Bullpen Capital.
On the one hand: Letting actual AI companies guide lawmakers means that laws will be influenced by people who understand the technology. → That’s a plus.
On the other hand: Letting AI companies guide lawmakers is effectively self-regulation. → That’s a minus. A big minus.
The ideal scenario: lawmakers themselves understand the technology well enough to properly balance the desires of the companies and the needs of the people. → This is what we need.
The takeaway lesson:
This also holds true inside companies. If executives, stakeholders, and product owners understand AI, they’ll be in a much better position to devise useful AI-based solutions and ward off snake-oil vendors.
“The companies that are leading the charge in the rapid development of [AI] systems are the same tech companies that have been called before Congress for antitrust violations, for violations of existing law or informational harms over the past decade,” said Sarah Myers West, the managing director of the AI Now Institute, a research organization studying the societal impacts of the technology. “They’re essentially being given a path to experiment in the wild with systems that we already know are capable of causing widespread harm to the public.”
Earlier this week I mentioned the potential for companies to develop Kafkaesque AI tools that alienate customers. The airline industry provides the latest example.
“Somehow, Airline Customer Service Is Getting Worse” (The Atlantic)
The long-standing truth is that companies don’t want to talk to you. First they didn’t want to do it in person, then they didn’t want to do it by phone, now they don’t want to do it online, and soon they won’t want to do it at all. It’s not personal—it just costs money. But hype-fueled AI products have yet to pick up the slack.