What you see here is the last week's worth of links and quips I have shared on LinkedIn, from Monday through Sunday.
For now I'll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.
We've seen this story play out before, haven't we?
"How Life Insurance Agents Beat Back a Tech Onslaught" (WSJ)
This article's subtitle is: "A decade ago tech startups thought they could eliminate life-insurance agents. The agents won the battle, and now the startups are courting them."
But it could just as easily have been: "Tech firms learn, once again, that code doesn't solve every problem."
Code solves a lot of problems, sure! My rule of thumb is to ask whether a problem is all of "dull, repetitive, and predictable." If so, then tech is probably a good fit. If not – if the situation calls for nuance, or requires a human touch – tech will likely fall short.
There's a wider lesson here, too. It's that tech will augment more roles than it replaces. Especially when it's the "AI" flavor of tech. There's so much money to be made building tools to help professionals do their jobs, rather than trying to replace them outright…
(Photo by Clay Banks on Unsplash)
Executive data literacy is a key element of a company's AI risk management practice. Probably the most important element.
If the leadership team, product owners, and stakeholders all have a realistic picture of what AI really is and what it can do, then your company is better prepared to:
Data literacy lights the way.
(Photo by Tim Mossholder on Unsplash)
Are you having trouble hiring data scientists, machine learning engineers, and data engineers?
Consider what they'll work on, what skills they'll need, and whether you've created barriers.
My latest blog post goes into detail: "Three questions to improve your data hiring"
This excerpt highlights important risk management lessons from the SVB failure:
More than a year before the bank failed, outside watchdogs and some of the bank’s own advisers had identified the dangers lurking in the bank’s balance sheet. Yet none of them — not the rating agencies, nor the examiners from the US Federal Reserve, nor the outside consultants that SVB hired from BlackRock — was able to coax the bank’s management on to a safer path.
From: "Silicon Valley Bank: the multiple warnings that were missed" (FT)
The folks at Hugging Face have released their own chatbot, "HuggingChat," an open-source alternative to ChatGPT:
"Hugging Face: when three French founders launch their alternative to ChatGPT" ("Hugging Face : quand trois Français lancent leur alternative à ChatGPT," Les Echos)
Here are two articles on the impact of generative AI in the workplace:
"Tech giants aren't just cutting thousands of jobs — they're making them extinct" (Insider)
"How will AI affect work? Americans think it will, just not theirs." (Vox)
The common theme is the surprise – outright shock, even – that generative AI (LLMs like ChatGPT) could take someone's job.
On the one hand: I get it. Everything we've heard about LLMs boils down to: "It's OK, but you still need someone to tweak the outputs. It's not perfect. So it's not so much replacing jobs as making them easier."
On the other hand: I think the shock comes from the (mistaken) belief that white-collar office jobs were always safe from automation.
I've noted before that AI is a form of automation, and that automation eats work. While I doubt that AI-based automation is ready to take on a person's entire role, I do think it can tackle certain tasks. And the list of those tasks is growing.
There's another angle to all of this, as well: unlike the introduction of industrial/agricultural automation, this time around the workers have access to the tools. Hence the recent rash of "ChatGPT does most of my job, here's how" articles.
It'll be interesting to see how this plays out over the long run.
Weekly recap: 2023-05-07
random thoughts and articles from the past week