What you see here is the last week’s worth of links and quips I have shared on LinkedIn, from Monday through Sunday.
For now I’ll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.
What Christina Brady 🐙 says here is definitely true for new full-time hires (especially in leadership roles).
Starting a new job is like being the character that joins the show in season 4.
A lot of unknowns, a lot of “I’m missing the context on that,” and a ton of learning.
And it goes tenfold for consultants: you need to gather as much information as possible, as quickly as possible, because stakeholders rely on you to produce results in short order.
So per her post below, you may as well “embrace that you don’t know it all.”
You need to get comfortable asking people to bring you up to speed. “Could you give me more background on that?” “No, I don’t understand – how did the company arrive at this conclusion?” And so on.
If you need some help asking these questions, just remember that it’s a lot scarier to have to answer them.
“Man beats machine at Go in human victory over AI”
This article raises two main points for me. Most AI practitioners will immediately spot the first; the second is more subtle.
- To a point that I (and many others) have raised: an ML/AI model only “knows” the world that it learned during its training. It is vulnerable to so-called “out-of-sample” data, because it doesn’t know how to handle something it hasn’t seen before.
- This victory reminds me of the early days of electronic (algo) trading: a human trader, up against a bot? No chance. A bot assisting a human trader against another bot? Better.
The next step? If algo trading is any indicator, we’ll see the rise of bot-vs-bot matchups. At which point we’ll ignite the “AI speed wars.” What will be the AI equivalent of “parking your machines in the exchange datacenter”?
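That first, out-of-sample point is easy to demonstrate with a toy sketch (the data and numbers here are invented purely for illustration): fit a straight line to samples drawn from a curve, and the model looks fine near its training data but falls apart far outside it.

```python
import numpy as np

# Toy illustration of out-of-sample failure:
# train a linear model on y = x^2, sampled only on [0, 1].
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 1, 100)
y_train = x_train ** 2

# Least-squares fit of y = a*x + b
a, b = np.polyfit(x_train, y_train, 1)

# Near the training data, the error is small...
in_sample_err = abs((a * 0.5 + b) - 0.5 ** 2)
# ...far from it, the "knowledge" no longer applies.
out_of_sample_err = abs((a * 10.0 + b) - 10.0 ** 2)

print(f"in-sample error:     {in_sample_err:.3f}")
print(f"out-of-sample error: {out_of_sample_err:.3f}")
```

The line is a perfectly reasonable model of the world the training data showed it; it simply has no way to know that the world curves upward outside that window.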
(For those not familiar with the subject: I enjoyed Flash Boys by Michael Lewis, but found that Scott Patterson’s Dark Pools painted a more detailed picture of the move from open-outcry/pit trading to electronic/algo trading. And I think there are a lot of lessons there for any field that’s expecting a steep increase in tech-driven, AI-driven automation.)
#ai #ml #algorithmictrading
Way back when, the field we currently know as ML/AI went by the name “data mining.” And that name was a big hint.
Mining data is like mining for gold:
There’s no guarantee you’ll find anything.
No matter how much data you collect, no matter how many talented data scientists you hire, no matter what tools you throw at the problem … it’s entirely possible that you’ll walk away empty-handed.
If your company is starting to use ML/AI, invest accordingly:
Plan it out. Start small. Always have a plan B (and C, and D, and …). And beware any vendor who tells you that this is guaranteed to work.
Every emerging technology goes through its “so what’s this even good for?” phase.
Web3 – the umbrella term for blockchain, cryptocurrencies, NFTs, and the metaverse – is no different.
(Though, admittedly, it may catch more heat because of all the cryptocurrency scandals…)
In this O’Reilly Radar article, I reflect on my last couple of years’ worth of research to sort out: What could be the big use cases for web3? Which applications would take it from “so what” to “we need that”? And which fields will drive this?
(Sign up for Block & Mortar, my weekly newsletter, for more of my thoughts on this space: https://blockandmortar.xyz/ )
My interaction with a subscription service reminded me of some lessons around data, risk, and business models.
I summed it up in an article, which I’ve since mirrored here as a separate blog post: “When your metrics are fooling you”
(This post originally appeared in French. I’ve provided a translation of my words below.)
Meilleure description de ChatGPT (et d’autres LLMs, et peut-être de l’IA en général …):
[Selon Jean-Noël Barrot] ChatGPT n’est qu’« un perroquet approximatif »
L’IA est très puissante, mais elle ne “pense” pas.
This is the best description of ChatGPT (and other LLMs, and maybe even AI in general …):
[According to Jean-Noël Barrot, who heads up digital transformation efforts for the French government,] ChatGPT is nothing more than “an approximate parrot.”
AI is very powerful, but it does not “think.”
Before anyone asks: yes, I dropped my usual “AI is just linear algebra with better marketing” joke. I will assume it was a hit with the audience.
A software development lesson, one I learned early in my career, still holds true today for AI.
I’ve mirrored this LinkedIn article here as a blog post: “Congratulations, you’re now a data company”