What you see here is the last week's worth of links and quips I have shared on LinkedIn, from Monday through Sunday.
For now I'll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.
Learning a new language isn't always easy. It takes a lot of practice to develop the muscle memory around grammar and vocabulary. You don't get there by working through the same, tired phrases week after week. You need fresh, somewhat random material to keep you on your toes and prepare you for real-world conversations.
That means an instructor needs to create an infinite number of grammatically correct practice sentences. Hmm. Sounds like the perfect use case for generative AI:
"How Duolingo Uses AI to Create Lessons Faster" (Duolingo blog)
This kind of generative AI is such a natural fit for Duolingo (and any other language instruction program). And mixing the new Duolingo Max with their original "Birdbrain" model should make an especially powerful combination.
That said, I hope the exercises won't lose their zing and personality. Every language course or textbook has its wacky phrases. They help the material sink in better because they're so memorable (even if they do weird you out at times). Can we rely on AI to keep the coursework interesting?
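To make the idea concrete, here's a minimal sketch of what "generate fresh practice sentences" might look like in code. I'm assuming the OpenAI Python SDK purely for illustration; the model name, prompt wording, and temperature are my own choices and say nothing about Duolingo's actual pipeline.

```python
# Minimal sketch: ask a chat model for fresh practice sentences.
# Assumes the OpenAI Python SDK (v1.x) and an API key in the
# OPENAI_API_KEY environment variable. Model name and prompt are
# illustrative choices, not Duolingo's actual setup.
from openai import OpenAI

client = OpenAI()

def practice_sentences(language: str, level: str, topic: str, n: int = 5) -> str:
    prompt = (
        f"Write {n} grammatically correct {language} sentences for a "
        f"{level} learner practicing '{topic}'. Vary the vocabulary "
        "and sentence structure; avoid stock textbook phrases."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,      # higher temperature -> more variety
    )
    return response.choices[0].message.content

print(practice_sentences("Spanish", "beginner", "ordering food at a café"))
```

The temperature knob is what buys you the "fresh, somewhat random material" from above. A real course pipeline would still want a grammar check and human review before any of this reached learners.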
I just don't get the stress over extinction-level events from AI:
"How elite schools like Stanford became fixated on the AI apocalypse" (Washington Post)
To be clear: I understand the desire to address potential problems, even if they seem fairly hazy or remote. That makes sense.
That said, chasing these far-off problems looks like the new "fun" or "hot" thing to do in AI, while plenty of very-real-and-present-day AI problems sit unaddressed.
Using AI to tackle real problems:
"People Hire Phone Bots to Torture Telemarketers" (WSJ)
I've done quite a bit of text analysis/NLP in my career. Getting computers to parse the nuance in human speech is difficult enough; tossing emoji into the mix adds a new dimension.
"Wall Street Regulators’ New Target: Emojis" (WSJ)
Firms generally have software in place to review electronic communications, but some say it remains difficult for software to read an emoji and to understand the meaning behind it.
There's the old saying: "a picture is worth a thousand words." So on the one hand, an emoji can pack a lot of meaning into a single character. That can really help NLP.
On the other hand, context is key. Local culture, inside jokes, and subterfuge can completely change the intended meaning of the emoji. That can confound an ML model because it lacks the necessary context. And when you're trying to parse messages for regulatory purposes …
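Here's a toy illustration of that last point, using only the Python standard library. The one-emoji "sentiment lexicon" is something I invented for the example; the takeaway is that a lookup-based score treats the rocket identically in both messages, even though a compliance reviewer would read them very differently.

```python
# Toy illustration of the emoji-context problem (standard library only).
# The one-entry "sentiment lexicon" is invented for this example.
import unicodedata

EMOJI_SENTIMENT = {"\U0001F680": 1.0}  # U+1F680 ROCKET: naively "positive"

def naive_score(message: str) -> float:
    """Sum per-character emoji sentiment -- no context whatsoever."""
    return sum(EMOJI_SENTIMENT.get(ch, 0.0) for ch in message)

messages = [
    "Congrats on the product launch! \U0001F680",              # innocuous cheer
    "Get in before everyone else does \U0001F680\U0001F680",   # coded hype?
]

for msg in messages:
    print(f"score={naive_score(msg):+.1f}  {msg}")

# The character's official name is easy; its intended meaning is not.
print(unicodedata.name("\U0001F680"))  # -> ROCKET
```

Everything that actually matters to a regulator, such as who sent the message, to whom, and about what, lives entirely outside that lookup table.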
Here's the latest example of Generative AI Writes A Story That's Full Of Errors:
"How an AI-written Star Wars story created chaos at Gizmodo" (Washington Post)
What I find especially troubling is the desire to push AI like this in the first place:
The irony that the turmoil was happening at Gizmodo, a publication dedicated to covering technology, was undeniable. On June 29, Merrill Brown, the editorial director of G/O Media, had cited the organization’s editorial mission as a reason to embrace AI. Because G/O Media owns several sites that cover technology, he wrote, it has a responsibility to “do all we can to develop AI initiatives relatively early in the evolution of the technology.”
Running internal experiments with the new technology? Absolutely! That's how you learn what works.
Insisting on using the new technology? Maybe not. That kind of declaration makes it hard to back down when you don't find suitable use cases.
Unlearning
Success in AI requires that you learn some things and unlearn others.
Generative AI, APIs, and third-party risk
The risks and rewards of using vendor APIs for generative AI models