As someone who works in AI and also studies languages ... I agree 100% with this:
Human children do not learn language by analyzing text corpora.
I am so sick of people saying "LLMs basically learn language the way we do."
This is so wrong. It is one of the great intellectual mistakes of our era.
(From Christopher Johnson on LinkedIn.)
I'd go so far as to question whether LLMs actually "learn" language at all. Picking up on probabilistic patterns of words (really, of numeric representations of words) hardly makes someone or something suitable for conversation.
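To make "probabilistic patterns of numeric representations" concrete, here's a minimal sketch in Python. Everything in it is illustrative (the toy vocabulary, the one-sentence corpus, the bigram counting); real LLMs use neural networks trained on vastly more data, but the underlying task has the same flavor: estimate which token ID is likely to come next.

```python
from collections import Counter, defaultdict

# Toy vocabulary: words mapped to numeric IDs, as a tokenizer would do.
vocab = {"the": 0, "cat": 1, "sat": 2, "mat": 3, "on": 4}

# A one-sentence "corpus" as token IDs: "the cat sat on the mat"
corpus = [[0, 1, 2, 4, 0, 3]]

# Count bigram transitions: how often token b follows token a.
counts = defaultdict(Counter)
for seq in corpus:
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1

def next_token_probs(token_id):
    """Conditional distribution over what follows token_id, from counts alone."""
    total = sum(counts[token_id].values())
    if total == 0:
        return {}  # token never seen mid-sequence
    return {b: c / total for b, c in counts[token_id].items()}

# The "model" knows only that some numbers tend to follow others.
print(next_token_probs(vocab["the"]))  # {1: 0.5, 3: 0.5} -> "cat" or "mat"
```

Note what's absent: no meaning, no referent, no speaker. The distribution is pure frequency statistics over integers, which is exactly the point above.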
Still, it's tough not to anthropomorphize the bots. The verbs we use to describe their actions overlap with the verbs we use for people, and some of that meaning carries over unintentionally.
So when you say that a bot "says" or "believes" something, remember that those terms are convenient shorthand; they don't apply in the same way they do to humans.
What's next for AI?
Focus on what's just around the bend, not miles down the road.
Complex Machinery 023: We'll fix it in post
The latest issue of Complex Machinery covers correcting an LLM after it's already out in the wild.