Until recently, Meta allowed its genAI chatbots to interact with teen users on some rather touchy subjects. The company has now agreed to modify its systems so this does not happen:
The company says it will now train chatbots to no longer engage with teenage users on self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations. Meta says these are interim changes, and the company will release more robust, long-lasting safety updates for minors in the future.
Meta spokesperson Stephanie Otway acknowledged that the company’s chatbots could previously talk with teens about all of these topics in ways the company had deemed appropriate. Meta now recognizes this was a mistake.
It's good that Meta is making this change. But it's troubling that the rules ever allowed these conversations in the first place. Chatbots should never have engaged teens -- or, frankly, many adults -- on those topics.