Some ChatGPT interactions have wound up in public. Searchable, even. And some of those chats reveal people asking rather unsavory questions.
This reminds me of Bill Tancer's book "Click: What Millions of People Are Doing Online and Why It Matters." The gist is that people tend to be more honest when typing into a search engine than when speaking with a person. Why? Because they assume that only the machines will see what they're asking. No one will judge them.
This was a pretty eye-opening idea when Tancer's book landed in 2008. I figured it was common knowledge these days. We've become painfully aware of the ways private companies observe and mine our online activities for profit. We even gave it a name: "surveillance capitalism."
To be clear: I don't condone ChatGPT making these chats available to public search engines. Nor am I impressed by OpenAI's rather thin excuse that this was a "short-lived experiment [...] to help people discover useful conversations." But I'm not entirely surprised that a company, in 2025, would treat data this way.
New laws for therapy chatbots
Establishing boundaries for genAI bots
More of the same
Additional details on Delta's AI-based pricing