You've probably seen headlines about Google and Meta coming together to target teenagers with ads.
There's a lot to be said about that, on various aspects of business and data ethics. But this excerpt stands out:
_On its website, Google says the "unknown" group "refers to people whose age, gender, parental status or household income we haven't identified". But staff at the internet group had thousands of data points on everything from users' location via phone masts to their app downloads and activity online. This allowed them to determine with a high degree of confidence that those in the "unknown" group included many younger users, in particular under-18s._
_Turning off other age groups for which they had demographic data left only the "unknown" group, with its high proportion of minors and children: it was described as a way of "hacking" the audience safeguards in their system, one of the people said._
(Source: Financial Times, "Google and Meta struck secret ads deal to target teenagers")
Remember the old "oh it's only metadata" argument from way back when? This is "oh it's only a group we don't expressly identify, but by process of elimination we can kinda identify."
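To make the elimination trick concrete, here's a toy sketch (entirely hypothetical data and group names, not Google's actual system): if you switch off every audience segment with known demographics, the campaign falls through to whoever is left, and that remainder is exactly the "unknown" bucket.

```python
# Toy illustration of "process of elimination" targeting.
# All data and group labels here are made up for demonstration.

users = [
    {"id": 1, "age_group": "25-34"},
    {"id": 2, "age_group": "unknown"},  # actually a minor, never identified
    {"id": 3, "age_group": "45-54"},
    {"id": 4, "age_group": "unknown"},  # actually a minor, never identified
    {"id": 5, "age_group": "18-24"},
]

KNOWN_GROUPS = {"18-24", "25-34", "35-44", "45-54", "55-64", "65+"}

# "Turn off" every known age group; the remainder is the unknown audience.
targeted = [u for u in users if u["age_group"] not in KNOWN_GROUPS]

print([u["id"] for u in targeted])  # → [2, 4]
```

The safeguard checks the labels, but excluding every labeled group selects the unlabeled one just as effectively as naming it would.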
I try to give the benefit of the doubt in situations like this – in large companies, many groups are involved and they don't always communicate, so some decisions fall through the cracks. I can imagine that someone simply didn't think through the ramifications of keeping data on this "unknown" group.
But it's hard to imagine that using this data for advertising was done by accident and/or with good intentions in mind.