Weekly recap: 2023-12-03

Posted by Q McCallum on 2023-12-03

What you see here is the last week’s worth of links and quips I have shared on LinkedIn, from Monday through Sunday.

For now I’ll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.

2023/11/27: Not the news coverage you want

Are you responsible for building an AI-driven product? You probably have nightmares about this kind of news coverage:

“Instagram’s Algorithm Delivers Toxic Video Mix to Adults Who Follow Children” (WSJ)

Imagine you’ve incorporated AI into your product or workflow. And then it makes headlines because it’s doing something terrible. Following that, some clients bail. That’s what’s happening at Instagram:

Following what it described as Meta’s unsatisfactory response to its complaints, Match began canceling Meta advertising for some of its apps, such as Tinder, in October. It has since halted all Reels advertising and stopped promoting its major brands on any of Meta’s platforms. “We have no desire to pay Meta to market our brands to predators or place our ads anywhere near this content,” said Match spokeswoman Justine Sacco.

Social media platforms are in an interesting bind here. They face the usual AI concern: every model will be wrong some percentage of the time. And since they operate at such a large scale, even being wrong a tiny fraction of the time can still mean a lot of wrong answers.
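
To put rough, purely hypothetical numbers on that: a model that is right 99.9% of the time still produces millions of bad calls when it makes billions of them per day. A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope math on error volume at platform scale.
# Both numbers below are hypothetical, chosen only to illustrate the point.

daily_decisions = 5_000_000_000  # recommendation/moderation calls per day
error_rate = 0.001               # model is wrong 0.1% of the time

wrong_per_day = daily_decisions * error_rate
print(f"{wrong_per_day:,.0f} wrong calls per day")  # 5,000,000 wrong calls per day
```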

Beyond that, these platforms are built on competing needs:

1/ Recommendation system: keep people engaged by recommending more and more content, in order to show more ads.

2/ Content moderation / safety: stop Bad Things™ from spreading. This makes the platform a safer place to be, and it keeps you out of the news.

The more you enable one, the more likely you are to weaken the other. And while #1 is more likely to pay off on a day-to-day basis, all it takes is one incident with #2 to throw everything off-track.

2023/11/28: People are messy

We’ve seen plenty of talk about generative AI chipping away at office jobs and work in the film industry. It may also take a bite out of the influencer world:

“An agency created an AI model who earns up to $11,000 a month because it was tired of influencers ‘who have egos’” (Business Insider)

This article mentions that the agency didn’t like (human) influencers’ “egos.” Fair enough. Another reason to favor virtual brand ambassadors over real people? The risk:

People are messy. People have backstories. Even someone with a relatively clean history may go awry in the future. So when you put a real person’s face on your product, you are crossing your fingers that they don’t do (or haven’t already done) something damaging.

I wrote about this last year, in relation to a virtual web3 music group:

When a label backs a band, or when a film studio backs an actor, they’re investing in high-profile people with real lives and real personalities. It’s entirely possible that there will be some messy story in the press. The scandalous love affair. The shocking drug habit. The old, racist tweet rant that somehow slipped through the nonexistent due-diligence exercise.

Every time one of those celebrities gets in trouble, it represents a potential cash leak for their investors. Maybe they’ll follow a sin/redemption arc and come back even more bankable. Or maybe their careers will crater, and the remaining albums on that contract are doomed to never be released. We imagine that record labels would love to close off those sources of risk.

So, [the virtual band members]? They only have the life and personality that they are given. They only “exist” when and where the company wants them to. They can’t get into trouble.

2023/11/30: It works. But also, it doesn’t.

Adding Sports Illustrated to the list of publications that have (allegedly) posted AI-generated content without marking it as such:

“Sports Illustrated Published Articles by Fake, AI-Generated Writers” (Futurism)

Every time I hear that some website has done this, I ask myself: what does that say about how it values its readers? And its mission?

I mean, I get the economics behind it: when you’re paid for ad clicks, having AI generate tons of low-grade content gives you more surface area to post ads.

But this is one of those business models that only works in the economic sense. Everything else about it falls apart.

(And when you factor in the impact of getting caught … This does not seem like a reasonable risk/reward tradeoff.)

2023/12/01: Not the news coverage you want, redux

I’ve said it before, I’ll say it again: “content moderation is hard.” And not just for technology reasons.

In “How Your Child’s Online Mistake Can Ruin Your Digital Life” (New York Times), reporter Kashmir Hill (author of Your Face Belongs to Us) explains how a YouTube content moderation system, ostensibly designed to protect children online, caused a parent to get locked out of all of her related Google accounts.

If we zoom out, there are two big ways for content moderation to go awry:

1/ False negative: If bad content slips through, then you’ve helped that bad content to spread. End-users lose faith in your platform because they see it as a cesspool.

2/ False positive: If you mistakenly flag clean content, that stops people from expressing themselves in ways that they thought were permitted. End-users lose faith in your platform because enforcement feels arbitrary.
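
To make that tradeoff concrete, here is a minimal sketch with made-up scores and thresholds: a single moderation threshold, moved up or down, trades missed bad posts (false negatives) against wrongly flagged clean posts (false positives). Nothing here reflects how any real platform works.

```python
# Toy illustration of the false-negative / false-positive tradeoff.
# Scores, labels, and thresholds are invented for this example.

posts = [
    # (model's "badness" score, is the post actually bad?)
    (0.95, True),    # clearly bad, and the model agrees
    (0.65, True),    # bad, but the model is less confident
    (0.55, False),   # clean, but the model finds it suspicious
    (0.10, False),   # clearly clean
]

def moderate(threshold):
    missed_bad = sum(1 for score, is_bad in posts if is_bad and score < threshold)
    flagged_clean = sum(1 for score, is_bad in posts if not is_bad and score >= threshold)
    return missed_bad, flagged_clean

for threshold in (0.5, 0.7, 0.9):
    fn, fp = moderate(threshold)
    print(f"threshold={threshold}: {fn} bad post(s) missed, {fp} clean post(s) flagged")
```

Lower the threshold and you catch more bad content but flag more clean posts; raise it and the reverse happens. Neither setting makes the problem go away.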

Item #2 is especially tricky when enforcement goes all the way to banning (alleged) offenders from the platform. If your company does this, now would be a great time to review what systems make those enforcement decisions (“automated” or “human review”?), the rules by which they operate, and the circumstances under which those actions can be reversed.

Going back to the New York Times story linked above, here is something you never want printed about your company in a major news outlet:

[Google] had no response for how to escalate a denial of an appeal beyond emailing a Times reporter.