What you see here is the past week's worth of links and quips I shared on LinkedIn, from Monday through Sunday.
For now I'll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.
This looks like an AI problem on the surface. But it's really a policy problem.
Creating good policies is hard work! You need to define something that is all of:
1/ clear: so the people who want to do the right thing know what is allowed
2/ reasonably watertight: so that bad actors don't have large loopholes to exploit
3/ flexible: to adapt to future events
Absent any of the above, you create a situation where something that is arguably "wrong" is still "allowed."
"Facebook rules allow altered video casting Biden as paedophile, says board" (The Guardian)
Here's another example of GenAI Gone Awry:
"‘Embarrassing and wrong’: Google admits it lost control of image-generating AI" (TechCrunch)
And here's my usual refrain:
1/ Enjoy the schadenfreude all you want. Just keep in mind that this could happen to any company deploying LLMs. Including yours.
2/ Google has the established presence and PR team to weather this kind of issue. Your company? Probably not.
Sum total: be careful with those LLM projects. Even when you try to do the right thing, things can still go wrong.
For more thoughts on this topic, check out my writeups on:
Weekly recap: 2024-02-18
random thoughts and articles from the past week
Complex Machinery 002: It's still a wild animal
The latest issue of Complex Machinery: the randomness inside every AI model