Many genAI risks aren't rooted in the technology itself, but in how it's (mis)used and (mis)managed.
Character AI offers the latest example. The platform hosts a number of chatbots modeled after murderers, many of which are discoverable through a simple keyword search.
This isn't an AI-technology problem. It's an AI-product-management problem.
And it's the sort of problem you can avoid by taking a risk-thinking approach to your AI product efforts. That means performing a thorough red-teaming exercise, and then actually heeding the results.