Three alternatives to developing a public-facing AI chatbot

Will we eventually have AI chatbots that don't get coaxed into revealing trade secrets and spewing nonsense? Quite likely.

But do we have those now? Well … Consider the steady supply of "chatbot gone wrong" incidents. Like this latest entrant:

"DPD AI chatbot swears, calls itself ‘useless’ and criticises delivery firm" (The Guardian)

LLM-based AI chatbots are still new territory. We're all still sussing out their dark corners and pointy edges. As such, a public-facing corporate chatbot usually represents a high-risk, low-reward scenario.

Yet, companies keep deploying them. And winding up in the news for all the wrong reasons.

These companies can and should look into ways to protect their chatbots, like checking prompts and limiting the training dataset. They can also consider some alternatives:

1/ Develop a menu-driven self-support tool. This structured, deterministic system is reminiscent of customer helpline call trees ("press 1 to check your order") but can also be implemented with a visual interface.

They can be tough to build. And frustrating to use if the system wasn't designed well. But they will never wind up as front-page news for talking trash, because the system will simply give up if someone asks for something it wasn't built to do.

2/ Implement a search system. This requires building a trove of documents about your product, then maintaining those documents over time.

Similar to the structured UI mentioned above, a search system can only return verbatim what you've put into it. You can only be held to what you've actually written. So long as you haven't made any outrageous claims in those docs, you're fine.

3/ Properly staff your customer support channels. E-mail, phone, or live chat… Whatever means customers use to contact you, make sure you have plenty of people available to address questions.

This is the most expensive alternative to an LLM chatbot (and call center costs are one reason companies are turning to chatbots in the first place) but sometimes it's better for your customers to speak to people. You can supplement your support teams by building the menu and search systems mentioned above.

Looking at this list, you can see the appeal of deploying an AI chatbot: "just load it up on our data and turn it loose." While you should do a ton of prep work before releasing one, you don't have to . The instant gratification gives you a (misguided) sense of accomplishment. And a false sense of security, to boot ...

So while that chatbot looks cheaper than building a search system or hiring customer support staff, that equation shifts once you factor in the reputation risk.

The three approaches I've described above all capture your company's intent so much better than an AI chatbot is currently able.

Rogue chatbots are hardly the only AI risk lurking in your company. Looking for a review of your strategy, products, practices, and team? Reach out for an AI assessment: https://riskmanagementforai.com/services/

Weekly recap: 2024-01-21

random thoughts and articles from the past week

Weekly recap: 2024-01-28

random thoughts and articles from the past week