(Photo by georgi benev on Unsplash)
As a follow-up to what I said about "that LLM is just somebody else's model":
One of the nice things about building your own models (from the predictive ML/AI kind to the generative kind) is that you know what went into them. A model has enough randomness that you can't control it outright, but you can influence its behavior. And that comes in handy when you're trying to debug it.
That third-party, AI-as-a-Service (AIaaS) LLM is a different story. Most of what you "know" about that model is what you surmise from testing it. You poke at it with various prompts, and cross your fingers that its outputs are stable enough for your use case. But you never know for sure what's in the box.
Once again: there's no empirically "right" or "wrong" answer here. It's all about understanding your business, your challenges, and your preferred risk/reward tradeoff. So long as you go in fully informed, you'll be in a good spot.