If you've seen enough of my posts or articles, you've heard me describe safety issues in the ML/AI space, especially around genAI and LLMs.
That's my perspective as a longtime AI practitioner who focuses on risk management.
Cybersecurity professional Charles Givre recently attended Black Hat and walked away with similar concerns: LLM security is still in its very early days, and companies aren't thinking enough about the risks.
What's your take? Weigh in on the thread, which I've linked here.
Stopping the conversation
Using machines to say 'no'
Lessons from the Enron email dataset
The fall of Enron created an interesting and real-world dataset for text mining