Weekly recap: 2023-12-17
2023-12-17 | tags: weekly recap

What you see here is the last week's worth of links and quips I have shared on LinkedIn, from Monday through Sunday.

For now I'll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.

2023/12/11: The LLM prompt you don't see

You've probably seen the term "prompt injection" mentioned in the context of AI/LLM chatbots. Prompt injection attempts to bypass an AI chatbot's security measures so it will say something inappropriate or divulge sensitive information.

Russell Horton recently shared this one with me: feed the bot an image … that includes invisible text … telling it to override its safety instructions.

"To hack GPT-4's vision, all you need is an image with some text on it" (The Decoder)

This reminds me of the "SQL injection" attacks from the early days of the web. Website forms were a little too trusting, so an attacker could include some database instructions alongside that contact form or ecommerce checkout workflow. (See the xkcd tale of "Little Bobby Tables" for an explanation.)

Web developers have since developed techniques to sanitize those form inputs, but AI developers are still learning the full spectrum of possible attacks against their products.

Just like I said last week:

Slowly but surely, we're learning that "build the robot" was the easy part.

Creating safeguards around it? That'll be the long slog.

2023/12/12: Malicious prompts, at scale

I've mentioned "red-teaming" before in the context of LLMs (AI chatbots). By crafting attacks against your own LLM infrastructure, you stand a chance of closing off vulnerabilities before bad actors can exploit them.

The thing is, thinking up attacks takes time and creative energy. Your team can only come up with so many ideas in one day.

So … this group organized a competition to generate malicious LLM prompts at scale:

"Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition"

Of particular interest is one of the paper's conclusions:

Due to their simplicity, prompt based defense[s] are an increasingly well studied solution to prompt injection (Xie et al., 2023; Schulhoff, 2022) However, a significant takeaway from this competition is that prompt based defenses do not work. Even evaluating the output of one model with another is not foolproof.

2023/12/13: Coming down from the easy money

An article on the current commercial real estate (CRE) market woes made me think of AI.

For background, here's the article:

"Commercial property confronts the ‘comedown’ from easy money" (Financial Times)

The gist is that CRE thrived the last decade or so on low interest rates. Now that rates are rising and the bill is coming due, the sector has some painful adjustments ahead:

Real estate veterans point out that the industry functioned for decades with higher rates. The challenge for an asset class that relies heavily on debt is navigating the shift after more than a decade of ultra-cheap borrowing.

All of which makes me wonder: when will AI have that moment?

The world is riding high on AI right now, trying to jam it into damned near every possible product. (Some of these products will work out. Many will not.) It's "cheap" to propose AI these days because execs will rarely question the expense. But eventually, we'll run out of ways to reset the hype cycle, stakeholders will demand to see more details than just "it's AI!" to green-light a project, and AI will have to earn its keep.

Are we, as a field, ready for this kind of market correction?

What are you doing to prepare your AI-related projects for a wind-down of the hype cycle?

Weekly recap: 2023-12-10

random thoughts and articles from the past week

Weekly recap: 2023-12-24

random thoughts and articles from the past week