Weekly recap: 2024-03-24
2024-03-24 | tags: weekly recap

What you see here is the last week's worth of links and quips I have shared on LinkedIn, from Monday through Sunday.

For now I'll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.

2024/03/18: The latest Complex Machinery is out

Friday's Complex Machinery newsletter covered a topic I've spoken about before: the similarities between algorithmic trading and data science/ML/AI. And with algo trading being the more mature/established field of the two, it can tell us a lot about where AI might go next.

(Also in this issue: bots applying to jobs, The Long Train Effect, and an apology to the animal kingdom.)

Complex Machinery #003 – "It's a bot's world, we just live in it"

(You can subscribe to Complex Machinery to get this in your inbox whenever it's published. The next one should land in a couple of weeks.)

2024/03/19: Text, but not words

I have to admit: as much as I've talked about the need to test and red-team your AI chatbots … "use ASCII art to bypass safeguards" was not on my AI Safety Bingo Card:

"ASCII art elicits harmful responses from 5 major AI chatbots" (Ars Technica)

2024/03/20: When your company runs an "AI ideas challenge"

(Photo by Juan Carlos Becerra on Unsplash)

Last week Solomon Kahn said that he'd spoken with an exec who had run an "AI Idea Challenge" in their company. The result? 600 submissions (!) of which … only 5 proved compelling.

I'm posting an extended version of my comment here so I can go into more detail, and so it's easier for people to share.

(Please be sure to go to Solomon's post to weigh in, by the by. Let's keep that thread going.)

In short: I'm not too surprised by the 5/600 result.

Frankly, I'd be more interested in sifting through the remaining 595 ideas to understand why they were not compelling. For example:

  1. No connection to revenue or other business need? (In my experience: common)
  2. Not viable, based on a misunderstanding of what AI can/cannot do? (Even more common!)
  3. Could be achieved by cheaper and/or deterministic solutions? (Less common, but worth noting)
  4. Viable AI project but the required skills/tools aren't within reach of team, budget, or timeline?
  5. Viable project, but already available through a vendor's AI-as-a-Service (AIaaS) API?
  6. Potentially viable, with high reward, but low chance of success?
  7. Potentially viable, with high reward, but also very high risk?
  8. Viable from an AI standpoint, but data is insufficient/unusable/unreachable?

The list goes on.

This "AI Ideas Challenge", interestingly enough, doubled as a health check on the company's AI overall situation. Breaking down the ideas into the aforementioned categories would tell the leadership team what kind of AI help they'd need to move forward.

… and so on.
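To make that concrete, here's a minimal sketch of what that health-check tally could look like. Everything here is hypothetical: the category labels are my shorthand for the list above, and in practice a human review panel would assign them, not a script.

```python
from collections import Counter

# Hypothetical triage labels, mirroring the numbered list above.
# A human review panel would assign these to each submission.
IDEA_TRIAGE = [
    "no business need",
    "misunderstands AI",
    "deterministic fix suffices",
    "out of team/budget/timeline reach",
    "already an AIaaS offering",
    "high reward, low odds",
    "high reward, high risk",
    "data not usable",
    "compelling",
]

def health_check(labeled_ideas):
    """Tally submissions by triage category. The distribution of
    counts is the health check: it shows leadership where the
    organization's AI gaps actually are."""
    return Counter(labeled_ideas)

# Made-up example: a few labeled submissions out of a larger pool.
print(health_check([
    "misunderstands AI",
    "no business need",
    "misunderstands AI",
]))
# Counter({'misunderstands AI': 2, 'no business need': 1})
```

Crude, but the shape of the counts tells the story: a pile of "misunderstands AI" labels calls for education, while a pile of "no business need" labels calls for strategy work.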

Sum total:

Running an internal challenge is one way to generate ideas for using AI. You can accelerate that effort by retaining an AI professional to develop the list of use cases.

https://www.linkedin.com/posts/solomonkahn_from-600-ai-use-cases-submitted-across-the-activity-7174434254527901697-hsZW

If your company could use this kind of guidance, contact me to get started.

2024/03/21: Taking a wider view

As a francophone who has written a ton of code over the years, I couldn't pass up this headline:

"Universities Have a Computer-Science Problem: The case for teaching coders to speak French" (The Atlantic)

(I'll look past the use of the term "coder" here. While I'm hardly a fan, I recognize that it's become part of everyday speech.)

My hunch was correct. (Spoiler alert.) The lesson is not specific to the French language. Nor to any language at all. The article's point is that people who build tech products would do well to know more than tech. And I'll extend the author's coverage to include the entire product team, not just the developers.

I am biased here, because the article aligns with points I have raised elsewhere, most notably that new tech is all about the actual tech in the short run … but about policy, regulation, insurance, and similar matters in the long run.

That's because new tech inevitably collides with existing laws and everyday life. Once the novelty wears off, we need to figure out how to handle the frictions that come with The New Thing.

And some of that is needless, preventable friction. The kind we could avoid if app teams had more experience thinking beyond the code, to the wider ramifications of what they are building.

Otherwise we wind up with topics like safety and ethics as after-market bolt-ons rather than baked in from the start.
