Weekly recap: 2023-07-30

Posted by Q McCallum on 2023-07-30

What you see here is the last week’s worth of links and quips I have shared on LinkedIn, from Monday through Sunday.

For now I’ll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.

2023/07/24: Let the bots do the heavy lifting

Companies love to talk about AI “replacing” human labor. That’s a good line for a pitch deck, sure. But the reality is that “AI plus human” will often be stronger than “AI instead of human.” AI is a force multiplier for human effort.

Hence my usual refrain:

1/ Let the machines do the work that is dull/repetitive/predictable.

2/ Have plenty of people around to double- and triple-check the machines’ work.

Medical billing is a suitable candidate for this arrangement. It’s dull and it’s repetitive, but it’s just unpredictable enough that you need people around to make sure the machines’ output is correct.
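Here’s a minimal sketch of that arrangement in code – a hypothetical extraction model whose low-confidence output gets routed to a human reviewer. The record fields and the confidence threshold are my own illustration, not details from the article:

```python
from dataclasses import dataclass

# Hypothetical threshold; in practice you'd tune this against observed error rates.
CONFIDENCE_THRESHOLD = 0.90

@dataclass
class BillingRecord:
    claim_id: str
    extracted_codes: list[str]  # billing codes the machine pulled from the chart
    confidence: float           # model's self-reported confidence, 0.0 to 1.0

def route(record: BillingRecord) -> str:
    """Let the machine handle the predictable cases; flag the rest for a person."""
    if record.confidence >= CONFIDENCE_THRESHOLD:
        return "auto-approve"
    return "human-review"

# The machine is sure about one claim, unsure about another.
print(route(BillingRecord("C-001", ["99213"], confidence=0.97)))  # auto-approve
print(route(BillingRecord("C-002", ["99215"], confidence=0.62)))  # human-review
```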

This WSJ piece describes one company doing just that:

“At Startup That Says Its AI Writes Medical Records, Humans Do a Lot of the Work” (WSJ)

(Of note: the article also explores the potential issues around patient privacy in this arrangement, and what level of training people need in order to check the machines’ work…)

2023/07/25: Sentiment analysis for news

This is an article about using ChatGPT to analyze news headlines for trading purposes.

Even if you don’t work in the financial space, there are lessons here for applying AI to any field:

“ChatGPT helped a simulated study return up to 512% trading stocks based on news. Here’s the prompt the researchers used — plus the positive and negative takeaways from their findings.” (Insider)

Of note:

1/ The researchers’ work was based on analyzing article headlines – not body content – for sentiment. That hints that headlines may contain a lot of useful info for certain types of machine-based analysis; see the sketch after this list.

(This is especially interesting when you consider that headlines are often free to view, even if the accompanying body content is paywalled.)

2/ This isn’t just about AI. Some of what’s described here is useful for bringing any new technology into a trading environment.

(For those who haven’t worked in that space: money is on the line and you can’t hand-wave away bad results. So trading shops – the ones that survive, at least – mix caution and optimism when it comes to new tools.)

3/ Related to the previous point, consider what happens when all of your competitors have access to the same technology.
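As for the mechanics behind point 1, here’s a minimal sketch of headline-level sentiment scoring. It assumes the openai Python package (the v0.x ChatCompletion API) and an OPENAI_API_KEY in the environment; the prompt is my own paraphrase of the approach, not the researchers’ exact wording:

```python
import openai  # assumes the openai package (v0.x API); reads OPENAI_API_KEY from the environment

# A paraphrased prompt for illustration -- not the researchers' exact wording.
PROMPT_TEMPLATE = (
    "You are a financial expert with stock recommendation experience. "
    "Is this headline good or bad for the stock price of {company}? "
    "Answer GOOD, BAD, or UNKNOWN.\n\nHeadline: {headline}"
)

def score_headline(company: str, headline: str) -> str:
    """Ask the model for a one-word sentiment call on a single headline."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,  # keep the classification as deterministic as possible
        messages=[{
            "role": "user",
            "content": PROMPT_TEMPLATE.format(company=company, headline=headline),
        }],
    )
    return response.choices[0].message.content.strip()

print(score_headline("ExampleCorp", "ExampleCorp beats quarterly earnings expectations"))
```

Note that the function sees only the headline – no body content – which mirrors the researchers’ setup.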

2023/07/26: The winning combination is “humans + AI”

(Image: robot hand, open and reaching out. Photo by Possessed Photography on Unsplash)

I often share articles here about ways to support human effort with AI.

This blog post from 2021 summarizes my take:

Human/AI Interaction: Exoskeletons, Sidekicks, and Blinking Lights

Where could you benefit from exoskeletons, sidekicks, and blinking lights in your field?

2023/07/27: Repeating the Chief Data Officer (CDO) story

Remember about a decade ago, when companies were excited about data and AI? They went as far as creating a new role: the Chief Data Officer (CDO).

Quite a few of those newly minted CDOs were simply hired too soon. Their employers saddled them with unrealistic expectations of what data could do and how long it would take to get there. Many decided to move on. (According to a 2021 HBR article, the average tenure of a CDO was just two years.)

It looks like we’re about to go through this dance again?

“AI jobs: No one knows what a head of AI does, but it’s the hottest new job” (Vox)

And while the head of AI job description varies widely by company, the hope is that those who end up with this new responsibility will do everything from incorporating AI into businesses’ products to getting employees up to speed on how to use AI in their jobs. Companies want the new role to keep them at the forefront of their industries amid AI disruption, or at least keep them from being left behind.

Hmmm. Speaking as a long-time AI practitioner and consultant:

Do I think hiring a Head of AI is a good idea? Under the right circumstances, yes! You definitely want someone with experience to guide your company on its journey through AI.

Do I think that a company can simply hire a Head of AI, step back, and expect magic to happen? Absolutely not.

How can companies improve their chances of success with a Head of AI role? I’ve written a couple of blog posts that may help.

And if you need help beyond that, please reach out. I can provide guidance around your company’s AI practices. In some cases, I can even serve as a fractional Head of AI: https://qethanm.cc/consulting/

2023/07/27: Cars, smartphones, and bloatware

I have a long-overdue blog post on cars being the new smartphones. I guess now I can add “vehicles shipping with bloatware” to that draft?

“People are getting fed up with all the useless tech in their cars” (The Verge)

Naturally, it seems like most people are preferring to use smartphone-mirroring systems like Apple CarPlay and Android Auto, which have proven to be incredibly popular over the years. And indeed, there have been other surveys that indicate people prefer interacting with the apps on their phone than whatever cockamamie bullshit was cooked up by the company that made their car.

2023/07/28: Understanding your company’s privacy policy

When was the last time you read your company’s privacy policy?

For many of you, the answer is “never.” At some point your legal team wrote it based on some industry boilerplate – you know, the kind of vague language that essentially means “we can do anything with your data; we’re not even sure why the word ‘privacy’ is in the title, frankly” – and it became an afterthought.

You may want to take the time to really read it, though. Especially if you’re a CxO, stakeholder, or PR rep. Do you understand the full implications of the terms? Are you comfortable with what it says? When someone asks you about specific provisions therein, are you prepared to answer?

A journalist from The Atlantic recently asked traveler-verification company Clear about its privacy policy. The answer left a lot to be desired.

It starts off well:

[Clear spokesperson Annabel] Walsh told me that the only thing Clear uses member data for is to provide services to its members, and that it does not monetize its data through promotional targeting.

So far, so good, right? But then:

The privacy policy that governs Clear’s services would at least seem to allow it, however.

Hmmm.

The policy stipulates that Clear may use everything but its biometric and health data to conduct marketing and consumer research, contact members about products and services from its marketing partners, and “offer our consumers products or services we believe may be of interest to them.”

That’s not what you want to hear from a company that needs to collect so much information to provide its service.

The more sensitive the data you hold, the more you should promise to keep it in strict confidence.

A lot of discussion on data ethics focuses on how AI-based tools and various data analyses affect people. It’s important to keep that top of mind. But let’s not forget that those AI tools and analyses rely on data. How companies get that data matters.

“The Perfect Service to Make Everyone at the Airport Hate You” (The Atlantic)

2023/07/29: Watermarking AI-generated content

Some of the larger tech/AI firms recently agreed to watermark AI-generated content:

“OpenAI, Google will watermark AI-generated content to hinder deepfakes, misinfo” (Ars Technica)

I think this could prove useful, so long as those watermarks are difficult to fake and there’s a reliable service for verifying them. Preferably a third-party service.

Where have we seen this kind of need before? Hmm. I’m thinking of SSL certificates. We trust the certificate authority, which in turn trusts the party that applied for the cert; therefore, we trust the website using the cert.

What will be the Verisign of AI-generated content watermarks?

This would not be the same as a system that evaluates the content itself to determine whether it came from an AI; instead, it would be a system that checks the content for the watermark.

The former has proven difficult. (Note the number of failed attempts to spot essays generated by ChatGPT.) The latter should be within reach.
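To make that distinction concrete, here’s a sketch of what the verification side might look like. Everything here is hypothetical – the watermark format, the issuer registry, the digest binding – since no such standard exists yet:

```python
import hashlib

# Hypothetical registry of trusted watermark issuers -- the "Verisign" role.
TRUSTED_ISSUERS = {"openai.example", "google.example"}

def verify_watermark(content: bytes, watermark: dict) -> bool:
    """Check the embedded watermark, not the content itself.

    The analogue of validating a certificate chain: we trust the issuer,
    the issuer vouches for the generator, so we trust the label.
    """
    if watermark.get("issuer") not in TRUSTED_ISSUERS:
        return False  # unknown issuer, so no basis for trust
    # Toy integrity check: the watermark binds to a digest of the content.
    return watermark.get("content_digest") == hashlib.sha256(content).hexdigest()

wm = {
    "issuer": "openai.example",
    "content_digest": hashlib.sha256(b"generated text").hexdigest(),
}
print(verify_watermark(b"generated text", wm))  # True
print(verify_watermark(b"tampered text", wm))   # False
```

(A real scheme would also cryptographically sign the watermark and embed it in the content itself; this toy version skips that.)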

(There’s room for debate as to whether “confirming content is generated” would be as useful as “confirming something is fake.” But that’s a story for another day.)

2023/07/30: Tiny problems add up

The latest example of “a few small, innocuous matters can collide and form a massive problem.”

“Typo sends millions of US military emails to Russian ally Mali” (BBC)

(Original source: “A single typo misdirects millions of US military emails to Mali” (FT))