I've been posting short notes on LinkedIn lately, and I figured I should periodically collect them and post them here.
What you see here is the last week's worth of links and quips I have shared on LinkedIn (from Monday through Sunday).
For now I'll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.
Why watch Seinfeld reruns when you can ... generate whole new episodes? Or better yet, one long episode that runs 24x7?
"Seinfeld: Künstliche Intelligenz parodiert 24 Sunde pro Tag die Show über Nichts" (Der Spiegel)
(Coverage in English, courtesy of Vice.)
How soon till studios load up on GPUs? They could then churn out infinite content -- that is: "space on which to sell ads" -- with minimal human effort.
(Notice, I didn't say it'd be good content. But, perhaps, still better than reality TV?)
#ai #gpu #seinfeld
There's a lot to be said about this, so I'll focus on one thing:
This article demonstrates why Google has rushed to build a competitor to ChatGPT.
"An ML model" and "a search" engine' may seem like very different animals, but they are flip-sides of the same coin:
Why does this matter?
Like a traditional search engine, ChatGPT was built on a lot of data from the public internet.
Unlike a search engine, ChatGPT summarizes those results. It doesn't hand you a list of links that you need to click through yourself. So for (certain types of) research, it's a step ahead of a plain search engine.
No wonder Google has been in a rush to compete with ChatGPT.
"Colombian judge says he used ChatGPT in ruling" (The Guardian)
Odd -- I thought the phase of AI-washing had ended. Maybe companies are laying claim to AI again now that crypto has lost some of its sheen?
"From Shoes to Insurance: Startups Latch Onto the AI Hype Cycle" (Bloomberg)
(What will be the next fad? One thought: maybe we'll rename "AI" again and reset the hype cycle around "collect and analyze data for fun and profit.")
#ai #data
NIST defines a framework for #AI #risk management
This is a large, yet often overlooked, #risk when it comes to technology:
Product/app teams that, lacking domain or social knowledge, insist on fitting messy concepts into neat little boxes. And then treat any oversight as a corner case.
"This Is What Netflix Thinks Your Family Is" (The Atlantic)
This concern goes well beyond what Doctorow explains here about defining "family" for Netflix password sharing. (Consider the YouTube and Facebook copyright infringement detection systems that didn't properly account for ... classical music.)
Food for thought as you build your next tech- or #AI-based system. Have you really thought through the use cases, and tested your assumptions/definitions?
If machines are going to do the work, it's important to know how to evaluate the machines:
#AI #ChatGPT #GenerativeAI
"At This School, Computer Science Class Now Includes Critiquing Chatbots" (New York Times)
The top failure modes of an ML/AI modeling project (Part 2)
Risk mitigation for your ML/AI projects
Some thoughts on generative AI
My take on tools such as DALL-E, Stable Diffusion, and ChatGPT