Dall-E, Stable Diffusion, and now ChatGPT have made "generative AI" a household term. They've also led people to develop certain views of what this technology represents and how it impacts society.
I think this is a good time for everyone to develop a deeper understanding of what ML/AI (not just ChatGPT) really is and how it works.
Generative AI systems are built on machine learning (ML) models. An ML model represents the patterns (correlations) that an algorithm has uncovered in a training dataset. And when a model "generates content" or "makes predictions," it is effectively replaying that training data based on those patterns.
That's a mild oversimplification for the sake of brevity, but that's pretty much it.
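To make that concrete, here's a minimal sketch of the idea using a toy linear-regression example. (The data, the task, and the scikit-learn library are my choices for illustration; they're not what any generative AI product actually uses.)

```python
# Toy example: "training" uncovers a pattern in data; "prediction" replays it.
# Hypothetical data; scikit-learn is assumed to be installed.
from sklearn.linear_model import LinearRegression

hours_studied = [[1], [2], [3], [4], [5]]   # inputs
exam_scores = [52, 58, 65, 71, 78]          # observed outputs

model = LinearRegression()
model.fit(hours_studied, exam_scores)       # "learning": find the slope and intercept

print(model.predict([[6]]))                 # ~84: the learned pattern, replayed on a new input
print(model.coef_, model.intercept_)        # the "model" is literally just these two numbers
```

A generative model is vastly bigger (billions of learned numbers instead of two), but the principle is the same: patterns extracted from training data, replayed on new input.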
This field uses terms like "intelligence" and "understanding" to describe what goes on, but an ML model has neither.
The model has no real-world context, no opinions. It doesn't even really see words or images the way people do. (Deep down, it only sees a bunch of numbers. Which is why I often joke that ML is just "linear algebra with better marketing.")
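If you're wondering what "a bunch of numbers" looks like in practice, here's a rough sketch. (The character-code trick is a deliberately crude stand-in for real tokenization, and the four-pixel image is nothing like a real photo, but the point holds: by the time a model sees text or an image, it's already arrays of numbers.)

```python
# Rough sketch: text and images become arrays of numbers before a model sees them.
import numpy as np

text = "cats are great"
token_ids = [ord(ch) for ch in text]          # crude stand-in for real tokenization
print(token_ids)                              # [99, 97, 116, 115, 32, ...]

image = np.zeros((2, 2, 3), dtype=np.uint8)   # a tiny 2x2 RGB "image"
image[0, 0] = [255, 0, 0]                     # one red pixel is just three numbers
print(image.reshape(-1))                      # flattened: what the math actually operates on
```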
ML models – generative AI included – are tools for automation. Factory machines? They automate movement. Software? That automates fixed business rules, like "if the customer has a thousand loyalty points and they've rented a small car, then offer them a complimentary upgrade." ML/AI is like software, but instead of being told the rules in advance, it uncovers the rules – the patterns – by looking at the data.
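Here's that contrast as a quick sketch. (The loyalty-points fields, thresholds, and historical data are all made up for illustration, and the decision tree is just one convenient stand-in for "a model that learns rules from data.")

```python
# Conventional software: the rule is written down in advance.
def offer_upgrade(customer):
    return customer["loyalty_points"] >= 1000 and customer["car_class"] == "small"

# ML: the rule is never written down; it's uncovered from historical examples.
# Toy data and a decision tree for illustration; scikit-learn assumed installed.
from sklearn.tree import DecisionTreeClassifier

# Columns: [loyalty_points, rented_small_car]; label: 1 = an upgrade was offered.
X = [[1200, 1], [1500, 1], [900, 1], [2000, 0], [300, 1], [1100, 0]]
y = [1, 1, 0, 0, 0, 0]

learned_rule = DecisionTreeClassifier().fit(X, y)
print(learned_rule.predict([[1300, 1]]))   # [1]: the tree recovered a rule much like the hand-written one
```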
So, what does that mean for the world?
Can ML models be used to cause harm? Yes. As with any tool, a person or company can misuse it to cause harm. Remember that the technology itself is neutral; the problems are rooted in how people choose to use it. If people train a generative model on trashy data full of hateful, divisive content, it will generate hateful, divisive content. Garbage in, garbage out.
Can ML models be used for good? Similar to the previous point, a tool can be used to help as well as to harm. It's up to the entity that wields it.
Can ML models take my job? This is really two questions disguised as one.
The first is whether an ML-based system would do the job as well as a person. The answer here is "most likely not." A single ML model usually handles a single task, not an entire job. (Consider how many tasks you perform as part of your job.) So maybe a model could reduce a person's workload by offloading the dull, repetitive, predictable portions of a task. Emphasis on "maybe."
The second question is whether a cost-conscious employer would choose to use ML even when it's not a good fit. The answer is, "that depends on the employer." Some will let dollar signs blind them to an ML model's obvious weaknesses. I think that's a bad idea, but it's not my decision to make.
I've read that Dall-E, Stable Diffusion, and ChatGPT are based on stolen content. Is this true? From what I understand, some of the more popular models have been trained on data from the public web. Whether this content was "stolen" – that is, whether people broke the law – is for the courts to decide.
And there you have it.
What I've written is hardly everything there is to know about generative AI tools. But it should be a start in helping you understand what they really do, what they can't do, and what impact they may have on your world.
Weekly recap: 2023-02-12
random thoughts and articles from the past week
When good metrics are bad
Some metrics are good, some are bad. But even the good ones can be bad if you're not careful.