My take on the Zillow Offers shutdown
2021-12-06 | tags: AI risk

Most people know Zillow as that app they use to window-shop for homes. The company has developed an interesting dataset over the years -- past sale prices, square footage, number of bedrooms, and so on -- and built some proprietary data products (such as its "Zestimate" home price estimates) on top of that.

It wasn't much of a surprise, then, when Zillow decided to expand its territory into home buying back in 2018. With the Zillow Offers program, the company would purchase homes with the intent to quickly sell ("flip") them to another buyer.

You can imagine that the company used ML/AI to develop pricing models. You can further imagine that this kind of business model required price predictions accurate enough for Zillow to turn a profit on the flips.
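
To make those economics concrete, here's a toy sketch (my own illustration, with made-up numbers; not Zillow's actual model or margins): when the target margin on a flip is only a few percent, a price prediction that is off by a similar few percent erases the profit entirely.

```python
# Toy illustration (invented numbers, not Zillow's): how prediction error
# interacts with the thin margins of a home-flipping business.

def flip_profit(true_resale_price, predicted_price, margin=0.05, costs=0.03):
    """Buy at the predicted price discounted by the target margin, then sell
    at the true market price, paying transaction/holding costs on the sale."""
    purchase_price = predicted_price * (1 - margin)
    return true_resale_price - purchase_price - costs * true_resale_price

true_price = 400_000  # what the home will actually resell for

# Perfect prediction: the target margin (minus costs) becomes profit.
print(flip_profit(true_price, predicted_price=400_000))  # +8,000

# Overestimate the price by just 5%: the profit is gone, and then some.
print(flip_profit(true_price, predicted_price=420_000))  # -11,000
```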

How did that turn out? Well, Zillow recently announced that it had overpaid on too many homes and was shutting down Zillow Offers.

Since I talk a lot about practical, business-focused ML/AI, a number of people have asked for my take on what happened with Zillow Offers. So I'll share a short version here.

(Please note that I have no insider knowledge of what happened at Zillow. Everything I say here is based on my (possibly hazy) recollection of what I've seen in public news reports.)

Do I think that this marks the end of the road for ML/AI?

Not at all.

This is neither the first nor the last time a company has encountered difficulty applying ML/AI to a problem. This story only made it to the news because Zillow is a big name.

There are other players in this space -- known as the "i-buying" field -- and it's entirely possible that someone will get it to work.

Who knows? Perhaps Zillow will have another run at it. Time will tell.

Why didn't this pan out for Zillow?

As I have zero inside knowledge of Zillow's operations, I can't say for sure.

From the outside, though, we can all see that Zillow is a large, well-established player in the home-data market. I think it's also reasonable for us to assume that Zillow built solid ML/AI teams of experienced, talented people.

All of this gave Zillow Offers a very good chance at winning this game. But that's just a chance, not a guarantee. Not every business model built on predicting prices wins out. That's just a fact of ML/AI.

So was this a failure of ML/AI?

It's important to point out that "a failure of ML/AI to predict something" simply means that a project didn't pan out. It is not the same as a failure of the technology behind ML/AI. Nor is it a failure of the team(s) that worked on the project.

To put this all in perspective: predictions, in general, can be tricky.

Zillow Offers, in particular, was trying to predict prices on non-fungible assets (homes), which have fluctuating (usually low) liquidity, in a market that has been upended by a global pandemic (which brought shortages of both labor and materials). Zillow definitely had some hurdles to clear.

(As a side note: I would maybe say that most homes are semi-fungible, in that a potential buyer might not see a meaningful difference between two similar homes in the same area. Maybe. But even semi-fungible is not entirely fungible.)

Does this mean that predicting home prices is a bad idea?

I think it means that Zillow didn't have the data to make predictions at the level of accuracy it needed to make a profit.

From what I recall, Zillow Offers ramped up its buying while other i-buyers were scaling theirs back. So it sounds like Zillow was fairly ambitious, which may have led to some overly optimistic risk management practices.

I expect that some companies will eventually figure out the right mix of features, with the right training data, to make this work.

I further expect that sorting out the price-prediction piece for the i-buyer companies will improve the liquidity of the wider housing market by reducing friction in the buying and selling process. (If people find it easier to conduct a real estate transaction, they'll likely move more often. That will lead to next-order effects, such as more business for moving companies and furniture sellers.)

In the meantime, let's give Zillow some credit: they had a plan to make it easier to buy and sell homes. If you've ever sold a home, you know how much of a pain that can be: staging the property, scheduling showings, all while you still live there ... This goes double when people are spending more time at home due to the global pandemic. From what I understand, Zillow Offers made it possible to sell your property without any of that hassle.

So what's the wider lesson here?

As far as I'm concerned, the story of Zillow Offers is a cautionary tale about using ML/AI.

This was a real-world reminder that there are no guarantees of success in predicting the future. Doubly so when there are so many micro- and macro-level issues outside of your control.

When you employ ML/AI systems that make decisions, or that help humans make decisions, you need strong, realistic risk management practices (that apply to both human and machine) to stay out of trouble.
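
As a purely hypothetical sketch of what one such guardrail might look like (my own illustration, not anything Zillow actually ran): cap how far a model-proposed offer can stray from recent comparable sales, and route anything outside that band to a human reviewer.

```python
# Hypothetical guardrail sketch (not Zillow's system): sanity-check a
# model-proposed offer against recent comparable sales before acting on it.

from statistics import median

def review_offer(model_offer, comparable_sales, max_deviation=0.10):
    """Return ('auto', offer) if the offer sits within max_deviation of the
    median comparable sale; otherwise flag it for human review."""
    if not comparable_sales:
        return ("human_review", model_offer)  # no comps -> a person decides

    benchmark = median(comparable_sales)
    deviation = abs(model_offer - benchmark) / benchmark

    if deviation <= max_deviation:
        return ("auto", model_offer)
    return ("human_review", model_offer)

# Example: the model wants to offer $480k, but nearby homes sold near $400k.
print(review_offer(480_000, [395_000, 410_000, 402_000]))
# -> ('human_review', 480000)
```

The point isn't this particular rule; it's that the machine's decisions get checked against something outside the model, and a human stays in the loop for the outliers.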

New Radar Article: "Remote Teams in ML/AI"

I've published an article on O'Reilly Radar: The key ingredient to a successful remote team? Leadership buy-in.

New DSS Podcast episode: Spatial Data and R&D Projects

My panel discussion with Linda Liu (Hyrecar) and Giacomo Vianello (Cape Analytics)