What you see here is the past week’s worth of links and quips I’ve shared on LinkedIn, from Monday through Sunday.
For now I’ll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.
Companies often say that a remote team just can’t hold a candle to an in-office team. And the thing is … they’re usually correct!
They’re correct because they haven’t really tried to make remote work a success.
Did your company treat remote work as a holdover until things “got back to (pre-pandemic) normal”? That’s your sign right there.
How can you design your remote teamwork for success, then?
We can borrow a technique I’ve used to explore business models. Instead of asking “does this work?” I ask “under what circumstances could this work? If we had to make this work, what would it take?”
Applying that to your remote work plans:
- Instead of saying “people can’t collaborate on a remote team,” ask yourself: “how can we help people collaborate effectively in a distributed setting?”
- Instead of saying “we need people in the office X days a week,” ask yourself: “why do we need people in the office?”
- Instead of saying “the office is a magic place,” ask yourself: “what, specifically, is magic about the office? How could we get the essence of this magic in a different environment, if we needed to do so?”
The key is to break out of your in-office autopilot and see what’s really important about the work. Maybe you genuinely need an in-office team. Maybe the hybrid setup is the way to go. But if you’ve never worked through those questions, how would you know?
(For more details, check out my 2021 Radar article “Remote Teams in ML/AI.” Most of what I wrote will apply to any tech team, not just AI teams.)
Why am I thinking about this now? Frankly, I’ve been thinking about this for more than a decade. And when I see companies struggle with their RTO plans, one common element is that they assume an in-office arrangement.
This article describes one company’s RTO plan. I’m curious to see how it works out long-term:
Smucker’s chief people officer, Jill Penrose, faced a dilemma: How could she tell corporate employees who successfully did their jobs remotely for years that working at home no longer worked?
Smucker needed to come up with a policy that worked for the business and that could be explained in a way that made sense to staff, Penrose said. “Every company was dealing with employees for two years who had been performing quite well working remotely,” she said.
Interesting use of ChatGPT: provide a boost to students who don’t have access to one-on-one, personalized assistance.
(Granted, this is far from ideal. But it could make for an interesting test of the “AI-based assistant” concept.)
“‘A real opportunity’: how ChatGPT could help college applicants” (The Guardian)
According to Rick Clark, Georgia Tech’s assistant vice-provost and executive director of undergraduate admission, AI has the potential to “democratize” the admissions process by allowing the kind of back-and-forth drafting process that some students get from attentive parents, expensive tutors or college counselors at small, elite schools. “Here in the state of Georgia the average counselor-to-student ratio is 300 to one, so a lot of people aren’t getting much assistance,” he told me. “This is a real opportunity for students.”
Google’s Bard and SGE bots apparently hold some rather extremist views:
“Google’s AI Bots Tout ‘Benefits’ of Genocide, Slavery, Fascism, Other Evils” (Tom’s Hardware)
(Credit where it’s due: I originally found this in Der Spiegel.)
Yes, yes, I realize that these bots don’t actually “know” anything. They’re just emitting text based on (grammatical, sequence-of-word) patterns uncovered in a mountain of training data. But as far as the general public is concerned, Bard and SGE may as well speak and hold opinions.
So before you laugh at Google, ask yourself about your company’s AI risk management practices:
1/ What are you doing to limit the risks of a misbehaving AI chatbot?
2/ What happens if, despite your efforts, this digital brand ambassador still manages to say something terrible?
3/ How would your brand recover?
For #1 and #2, remember that Google is a well-funded company, with a ton of AI experience, and staffed with some extremely bright people. If Google can stumble with an AI chatbot, anyone can.
As for #3 … Google is large enough to absorb this reputation hit. You’re not.