A key OpenAI hire
2026-02-04

OpenAI has just filled a key role to "[ensure] that the company safely develops and deploys AI systems and prepares for the risks they pose."

This is a pretty big deal. As I've noted elsewhere, AI creates new opportunities and new dangers. Companies that build AI-based solutions need to think through those possibilities in order to protect themselves and others.

When I look at this OpenAI Head of Preparedness role, I have two questions:

1/ What will make the person in this role successful?

The real test will be how much leeway the hire has to actually fulfill the duties outlined in the job description. And that, in turn, will require that they get a real voice in key decisions about what to build and how it will be used.

This role should also have the authority to spread the ideas (and, importantly, the obligations) of AI risk management throughout the company. If the hire ends up being the only person concerned with this topic, that's a bad sign.

2/ What is -your- company doing to assess and address risks related to AI?

By hiring a Head of Preparedness, OpenAI has given you an opening to raise this point with your fellow executives.

You probably don't need to hire someone at $550k/yr. But you'd do well to reflect on what I said above. You'll need to think through the potential upsides and downsides of using AI, then act accordingly.

Does your company need to kickstart its AI preparedness and AI risk efforts? Reach out. I can help.