Technology & Science

OpenAI Posts $555k ‘Head of Preparedness’ Vacancy After Safety Leadership Gap

On 29 Dec 2025, OpenAI publicly advertised a new Head of Preparedness role, vacant since mid-2024, offering a $555,000 salary plus equity to rebuild its AI-risk safety arm. CEO Sam Altman flagged the job as immediately high-pressure.

Focusing Facts

  1. Job listing went live 29 Dec 2025 with stated base salary of $555,000 and equity, per OpenAI careers page.
  2. The position has been unfilled since July 2024, when former head Aleksander Madry shifted to an AI-reasoning post.
  3. OpenAI is simultaneously fighting at least two U.S. wrongful-death lawsuits (filed 2025) that allege ChatGPT interactions contributed to suicides and a murder–suicide.

Context

Tech companies rarely elevate risk oversight to C-suite parity; the last comparable scramble was the 1947–1950 creation of the U.S. Atomic Energy Commission after physicists raised existential alarms about uncontrolled fission research. OpenAI’s rush echoes that moment: breakthrough capability raced ahead of governance, then bureaucracy was hastily erected around the threat. The job ad signals a structural trend—AI labs shifting from abstract principles to operational safety pipelines as models cross into cyber, bio, and mental-health domains once reserved for states. If these pipelines mature, future AI could resemble civil aviation (zero-fatality aspiration achieved over 50 years); if not, we may replay the early 1990s internet security chaos, scaled to cognition. Either way, the hire matters less for the individual than for the precedent: in 2125 historians may mark 2025 as the year commercial AI firms began institutionalising self-constraint—or admitted they could not police themselves, inviting external regulators to step in.

Perspectives

Tech industry trade outlets

e.g., Tech Times, CIOL: Hiring a Head of Preparedness shows OpenAI is proactively building rigorous safety infrastructure as AI enters a new era of real-world risk. Coverage leans on OpenAI’s own framing, largely trusting the company’s statements and downplaying lawsuits or internal safety resignations, reflecting an industry-friendly angle that prizes innovation momentum.

Left-leaning mainstream media

e.g., The Guardian, Yahoo! Finance: The vacancy underscores alarming gaps in oversight, with experts fearing advanced AI could harm humanity while firms like OpenAI largely regulate themselves. Stories stress worst-case scenarios and dramatic language ("impossible job," AI may "turn on us") to press for stricter external regulation, amplifying fear more than technical nuance.

Business/financial outlets spotlighting compensation

e.g., Gulf News, Entrepreneur, Economic Times: The $555,000 salary and equity package signal how valuable and high-stakes AI-risk leadership roles have become inside profit-driven tech firms. By foregrounding the eye-catching pay and career upside, these pieces risk trivializing deeper ethical concerns and cater to reader interest in lucrative jobs rather than systemic safety debates.
