Technology & Science
OpenAI Advertises $555k ‘Head of Preparedness’ to Reboot Internal Safety Unit
Over 27-30 Dec 2025, Sam Altman publicly promoted a vacancy for a new Head of Preparedness, reviving a role vacant since April 2025 and tasking it with running OpenAI’s risk-mitigation framework for frontier AI.
Focusing Facts
- The job listing, posted 27 Dec 2025, offers up to $555,000 in annual salary plus equity for a San Francisco–based executive to lead the Preparedness Framework.
- The role will oversee capability evaluations and launch-gate decisions across cyber, bio, mental-health and autonomous self-improvement risks for GPT-5.x and future models.
- Previous Preparedness head Aleksander Madry was reassigned in 2024; his interim successors had departed by April 2025, leaving the position vacant until this announcement.
Context
Big tech firms often create internal safety czars only after a jolt, as with the Manhattan Project’s 1943 Health Division or Facebook’s 2016 ‘Integrity’ team, once risks begin to threaten legitimacy. OpenAI’s move echoes those moments: rapid capability gains, public lawsuits over user suicides, and models uncovering zero-day vulnerabilities recall early nuclear criticality accidents and Microsoft’s security crises of the 1990s, each of which prompted hurried institutional fixes. Strategically, the hiring reveals two intersecting long-wave trends: (1) private labs are racing to codify self-regulation to pre-empt external regulation, and (2) AI companies are normalising multibillion-dollar burn rates while selling a narrative of existential vigilance to investors and society. Whether the new Preparedness chief becomes a genuine brake on deployment or a compliance shield will shape norms for all frontier labs. On a century horizon, the episode may mark either the embryonic formation of an internal ‘IAEA for algorithms’, setting precedents for model-launch inspections, or yet another instance in which commercial incentives outpace in-house overseers, as with early chemical and fossil-fuel safety offices. History suggests the durability of such safeguards will depend less on one hire and more on whether independent, enforceable governance follows.
Perspectives
Indian business news outlets
e.g., The Financial Express, Mint, The Times of India, India Today — Present OpenAI’s hunt for a Head of Preparedness as a proactive, responsible move to strengthen AI safety while spotlighting the eye-catching $555,000 salary. Coverage leans on company press material and the big paycheque angle, soft-pedalling earlier leadership churn and pending lawsuits that might complicate the safety narrative.
US tech media
e.g., Engadget, RTTNews — Cast the job opening as OpenAI’s reaction to mounting criticism, wrongful-death suits and mental-health worries, signalling the firm is scrambling to predict and curb harms its models are already accused of causing. Emphasis on lawsuits and worst-case scenarios attracts readership but may overstate legal peril and underplay the company’s existing safety groundwork.
Financial commentary outlets
e.g., The West Australian quoting The Economist — Fold the hire into a broader argument that OpenAI’s soaring spending and cash burn make 2026 a make-or-break year, with costly new roles adding to an unsustainable juggernaut. The profit-and-loss lens encourages a sceptical tone that may downplay technical progress or the intrinsic value of investing heavily in safety.