OpenAI has opened a search for a Head of Preparedness, a senior “safety chief” role advertised at $555,000 plus equity, as the company rebuilds key safety leadership after a run of high-profile departures and internal reshuffles across its risk and alignment groups.
What OpenAI is hiring for
OpenAI’s Head of Preparedness role centers on running the company’s Preparedness Framework, its process for tracking frontier-model capabilities that could create new risks of “severe harm,” and on coordinating evaluations, threat models, and mitigations into an operational safety pipeline.
The job is positioned as a cross-functional leadership post: guiding technical work across multiple risk areas (commonly framed as domains like cyber and bio) and ensuring evaluation results influence launch decisions and safety cases.
Multiple listings and reproductions of the job details describe the compensation as $555,000 plus equity and the location as San Francisco.
| Item | What’s disclosed publicly | Why it matters |
| --- | --- | --- |
| Role name | Head of Preparedness | Signals a single accountable owner for frontier-risk readiness. |
| Core mission | Lead strategy/execution of the Preparedness Framework; coordinate evaluations, threat models, mitigations | Ties “risk forecasting” to concrete go/no-go launch inputs. |
| Pay | $555,000 + equity | Suggests OpenAI is paying a premium for scarce senior safety leadership. |
| Related Preparedness hiring | Postings such as Research Engineer, Preparedness (Biology/CBRN) list $200K–$370K + equity | Shows a broader Preparedness build-out beyond one executive hire. |
Why the Preparedness role matters now
OpenAI has publicly described Preparedness as focused on catastrophic or severe-harm risks from increasingly capable frontier models—prioritizing capabilities that are plausible, measurable, severe, net new, and “instantaneous or irremediable.”
That framing matters because it pushes safety work beyond general content moderation into pre-deployment testing, risk thresholds, and mitigation design for high-impact misuse categories.
In practice, the Head of Preparedness role is designed to connect technical evaluation work to real release governance, so that emerging capabilities and threat models translate into actionable safeguards and launch decisions.
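To make the release-gate idea concrete, the sketch below shows one way evaluation results could feed a go/no-go input for a launch review. It is a minimal illustration, not OpenAI’s published framework or any real implementation: the risk categories, threshold levels, and function names are hypothetical, and the actual framework defines its thresholds and governance steps in far more detail.

```python
from dataclasses import dataclass
from enum import IntEnum


class RiskLevel(IntEnum):
    # Hypothetical ordered capability-risk levels; names are illustrative only.
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3


@dataclass
class EvalResult:
    category: str          # e.g. "cyber" or "bio"; tracked categories are assumed
    measured: RiskLevel    # level indicated by pre-deployment evaluations
    mitigated: RiskLevel   # level after planned safeguards are applied


def release_gate(results, deploy_ceiling=RiskLevel.MEDIUM):
    """Return (ok_to_launch, blockers).

    A model clears the gate only if every tracked category sits at or below
    the deployment ceiling *after* mitigations. The ceiling itself is a policy
    choice made outside this function (e.g. by a safety review body).
    """
    blockers = [
        f"{r.category}: post-mitigation level {r.mitigated.name} "
        f"exceeds ceiling {deploy_ceiling.name}"
        for r in results
        if r.mitigated > deploy_ceiling
    ]
    return (not blockers, blockers)


if __name__ == "__main__":
    # Hypothetical evaluation outcomes for two tracked domains.
    results = [
        EvalResult("cyber", measured=RiskLevel.HIGH, mitigated=RiskLevel.MEDIUM),
        EvalResult("bio", measured=RiskLevel.HIGH, mitigated=RiskLevel.HIGH),
    ]
    ok, blockers = release_gate(results)
    if ok:
        print("launch cleared")
    else:
        print("launch blocked:")
        for b in blockers:
            print(" -", b)
```

The design choice worth noting is that the gate checks post-mitigation risk against a ceiling set by policy, which mirrors the framework’s emphasis on pairing capability thresholds with safeguards rather than reacting to raw evaluation scores alone.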
Safety leadership turnover and the “exodus” backdrop
The new senior Preparedness search lands after a widely documented period of departures among OpenAI’s long-term safety and alignment leadership, including the exits of Ilya Sutskever and Jan Leike in 2024.
Following those departures, OpenAI dissolved (or integrated) its Superalignment team less than a year after announcing it, with members reassigned into other parts of the organization.
Public accounts around that period also highlighted internal disagreement about prioritization—especially concerns that safety, monitoring, preparedness, and societal impact were being outweighed by product momentum.
| Date (publicly reported) | Safety/oversight development | What changed |
| --- | --- | --- |
| May 2024 | Senior alignment leaders departed | Key safety leadership continuity took a hit during a high-stakes release cycle. |
| May 2024 | Superalignment team dissolved/integrated | Work was folded into other research efforts; some members were reassigned. |
| May–June 2024 | Board-level Safety & Security Committee announced and later updated | Added formal board oversight structure for “critical safety and security decisions.” |
| Apr 2025 | OpenAI published an updated Preparedness Framework | Publicly clarified how high-risk capabilities are categorized and prioritized. |
| Dec 2025 | Head of Preparedness role posted with $555K + equity | OpenAI signaled renewed emphasis on a top owner for frontier-risk preparedness. |
How OpenAI’s safety governance is evolving
In May 2024, OpenAI’s board created a Safety and Security Committee tasked with making recommendations on “critical safety and security decisions” across OpenAI projects and operations.
OpenAI said the committee’s first assignment was to evaluate and further develop the company’s processes and safeguards over 90 days and then deliver recommendations to the full board, followed by a public update on adopted recommendations shared in a manner consistent with safety and security.
OpenAI’s announcement also listed a mix of internal technical and policy leaders participating, including the Head of Preparedness (Aleksander Madry at the time), the Head of Safety Systems, the Head of Alignment Science, the Head of Security, and the Chief Scientist, alongside board leadership.
Final thoughts
OpenAI’s decision to recruit a Head of Preparedness at $555,000 plus equity formalizes a single executive owner for frontier-risk evaluation and mitigation at a time when AI labs face intensifying scrutiny over how safety work influences real product launch decisions.
The key question for industry watchers and regulators is not only who fills the role, but whether Preparedness outputs consistently translate into release gates, mitigations, and transparent updates that match the company’s published framework.
The hiring also signals that OpenAI is continuing to staff Preparedness as a function—alongside other safety-related roles—rather than treating it as a short-term initiative tied to any single team structure.