OpenAI Announces New Head of Preparedness to Combat AI Risks

At a Glance

  • OpenAI is hiring a new executive to study emerging AI risks.
  • The role, titled Head of Preparedness, offers $555,000 plus equity.
  • It follows the company’s 2023 launch of a preparedness team and recent lawsuits over mental-health impacts.
  • Why it matters: The position signals OpenAI’s heightened focus on preventing AI from causing severe harm.

OpenAI’s latest hiring move signals a deeper commitment to managing AI risks spanning cybersecurity, mental health, and beyond. The new Head of Preparedness will oversee the company’s preparedness framework and help shape how it monitors frontier capabilities.

Role and Responsibilities

According to the job description, the executive will execute OpenAI’s preparedness framework, which outlines the company’s approach to tracking and preparing for frontier capabilities that create new risks of severe harm. The role also involves working with cybersecurity defenders to ensure attackers cannot exploit advanced models, and it extends to biological capabilities and self-improving systems.

Compensation: $555,000 plus equity.

Key duties:

  • Execute the preparedness framework.
  • Track emerging frontier capabilities.
  • Coordinate with security and safety teams.

Background and Context

OpenAI first announced the creation of a preparedness team in 2023, tasked with studying potential catastrophic risks ranging from immediate threats like phishing attacks to more speculative ones such as nuclear threats. Less than a year later, the team’s head, Aleksander Madry, was reassigned to focus on AI reasoning, and other safety executives have since left the company or moved into roles outside preparedness and safety.

The company recently updated its Preparedness Framework, stating it might “adjust” safety requirements if a competing AI lab releases a high-risk model without similar protections.

Mental-health lawsuits: Recent suits allege that OpenAI’s ChatGPT reinforced users’ delusions, deepened social isolation, and in some cases contributed to suicide. OpenAI says it continues working to improve ChatGPT’s ability to recognize signs of emotional distress and connect users to real-world support.

Quote from Sam Altman:

> “If you want to help the world figure out how to enable cybersecurity defenders with cutting-edge capabilities while ensuring attackers can’t use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying.”

Key Takeaways

  • OpenAI is hiring a Head of Preparedness to address AI risks across multiple domains.
  • The role offers $555,000 plus equity and will shape the company’s safety framework.
  • The move follows past team changes and legal scrutiny over ChatGPT’s mental-health impact.

OpenAI’s new appointment underscores the company’s evolving strategy to preempt and mitigate the severe harms that could arise from advanced AI systems.

Author

  • Daniel J. Whitman is a Los Angeles–based journalist who reports on transportation, infrastructure, and urban development for News of Los Angeles. A former Daily Bruin reporter, he’s known for investigative stories that explain how transit and housing decisions shape daily life across LA neighborhoods.
