OpenAI split its trust and safety team, creating three separate groups taking on AI risk.

The Information reported that OpenAI has abandoned its search for a replacement for trust and safety head Dave Willner, who stepped down in July. Instead, it is replacing the division with three groups: Safety Systems, Superalignment, and Preparedness.

The company said in a blog post that Safety Systems will focus on the safe deployment of advanced AI models and artificial general intelligence, Superalignment will work on aligning future AI systems that surpass human intelligence, and Preparedness will run safety assessments on foundation models.

