Following Employee Revolt, OpenAI Establishes New Safety Board

OpenAI announced on Tuesday the establishment of a new committee dedicated to advising the company’s board on matters of safety and security. This move comes just weeks after the company dissolved its previous AI safety team.

According to a blog post by OpenAI, the newly formed committee will be spearheaded by CEO Sam Altman, along with Bret Taylor, the board chair, and board member Nicole Seligman. The creation of this committee follows significant changes within the company’s leadership and strategy concerning AI safety.

Earlier this month, Jan Leike, an OpenAI executive focused on safety, resigned from his position, citing insufficient investment in AI safety work and escalating tensions with the company’s leadership as his reasons. Leike’s departure highlighted growing concerns within the organization about its commitment to AI safety.

In addition to Leike’s exit, Ilya Sutskever, a prominent figure in OpenAI’s “superalignment” team, also left the company. The superalignment team was tasked with ensuring that AI development aligned with human needs and priorities. Sutskever had previously played a pivotal role in the unexpected removal of Sam Altman as CEO last year, a decision that was reversed when Sutskever supported Altman’s return.

In response to these high-profile departures, OpenAI stated that dissolving the superalignment team and redistributing its members across the company would better serve its superalignment objectives. An OpenAI spokesperson told CNN that the restructuring was intended to sharpen the company’s focus on aligning AI development with its overarching goals.

In its blog post, OpenAI also revealed that it has started training a new AI model intended to succeed GPT-4, the current model powering ChatGPT. The development of this new model marks another step toward the company’s vision of achieving artificial general intelligence (AGI).

OpenAI expressed pride in developing and releasing models that lead the industry in both capability and safety. However, the company also welcomed robust debate at this critical juncture. The blog post emphasized the importance of ongoing evaluation and improvement of OpenAI’s safety protocols.

One of the initial tasks assigned to the new Safety and Security Committee will be to assess and refine OpenAI’s safety processes and safeguards over the next 90 days. At the end of this period, the committee will present its recommendations to the full board. Following the board’s review, OpenAI plans to publicly share an update on the adopted recommendations in a manner consistent with safety and security protocols.

This proactive approach reflects OpenAI’s commitment to addressing safety and security concerns amid the rapid advancement of AI technology. By establishing this committee and inviting open discourse, the company aims to reinforce its dedication to developing AI in a safe and responsible manner.
