The inquiry into Sam Altman’s dramatic termination from OpenAI more than three months ago has come to a close, a major win for the prominent CEO as he attempts to take back full control of the AI startup he helped build.
In a press conference on Friday, OpenAI said that Mr. Altman, who rejoined the company only five days after being fired in November, had done nothing to warrant his dismissal and would regain the one position that had still eluded him: a seat on the board of directors.
Mr. Altman’s dismissal stunned Silicon Valley and threatened the survival of one of the most significant startups in the tech industry. It also raised questions about whether OpenAI was prepared to lead the industry’s fervent push into artificial intelligence, with or without Mr. Altman at the helm.
When Mr. Altman returned to OpenAI in November, he agreed to an inquiry into his conduct and the board’s actions, but he was not given back his board seat. The two members who voted to remove him also agreed to resign, and their replacements, who were not company employees, oversaw a probe conducted by the law firm WilmerHale. On Friday, Bret Taylor, OpenAI’s board chairman, said the much-awaited investigation into the episode was complete, though the company did not make the report public.
According to the company, the law firm’s review concluded that while the OpenAI board had the right to fire Mr. Altman, his actions did not warrant his dismissal.
Referring to Greg Brockman, the company president who resigned in protest after Mr. Altman was fired, Mr. Taylor said: “The special committee recommended and the full board expressed their full confidence in Mr. Altman and Mr. Brockman. We are enthusiastic and fully behind Sam and Greg.”
In response to complaints about a lack of diversity on its board, OpenAI also added three women as directors: Fidji Simo, the CEO of Instacart; Sue Desmond-Hellmann, the former CEO of the Bill & Melinda Gates Foundation; and Nicole Seligman, the former general counsel of Sony.
Mr. Taylor, one of the replacements named to the OpenAI board in November, predicted that the board would keep growing.
With the report and the new board members, OpenAI’s leadership hoped to put the turmoil surrounding Mr. Altman’s dismissal behind it. The episode raised numerous concerns about his leadership and about the peculiar structure of the San Francisco company, in which a nonprofit board supervises a for-profit business.
But because OpenAI has not released the report, many questions about the company remain unanswered. Insiders have asked whether Mr. Altman had an excessive amount of control over how the probe was conducted.
Helen Toner and Tasha McCauley, the two OpenAI board members who departed late last year, said in a statement: “As we told the investigators, deception, manipulation, and resistance to thorough oversight should be unacceptable. We trust that the new board will effectively oversee OpenAI and ensure that it stays true to its goals.”
Mr. Taylor appeared with Mr. Altman at the Friday press conference, where the new board members were also announced. He said the review concluded that the previous board had removed Mr. Altman in good faith but had not foreseen the legal problems that would follow his termination.
Mr. Taylor said the review found that the board’s decision was not motivated by worries about the safety or security of the product. “It was just a lack of trust between Mr. Altman and the board,” he said.
Following Mr. Taylor’s prepared remarks, Mr. Altman praised the tenacity of the company and its partners during and after his dismissal. “I’m glad this whole thing is over,” he remarked.
OpenAI released a six-paragraph summary of the report. According to the summary, WilmerHale interviewed numerous people, including former OpenAI board members, and examined 30,000 documents.
It concluded that the prior board’s justification for Mr. Altman’s termination, including its public explanation that he was not “consistently candid in his communications with the board,” was accurate. It also stated that the board had not expected its actions to cause instability within the company.
According to the company, WilmerHale briefed Mr. Taylor and Lawrence H. Summers, the former Treasury secretary who was also named to the board in November, orally on its findings; the report itself will not be made public.
According to Mr. Taylor, OpenAI has implemented several measures to enhance the way the business is managed, such as new board governance standards, a conflict of interest policy, and a whistleblower hotline.
OpenAI’s summary of the report did not address the concerns about Mr. Altman that the company’s senior executives had raised with the previous board. Before his termination, Mira Murati, OpenAI’s chief technology officer, and Ilya Sutskever, its chief scientist, had raised concerns about Mr. Altman’s management style, citing what they described as his manipulative past.
Through an attorney, Dr. Sutskever has called the assertions “false.” In a Slack message on Thursday, Ms. Murati said that she had never contacted the board to voice those concerns, but that she had given the board the same feedback she had given Mr. Altman personally.
“I am glad the independent review is over and we can all go forward together,” Ms. Murati wrote on X, the platform that was formerly known as Twitter, on Friday.
The Securities and Exchange Commission is still investigating the board’s conduct and whether Mr. Altman misled investors. When such a report is finished, companies that hire outside law firms frequently share it with government investigators.
A spokesperson for OpenAI’s board declined to comment on whether the report would be sent to the S.E.C.
OpenAI, which was valued at over $80 billion in its most recent funding round, is at the forefront of generative A.I., technology that can produce text, images, and sounds. Many think generative A.I. could transform the technology industry as profoundly as the web browser did approximately thirty years ago. Some fear the technology could hurt society by contributing to the spread of false information online, eliminating large numbers of jobs, and possibly endangering humankind.
Mr. Altman came to embody the industry’s drive toward generative A.I. following the release of ChatGPT, OpenAI’s online chatbot, in late 2022. Approximately a year later, the board abruptly fired him, saying that it no longer trusted him to lead the business.
The board’s six remaining members consisted of three founders and three independent members. Dr. Sutskever, one of OpenAI’s founders, voted with the three independent members to remove Mr. Altman as chief executive, saying, without elaborating, that he was not “consistently candid in his communications.”
Another founder, Mr. Brockman, left the company in protest. A few days later, Dr. Sutskever said that he had changed his mind about dismissing Mr. Altman and essentially resigned from the board, leaving Mr. Altman opposed only by the three independent members.
OpenAI was established as a nonprofit organization in 2015. Three years later, Mr. Altman created a for-profit subsidiary and secured $1 billion from Microsoft. The nonprofit’s board, whose declared goal was to develop artificial intelligence for the good of humanity, kept total authority over the new division; Microsoft and other investors had no legal right to choose the company’s management.
In an attempt to calm the chaos and bring Mr. Altman back to the company, Mr. Taylor, a former Salesforce executive, was among those chosen to replace the two departing board members. Mr. Altman, however, did not get back on the board. Mr. Taylor and Mr. Summers were put in charge of managing the inquiry into Mr. Altman’s termination.
Dee Templeton, vice president of technology and research partnerships at Microsoft, a key partner of OpenAI, holds a seat on the board as an observer. Microsoft declined to comment on Friday about the board and the report.
Corporate governance experts had criticized the new board for its lack of diversity. In November, Mr. Taylor told The Times that he would appoint “qualified, diverse candidates” to the board, candidates who represented “the fullness of what this mission represents, which is going to span technology, A.I. safety policy.”