OpenAI CEO Sam Altman was fired for “outright lying,” says former board member

A former OpenAI board member has explained why directors decided to fire CEO Sam Altman last November. Speaking on the TED AI Show podcast, AI researcher Helen Toner accused Altman of lying to and obstructing the OpenAI board, retaliating against those who criticized him, and creating a "toxic atmosphere."

"The [OpenAI] board is a nonprofit board that was set up expressly to ensure that the company's public-interest mission comes first – ahead of profits, investor interests, and other things," Toner said on the TED AI Show, which is hosted by Bilawal Sidhu. "But for years, Sam made it really difficult for the board to do that job by withholding information, misrepresenting things that were going on in the company, and in some cases outright lying to the board."

OpenAI fired Altman on November 17 of last year, a sudden move that surprised many inside and outside the company. According to Toner, the decision was not made lightly and involved weeks of intense discussion. The secrecy surrounding it was also intentional, she said.

"It was clear to all of us that once Sam had any inkling that we might do something that went against him, he would do everything in his power to undermine the board, to prevent us from even getting to the point of being able to fire him," Toner said. "So we were very careful, very judicious about who we told, which was basically no one in advance other than, of course, our legal team."

Unfortunately for Toner and the rest of the OpenAI board, their careful planning did not yield the desired result. While Altman was initially ousted, OpenAI quickly rehired him as CEO after days of outrage, accusations, and uncertainty. The company also installed an almost entirely new board, removing those who had tried to oust Altman.

Why did OpenAI’s board fire CEO Sam Altman?

Toner didn't talk much about the aftermath of this tumultuous period on the podcast. However, she did explain exactly why the OpenAI board concluded that Altman had to go.

Earlier this week, Toner and former board member Tasha McCauley published an op-ed in The Economist stating that they decided to oust Altman due to "longstanding patterns of behavior." Toner has now provided examples of that behavior in her interview with Sidhu – including a claim that OpenAI's own board was not told in advance that ChatGPT was being released, and only found out through social media.

"When ChatGPT came out [in] November 2022, the board was not informed in advance. We learned about ChatGPT on Twitter," Toner claimed. "Sam didn't inform the board that he owned the OpenAI Startup Fund, even though he consistently claimed to be an independent board member with no financial interest in the company. On numerous occasions he gave us inaccurate information about the small number of formal safety processes the company did have in place, meaning it was basically impossible for the board to know how well those safety processes were working or what might need to change."


Toner also accused Altman of deliberately targeting her after he objected to a research paper she co-authored. The paper, titled "Decoding Intentions: Artificial Intelligence and Costly Signals," discussed the dangers of AI and included an analysis of the safety measures at both OpenAI and its competitor Anthropic.

However, Altman reportedly found the academic work too critical of OpenAI and too complimentary of its rival. Toner told the TED AI Show that after the paper was published last October, Altman began spreading lies about her to other board members in an attempt to remove her. The alleged incident only further damaged the board's trust in him, she said, as directors were already seriously discussing firing Altman at the time.


"[F]or any individual case, Sam was always able to come up with some innocuous-sounding explanation of why it wasn't a big deal, or was misinterpreted, or whatever," Toner said. "But the end effect was that, after years of things like that, all four of us who fired him [OpenAI board members Toner, McCauley, Adam D'Angelo, and Ilya Sutskever] concluded that we just couldn't believe the things Sam was telling us.

"And that's a completely dysfunctional place to be in as a board, especially a board that's supposed to provide independent oversight of the company, not just, you know, help the CEO raise more money. Not trusting the word of the CEO, who is your main conduit to the company, your main source of information about the company, is just absolutely, absolutely impossible."

Toner said OpenAI's board had attempted to address these issues by implementing new policies and processes. However, other executives then reportedly began telling the board about their own negative experiences with Altman and the "toxic atmosphere" he had created. This included allegations of lying and manipulation, backed up by screenshots of conversations and other documentation.

"They used the phrase 'psychological abuse,' and told us they didn't think he was the right person to lead the company to [artificial general intelligence]. They told us they didn't believe he could or would change, and that there was no point in giving him feedback or trying to fix these issues," Toner said.

OpenAI CEO accused of retaliating against critics

Toner went on to address the loud outcry from OpenAI employees against Altman's firing. Many posted on social media in support of the ousted CEO, while more than 500 of the company's 700 employees said they would quit if he was not reinstated. According to Toner, employees were presented with a false dichotomy: unless Altman returned "immediately, without accountability, [and with a] brand new board of his choosing," OpenAI would be destroyed.

"I can see why not wanting the company to be destroyed made a lot of people fall in line, whether because in some cases they stood to make a lot of money from an upcoming tender offer, or just because they loved their team, they didn't want to lose their jobs, they cared about the work they were doing," Toner said. "And of course, a lot of people didn't want the company to fall apart, including us."

She also argued that fear of retribution for opposing Altman may have contributed to the support he received from OpenAI employees.

"They had seen him retaliate against people, retaliate against them, for past instances of criticism," Toner said. "They were really afraid of what might happen to them. So when some employees started saying, 'Wait, I don't want the company to fall apart, like, let's bring Sam back,' it was very difficult for the people who had had terrible experiences to say so, for fear that if Sam stayed in power, as he ultimately did, it would make their lives miserable."

Finally, Toner noted Altman's turbulent work history, which came to light after his failed ouster from OpenAI. Pointing to reports that Altman was pushed out of his earlier role at Y Combinator over allegedly self-serving behavior, Toner argued that OpenAI was far from the only company to have had these problems with him.

"And then at his job before that, which was his only other job in Silicon Valley, his startup Loopt, apparently the management team went to the board twice and asked the board to fire him for what they called 'deceptive and chaotic behavior,'" Toner continued.

"If you really look at his track record, it's not exactly glowing. This wasn't a problem specific to the personalities on the board, however much he would like to portray it that way."

Toner and McCauley are far from the only OpenAI alumni to express doubts about Altman's leadership. Lead safety researcher Jan Leike resigned earlier this month, citing disagreements with leadership over priorities and arguing that OpenAI should focus more on issues such as security, safety, and societal impact. (Chief scientist and former board member Sutskever also resigned, though he said he was leaving to work on a personal project.)

In response, Altman and president Greg Brockman defended OpenAI's approach to safety. The company also announced this week that Altman will lead its new internal safety and security team. Leike, meanwhile, has joined Anthropic.
