
OpenAI Board Defends CEO Sam Altman Amid ‘Toxic Culture’ Claims


Just days after OpenAI announced the formation of its new Safety Committee, former board members Helen Toner and Tasha McCauley publicly accused CEO Sam Altman of prioritizing profits over responsible AI development, hiding key developments from the board, and fostering a toxic environment in the company.

But current OpenAI board members Bret Taylor and Larry Summers fired back today with a robust defense of Altman, countering the accusations and saying Toner and McCauley are trying to reopen a closed case. The argument unfolded in a pair of op-eds published in The Economist.

The former board members fired first, arguing that the OpenAI board was unable to rein in its chief executive.

“Last November, in an effort to salvage this self-regulatory structure, the OpenAI board dismissed its CEO,” Toner and McCauley—who played a role in Altman’s ouster last year—wrote on May 26. “In OpenAI’s specific case, given the board’s duty to provide independent oversight and protect the company’s public-interest mission, we stand by the board’s action.”

In their published response, Bret Taylor and Larry Summers—who joined the OpenAI board after Toner and McCauley departed—defended Altman, dismissing the claims and asserting his commitment to safety and governance.

“We do not accept the claims made by Ms. Toner and Ms. McCauley regarding events at OpenAI,” they wrote. “We regret that Ms. Toner continues to revisit issues that were thoroughly examined by the WilmerHale-led review rather than moving forward.”

While Toner and McCauley did not mention the company’s new Safety and Security Committee, their letter echoed concerns that OpenAI could not credibly police itself or its CEO.

“Based on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives,” they wrote. “We also feel that developments since he returned to the company—including his reinstatement to the board and the departure of senior safety-focused talent—bode ill for the OpenAI experiment in self-governance.”

The former board members said “long-standing patterns of behavior” by Altman left the company board unable to properly oversee “key decisions and internal safety protocols.” Altman’s current colleagues, however, pointed to the conclusions of an independent review of the conflict commissioned by the company.

“The review’s findings rejected the idea that any kind of AI safety concern necessitated Mr. Altman’s replacement,” they wrote. “In fact, WilmerHale found that the prior board’s decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.”

Perhaps more troubling, Toner and McCauley also accused Altman of fostering a poisonous company culture.

“Multiple senior leaders had privately shared grave concerns with the board,” they wrote, “saying they believed that Mr. Altman cultivated ‘a toxic culture of lying’ and engaged in ‘behavior [that] can be characterized as psychological abuse.’”

But Taylor and Summers disputed those claims, saying that Altman is held in high esteem by his employees.

“In six months of nearly daily contact with the company, we have found Mr. Altman highly forthcoming on all relevant issues and consistently collegial with his management team,” they said.

Taylor and Summers also said Altman was committed to working with the government to mitigate the risks of AI development.

The public back-and-forth comes amid a turbulent stretch for OpenAI that began with Altman’s short-lived ouster. Just this month, its former head of alignment joined rival company Anthropic after leveling similar accusations against Altman. OpenAI also had to walk back a voice model strikingly similar to that of actress Scarlett Johansson after failing to secure her consent. The company dismantled its superalignment team, and it was revealed that restrictive NDAs prevented former employees from criticizing the company.

OpenAI has also secured deals with the Department of Defense to use GPT technology for military applications. Major OpenAI investor Microsoft, meanwhile, has also reportedly made similar arrangements involving ChatGPT.

The claims shared by Toner and McCauley are consistent with statements from Jan Leike, the former OpenAI researcher who said upon leaving the company that “over the past years, safety culture and processes [at OpenAI] have taken a backseat to shiny products” and that his alignment team was “sailing against the wind.”

Taylor and Summers partially addressed these concerns in their column by citing the new safety committee and its responsibility “to make recommendations to the full board on matters pertaining to critical security and safety decisions for all OpenAI projects.”

Toner has recently escalated her claims regarding Altman’s lack of transparency.

“To give a sense of the sort of thing I’m talking about, when ChatGPT came out in November 2022, the board was not informed in advance,” she revealed on The TED AI Show podcast earlier this week. “We learned about ChatGPT on Twitter.”

She also said the OpenAI board didn’t know Altman owned the OpenAI Startup Fund, despite his claims that he had no financial stake in OpenAI. The fund invested millions raised from partners like Microsoft in other businesses, without the board’s knowledge. Altman’s ownership of the fund was terminated in April.

OpenAI did not respond to a request for comment from Decrypt.

Edited by Ryan Ozawa.




Source: https://decrypt.co/233372/openai-toxic-culture-former-current-board-members-argue
