Former OpenAI Safety Researcher Says ‘Security Was Not Prioritized’

Former OpenAI safety researcher Leopold Aschenbrenner says that security practices at the company were “egregiously insufficient.” In a video interview with Dwarkesh Patel posted Tuesday, Aschenbrenner spoke of internal conflicts over priorities, suggesting a shift in focus towards rapid growth and deployment of AI models at the expense of safety.

He also said he was fired for putting his concerns in writing.

In a wide-ranging, four-hour conversation, Aschenbrenner told Patel that he penned an internal memo last year detailing his concerns and circulated it among reputable experts outside the company. However, after a major security incident occurred weeks later, he said he decided to share an updated memo with a couple of board members. He was fired shortly afterward.

“What might also be helpful context is the kinds of questions they asked me when they fired me… the questions were about my views on AI progress, on AGI, the appropriate level of security for AGI, whether the government should be involved in AGI, whether I and the superalignment team were loyal to the company, and what I was up to during the OpenAI board events,” Aschenbrenner said.

AGI, or artificial general intelligence, refers to AI that meets or exceeds human intelligence in any field, regardless of how it was trained.

Loyalty to the company—or to Sam Altman—emerged as a key factor after Altman’s brief ouster: over 90% of OpenAI employees signed a letter threatening to quit in solidarity with him. They also popularized the slogan, “OpenAI is nothing without its people.”

“I didn’t sign the employee letter during the board events, despite pressure to do so,” Aschenbrenner recalled.

The superalignment team—led by Ilya Sutskever and Jan Leike—was in charge of building long-term safety practices to make sure AI remains aligned with human expectations. The departure of prominent members of that team, including Sutskever and Leike, brought added scrutiny. The whole team was subsequently dissolved, and a new safety team was announced… led by CEO Sam Altman, who is also a member of the OpenAI board to which it reports.

Aschenbrenner said OpenAI’s actions contradict its public statements about safety.

“Another example is when I raised security issues—they would tell me security is our number one priority,” he stated. “Invariably, when it came time to invest serious resources or make trade-offs to take basic measures, security was not prioritized.”

This is in line with statements from Leike, who said the team was “sailing against the wind” and that “safety culture and processes have taken a backseat to shiny products” under Altman’s leadership.

Aschenbrenner also expressed concerns about AGI development, stressing the importance of a cautious approach—particularly as many fear China is pushing hard to surpass the United States in AGI research.

China “is going to have an all-out effort to infiltrate American AI labs, billions of dollars, thousands of people… [they’re] going to try and outbuild us,” he said. “What will be at stake will not just be cool products, but whether liberal democracy survives.”

Just a few weeks ago, it was revealed that OpenAI required departing employees to sign restrictive non-disclosure and non-disparagement agreements that prevented them from speaking out about the company’s safety practices.

Aschenbrenner said he declined to sign such an NDA, despite being offered equity worth around $1 million.

In response to these growing concerns, nearly a dozen current and former OpenAI employees have signed an open letter demanding the right to call out company misdeeds without fear of retaliation.

The letter—endorsed by industry figures like Yoshua Bengio, Geoffrey Hinton, and Stuart Russell—emphasizes the need for AI companies to commit to transparency and accountability.

“So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public—yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues,” the letter reads. “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.

“Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry,” it continues. “We are not the first to encounter or speak about these issues.”

After news of the restrictive employment clauses spread, Sam Altman claimed he was unaware of the situation and assured the public his legal team was working to fix the issue.

“There was a provision about potential equity cancellation in our previous exit docs; although we never clawed anything back, it should never have been something we had in any documents or communication,” he tweeted. “This is on me and one of the few times I’ve been genuinely embarrassed running OpenAI; I did not know this was happening and I should have.”

“in regards to recent stuff about how openai handles equity: we have never clawed back anyone’s vested equity, nor will we do that if people do not sign a separation agreement (or don’t agree to a non-disparagement agreement). vested equity is vested equity, full stop. there was…” — Sam Altman (@sama) May 18, 2024

OpenAI says it has since released all employees from the contentious non-disparagement agreements and removed the clause from its departure paperwork.

OpenAI did not respond to a request for comment from Decrypt.

Source: https://decrypt.co/234079/openai-safety-security-china-leopold-aschenbrenner
