OpenAI had employees sign employee agreements that required them to waive their federal rights to whistleblower compensation, the letter said. Those agreements also required OpenAI employees to obtain prior approval from the company if they wanted to release information to federal authorities. OpenAI did not make any exceptions in its non-disparagement clauses for employees to disclose securities violations to the SEC.
These overly broad agreements violated long-standing federal laws and regulations meant to protect whistleblowers who want to disclose damaging information about their companies anonymously and without fear of reprisal, the letter said.
“These contracts sent a message that ‘we don’t want … employees talking to federal regulators,’” said one of the whistleblowers, who asked not to be identified for fear of reprisal. “I don’t think AI companies can build technology that is safe and in the public interest if they protect themselves from scrutiny and dissent.”
In a statement, Hannah Wong, a spokesperson for OpenAI, said: “Our whistleblowing policy protects the rights of employees to make protected disclosures. Additionally, we believe that thorough debate about this technology is essential, and have already made significant changes to our exit process to remove non-disparagement language.”
The whistleblower letter comes amid concerns that OpenAI, which started as a nonprofit with an altruistic mission, is putting profit ahead of safety in creating its technology. The Post reported Friday that OpenAI rushed out its latest AI model that powers ChatGPT to meet a May release date set by company executives, despite concerns from employees that the company “failed” to meet its own security testing protocol that it said would protect its AI from catastrophic harm, such as teaching users to build biological weapons or helping hackers develop new types of cyberattacks. In a statement, OpenAI spokesperson Lindsey Held said the company “has not compromised on our security process, while we recognize the launch was stressful for our teams.”
Tech companies’ strict nondisclosure agreements have long rankled employees and regulators. During the #MeToo movement and national protests following the killing of George Floyd, employees warned that such legal agreements limited their ability to report sexual misconduct or racial discrimination. Regulators, meanwhile, worry that the terms silence tech workers who might alert them to misconduct in the opaque technology sector, largely due to allegations that companies’ algorithms promote content that undermines elections, public health and child safety.
The rapid advancement of artificial intelligence has heightened policymakers’ concerns about the tech industry’s power, sparking a flurry of calls for regulation. In the United States, AI companies largely operate in a legal vacuum, and policymakers say they can’t create effective new AI policies without the help of whistleblowers, who can help explain the potential threats posed by the rapidly changing technology.
“OpenAI’s policies and practices appear to have a chilling effect on whistleblowers’ right to come forward and receive appropriate compensation for their protected disclosures,” Sen. Chuck Grassley (R-Iowa) said in a statement to The Post. “To ensure the federal government stays ahead of the curve on artificial intelligence, OpenAI’s nondisclosure agreements must change.”
A copy of the letter, addressed to SEC Chairman Gary Gensler, was sent to Congress. The Post obtained the whistleblower letter from Grassley’s office.
The official complaints referred to in the letter were filed with the SEC in June. Stephen Kohn, an attorney representing the OpenAI whistleblowers, said the SEC has responded to the complaint.
It could not be determined whether the SEC has opened an investigation. The agency did not respond to a request for comment.
The SEC should take “swift and aggressive” steps to address these illegal agreements, the letter said, because they could be relevant to the broader AI sector and could violate the White House’s October executive order requiring AI companies to develop the technology safely.
“At the heart of any such enforcement effort is the recognition that insiders … must be free to report concerns to federal authorities,” the letter said. “Employees are best positioned to detect and warn about the types of dangers referenced in the Executive Order, and are also best positioned to help ensure that AI benefits humanity, rather than having the opposite effect.”
Under the agreements, employees who reported violations of the law to federal authorities faced the threat of criminal prosecution under trade secret laws, Kohn said. Employees were ordered to keep company information confidential and threatened with “severe sanctions” without being told of their right to report such information to the government, he said.
“We’re just at the beginning in terms of AI oversight,” Kohn said. “We need employees to step up and we need OpenAI to be open.”
The SEC should require OpenAI to produce every employment agreement, severance agreement and investor agreement with nondisclosure clauses to ensure it doesn’t violate federal law, the letter said. Federal regulators should require OpenAI to notify all former and current employees of the company’s violations and also inform them that they have the right to confidentially and anonymously report violations of law to the SEC. The SEC should impose fines on OpenAI for “each improper agreement” under securities law and order OpenAI to remedy the “chilling effect” of its past practices, the whistleblower letter said.
Several tech workers, including Facebook whistleblower Frances Haugen, have filed complaints with the SEC, which established a whistleblower program after the 2008 financial crisis.
It’s been a long battle to fight back against Silicon Valley’s use of NDAs to “monopolize information,” said Chris Baker, a San Francisco attorney. He won a $27 million settlement in December for Google employees over claims that the tech giant used onerous nondisclosure agreements to block whistleblowing and other protected activities. Now tech companies are increasingly fighting back with clever ways to discourage speech, he said.
“Employers have learned that the cost of a breach can sometimes be much higher than the cost of a lawsuit, so they are willing to take the risk,” Baker said.