NJ SR121
Urges generative artificial intelligence companies to make voluntary commitments regarding employee whistleblower protections.



Introduced: 01/14/2025
In Committee: 01/14/2025
Dead: 01/12/2026

Introduced Session

2024-2025 Regular Session

Bill Summary

This resolution urges generative artificial intelligence companies to make voluntary commitments to protect employees who raise risk-related concerns. Artificial intelligence technology has the potential to provide unprecedented benefits to humanity, but it also poses serious risks, such as perpetuating inequalities, enabling manipulation and misinformation, and the potential loss of control of autonomous artificial intelligence systems. Many risks associated with artificial intelligence are currently unregulated, and existing whistleblower protections are inadequate to shield employees from retaliation for publicly disclosing concerns. In the absence of government oversight, employees of artificial intelligence companies are among the few individuals capable of holding the companies accountable. However, broad confidentiality agreements prevent employees from voicing concerns to anyone beyond the company itself, even when the company fails to address the issues. Additionally, independent evaluation is critical to identifying the risks posed by artificial intelligence systems, but it is stymied by the lack of both legal and technical safe harbor: legal safe harbor protects evaluators from legal reprisal, and technical safe harbor protects evaluators from account suspension or termination.
The resolution urges generative artificial intelligence companies to make voluntary commitments to mitigate the risks of artificial intelligence by adhering to the following principles: (1) The company will not enter into or enforce any agreement prohibiting disparagement or criticism of the company for risk-related concerns, or retaliate for risk-related criticism by hindering any vested economic benefit; (2) The company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company board, to regulators, and to an appropriate independent organization with relevant expertise; (3) The company will support a culture of open criticism and allow current and former employees to raise risk-related concerns about its technologies to the public, to the company board, to regulators, or to an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected; (4) The company will provide legal and technical safe harbor for good faith system evaluation, ensuring safety from legal reprisal, account suspension, or termination, while maintaining the protection of trade secrets and other intellectual property; safe harbor should enable independent identification of all forms of risks posed by artificial intelligence systems; (5) The company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed; and (6) Current and former employees should retain the freedom to publicly report concerns until the creation of an adequate process for anonymously raising concerns, while avoiding unnecessary release of confidential information.

AI Summary

This resolution urges generative artificial intelligence (AI) companies to voluntarily adopt six key principles to protect employees who want to raise concerns about potential risks associated with AI technology. The principles include: (1) not prohibiting or retaliating against employee criticism of the company over risk-related concerns, (2) establishing an anonymous process for employees to report risks to company boards, regulators, and independent experts, (3) supporting a culture of open criticism while protecting trade secrets, (4) providing legal and technical "safe harbor" that protects those conducting good faith evaluations of AI systems from legal reprisal or account suspension or termination, (5) not retaliating against employees who publicly share confidential risk-related information after internal processes fail, and (6) ensuring employees can publicly report concerns until an adequate anonymous reporting mechanism is created. The resolution acknowledges that AI technology offers significant potential benefits but also poses serious risks, such as perpetuating inequalities and spreading misinformation, and recognizes that employees are often best positioned to identify and expose these risks. The resolution calls for copies to be sent to leading AI companies, such as OpenAI, Anthropic, and Google, encouraging them to voluntarily adopt these employee protection principles.

Committee Categories

Labor and Employment

Sponsors (2)

Last Action

Introduced in the Senate, Referred to Senate Labor Committee (on 01/14/2025)
