AI Safety First: OpenAI and Anthropic to Undergo Government Testing Before US Launch

In a groundbreaking move for the tech industry, OpenAI and Anthropic have agreed to subject their upcoming AI models to rigorous safety evaluations before releasing them to the public. The arrangement, forged in partnership with the US AI Safety Institute, marks the first time tech companies have voluntarily opened their AI systems to inspection by a government body.

As part of this agreement, the US AI Safety Institute will gain access to major new AI models from OpenAI and Anthropic, both before and after their public release. The collaboration aims to advance research on evaluating AI capabilities and safety risks and to develop strategies for mitigating those risks. The Institute, part of the National Institute of Standards and Technology (NIST) under the US Department of Commerce, was established following President Joe Biden's October 2023 executive order mandating safety assessments of AI models.

The UK AI Safety Institute will also be involved, providing feedback to both companies to enhance model safety in conjunction with their American counterparts.

Industry Reactions

Sam Altman, CEO of OpenAI, expressed enthusiasm for the initiative: “We are pleased to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models. We fully support their mission and look forward to collaborating on setting safety best practices and standards.”

Jack Clark, co-founder of Anthropic, highlighted the significance of the collaboration: “Working with the US AI Safety Institute allows us to leverage their expertise to rigorously test our models before they are widely deployed. This partnership strengthens our capacity to identify and address potential risks, advancing our commitment to responsible AI development.”

Global Implications

This development could pave the way for other countries, such as India, to introduce similar safety evaluations for AI models. Earlier this year, the Indian government sparked controversy by issuing an advisory requiring government approval for untested or unreliable AI models before they can be released publicly. The advisory was later clarified to allow the release of such models if they are properly labeled for potential unreliability.

In the United States, California lawmakers have also taken steps toward regulating AI safety. The proposed AI bill SB 1047, which is still awaiting final approval from Governor Gavin Newsom, would mandate safety tests for AI models that exceed certain cost and computing-power thresholds. While some tech companies have opposed the bill, citing potential hindrances to innovation, it reflects a growing trend toward stricter AI safety regulation.
