ChatGPT creator forms an “independent” safety board with the power to pause AI models
But just how independent is it really?
OpenAI is transforming its Safety and Security Committee into an independent “Board oversight committee” with the authority to delay model launches over safety concerns, according to a blog post from the company. The change follows a recent 90-day review of OpenAI’s safety and security practices, during which the committee recommended establishing the independent board.

The committee, chaired by Zico Kolter and including Adam D’Angelo, Paul Nakasone, and Nicole Seligman, will be briefed by company leadership on safety evaluations for major model launches.
Together with the full board, it will oversee those launches and can delay a release until safety concerns are addressed. OpenAI’s entire board of directors will also receive regular briefings on safety and security matters.

However, since the members of OpenAI’s safety committee also sit on the broader board of directors, it remains unclear how independent the committee actually is, or how that independence is structured.
Meanwhile, OpenAI says it is working with external organizations, such as Los Alamos National Laboratory, one of the top national labs in the US, to explore how AI can be used safely in lab settings to advance bioscientific research.
It has also recently struck agreements with the AI Safety Institutes in the US and UK to collaborate on researching emerging AI safety risks and establishing standards for trustworthy AI.
We will explore more opportunities for independent testing of our systems and will lead the push for industry-wide safety standards. For example, we’re already developing new collaborations with third-party safety organizations and non-governmental labs for independent model safety assessments. We are also working with government agencies to advance the science of AI safety.
– OpenAI, September 2024
I believe strict regulation is essential for AI, especially around how companies train their models, so it is encouraging to see safety boards becoming more common. Meta, for example, has its own Oversight Board, which reviews content moderation decisions and issues rulings that Meta is bound to follow. Still, I think safety boards would be more credible if they were made up of people with no ties to the companies they oversee. What’s your take on this?