OpenAI releases safety guidance: Board of directors has the power to prevent the CEO from releasing new models
On December 19th, according to official sources, OpenAI published a safety document called the "Preparedness Framework" on its official website, which describes its processes to "track, evaluate, forecast, and protect against catastrophic risks posed by increasingly powerful models."
OpenAI stated that it is currently using a beta version of the Preparedness Framework, and that a dedicated Preparedness team will work to keep frontier AI models safe. The team will continuously evaluate AI systems to gauge how they perform across four risk categories: cybersecurity; chemical, biological, radiological, and nuclear threats; persuasion; and model autonomy. OpenAI is monitoring for so-called "catastrophic" risks, which the framework defines as any risk that could result in hundreds of billions of dollars in economic damage or lead to the severe harm or death of many individuals.
According to the framework, company leadership decides whether to release new AI models based on these risk reports, but the board of directors has the power to overturn that decision.