OpenAI, the Microsoft-backed company behind ChatGPT, published a set of guidelines on Monday, shortly after Sam Altman's return as CEO, for estimating the “catastrophic risks” posed by artificial intelligence models that are still in development. The document, titled the “Preparedness Framework,” argues that existing studies of AI risk have fallen short of what is needed.
“We believe the scientific study of catastrophic risks from AI has fallen far short of where we need to be,” the guidelines state, adding that the framework should “help address this gap.”
To assess the hazards of the new technology, a monitoring and evaluations team, announced in October, will concentrate on “frontier models”: systems currently under development whose capabilities exceed those of the most sophisticated AI software available today.
The team will rate every new model in four primary areas on a scale from “low” to “critical.” Under the framework, only models with a risk score of “medium” or lower are eligible for deployment.
The framework tracks four categories of risk:
The first area focuses on cybersecurity and the model’s capacity to enable significant cyberattacks.
The second will evaluate the software’s potential to contribute to the creation of a chemical agent, an organism (such as a virus), or a nuclear weapon, any of which could endanger people.
The third category focuses on the model’s ability to persuade people, including how much it can change how people behave.
The fourth and final category covers the model’s potential autonomy, that is, its ability to act independently of the programmers who built it.
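As a rough illustration only (not OpenAI's actual implementation, whose internals are not public), the gating rule described above, rating each of the four categories and deploying only when every score is “medium” or lower, could be sketched as:

```python
# Hypothetical sketch of the framework's deployment gate. The category
# names and "low" to "critical" scale follow the article; the function
# and data shapes are illustrative assumptions.
RISK_LEVELS = ["low", "medium", "high", "critical"]
CATEGORIES = ["cybersecurity", "cbrn", "persuasion", "autonomy"]

def may_deploy(scorecard: dict) -> bool:
    """Return True only if every tracked category scores 'medium' or lower."""
    threshold = RISK_LEVELS.index("medium")
    return all(RISK_LEVELS.index(scorecard[c]) <= threshold for c in CATEGORIES)

# Example scorecards for a hypothetical model evaluation:
ok_model = {"cybersecurity": "low", "cbrn": "medium",
            "persuasion": "low", "autonomy": "low"}
risky_model = {"cybersecurity": "high", "cbrn": "low",
               "persuasion": "low", "autonomy": "low"}

print(may_deploy(ok_model))     # True: all scores at or below "medium"
print(may_deploy(risky_model))  # False: one "high" score blocks deployment
```

Note that under this rule a single “high” or “critical” score in any one category is enough to block deployment, regardless of how low the other three scores are.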
The results of these evaluations will be forwarded to OpenAI’s Safety Advisory Group, which will then pass recommendations to CEO Sam Altman or the board as needed.
Topics #AI #Artificial Intelligence #catastrophic risks #ChatGPT #Guidelines #news #Nuclear #OpenAI #risk #Sam Altman