Leading artificial intelligence executives, including OpenAI CEO Sam Altman, have signed a single-sentence warning that "mitigating the risk of extinction from AI should be a global priority," ranking it alongside nuclear war and pandemics.
A collection of AI researchers, executives, and other public figures put their names to the statement, published on Tuesday by the Center for AI Safety (CAIS) umbrella group.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement said, in its entirety.
The preamble to the statement was more than twice as long as the main event. It noted that people were "increasingly discussing a broad spectrum of important and urgent risks from AI."
"Even so, it can be difficult to voice concerns about some of AI's most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion," the group said. "It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI's most severe risks seriously."