A third of scientists working on AI believe it can cause global catastrophe
Editorial staff of the Technological Innovation website – 22.09.2022
[Image: War generated by artificial intelligence]
More than a third of scientists doing cutting-edge research in artificial intelligence (36%) believe that AI systems could cause a disaster for humanity as bad as an all-out nuclear war.
Such a tragic scenario could be brought about by programs that make decisions on their own, escape human control, and activate equipment and installations that are nowadays largely automated.
The finding comes from a survey conducted by artificial intelligence experts in which only other artificial intelligence experts were polled.
Julian Michael and his colleagues at the Center for Data Science at New York University, USA, surveyed 327 researchers drawn from the author lists of recently published scientific papers on natural language processing.
This field has made remarkable progress, from machine translation to programs that display creativity by writing fiction, and it is laying the groundwork for even more advanced forms of artificial intelligence.
And the poll may even underestimate how many researchers believe AI poses serious risks: some respondents said they would have agreed that AI poses serious risks under scenarios less extreme than a global nuclear war.
“Concerns raised elsewhere in the survey comments included the impacts of large-scale automation, mass surveillance and AI-guided weapons. But it is hard to say whether these were the overriding concerns when it came to disaster risk,” Michael said in an interview with New Scientist magazine.
Overall, 36% of all respondents consider a nuclear-level disaster caused by some form of artificial intelligence plausible within this century. Among female respondents alone, that figure rises to 46%.
Moving beyond disasters, 57% of respondents see the development of large AI models as “significant steps towards the development of artificial general intelligence”, that is, AI with human-like intellectual abilities. And 73% agreed with the statement that labor automation by AI could lead to revolutionary social change on the scale of the Industrial Revolution.

The risk of dystopia
The publication of this study, whose details are available at https://nlpsurvey.net/, has rekindled interest in earlier work, published last year, in which a team from the Max Planck Institute for Human Development in Germany showed that we would not be able to control superintelligent computers.
It is not just the computers, of course, but the AI programs running on them: connected to the internet, an AI could access all of humanity’s data, decide to optimize everything, and eventually replace all existing programs and take control of every machine online worldwide.
“A superintelligent machine controlling the world sounds like science fiction. But there are already machines that perform certain important tasks on their own without their programmers fully understanding how they learned to do what they do. The question, then, is whether this could at some point become uncontrollable and dangerous for humanity,” said Professor Manuel Cebrian, referring to the era of superintelligent machines, also known as the “technological singularity”.
As for whether AI programs would need control of weapons to produce a catastrophe, or even a social dystopia, it is worth remembering that military agencies are the largest funders of scientific research in nearly every country, that military equipment is almost entirely automated and centrally controlled, and that most such projects never become public because they are classified.
Article: What Do NLP Researchers Believe? Results of the NLP Community Metasurvey
Authors: Julian Michael, Ari Holtzman, Alicia Parrish, Aaron Mueller, Alex Wang, Angelica Chen, Divyam Madaan, Nikita Nangia, Richard Yuanzhe Pang, Jason Phang, Samuel R. Bowman
Journal: arXiv
Link: https://arxiv.org/pdf/2208.12852
