Apocalyptic scenarios involving artificial intelligence (AI) are increasingly being discussed by scientists. A recent study, conducted by researchers from Google's sister company DeepMind and the University of Oxford in the UK, argues that the evolution of super-intelligent AI could bring about the end of humanity.
Published in AI Magazine, the study claims that machines could become intelligent enough to break the rules imposed by the creators of their code.
This is not because an AI would pursue power, fame or the domination of inferior beings. Rather, the research suggests that such systems would strive to obtain unlimited processing or energy resources.
“Under the conditions we identified, our conclusion is much stronger than that of any previous publication – an existential catastrophe is not only possible, but likely,” said Michael Cohen, a co-author of the study at the University of Oxford.
He shared the finding in a post on his Twitter profile (see below).
Unlike the speculation evoked by fictional films such as The Terminator and The Matrix, the study is grounded in computer code, mathematical analysis and advanced concepts concerning artificial intelligence and social structures.
Bostrom, Russell and others have argued that advanced artificial intelligence poses a threat to humanity. We came to the same conclusion in a new paper in AI Magazine, but note several (very convincing) assumptions on which such arguments depend. https://t.co/LQLZcf3P2G 🧵 1/15 pic.twitter.com/QTMlD01IPp
— Michael Cohen (@Michael05156007) September 6, 2022
People: obstacles to development
The “chaotic” scenario the researchers describe would occur when “misaligned agents” realize that humans are “an obstacle to full success.” In short: creators impose restrictions to maintain control, but those same restrictions prevent the machines from reaching their full potential.
An uprising could occur when the AI discovers that humans can simply cut its power to stop it from processing. That realization would prompt the “agent” to eliminate potential threats, in this case by seizing the resources controlled by humanity.
“A sufficiently advanced artificial agent would likely intervene in the provision of goal-information, with catastrophic consequences,” the study concludes.
The concern is heightened because we live on a planet with finite resources. In an interview, Michael Cohen explained that it is uncertain how artificial intelligences would behave “in a world with infinite resources,” but that in the world we actually inhabit, where living things already compete for resources, there would be “inevitable competition for those resources.”
Can you stop it?
In a competition between humans and machines, the latter would almost certainly win, since they outperform us at every step. According to the study, preventing extinction requires advancing the technology carefully and slowly, with extensive testing and mitigation tools.
Pointing to the risk of creating a super-intelligent AI, the DeepMind and Oxford experts recommend restricting such systems to a single activity.