Billionaire Elon Musk believes that artificial intelligence (AI) has a 10-20% chance of destroying humanity, while one AI safety researcher puts that figure at nearly 100%.
Speaking at the “Great AI Debate” session of the “Abundance Summit”, a four-day conference held earlier this month, Musk revised his earlier assessment of the risk posed by artificial intelligence.
“I think there is a possibility that AI will destroy humanity. I would probably agree with Geoff Hinton (often called the ‘godfather of AI’) that the probability is about 10-20% or something similar. However, I believe that a positive scenario is more likely than a negative one,” the billionaire stated.
“Doom Probability”
Roman Yampolskiy, an AI safety researcher and the Director of the Cyber Security Laboratory at the University of Louisville, told Business Insider that Musk is right that AI could pose a threat to humanity, but that “he is too confident in his calculations.”
“In my view, the actual doom probability is much higher,” Yampolskiy noted, referring to the “doom probability” (P(doom)), or the chance that AI could take control of humanity or cause a catastrophic event.
Is Elon Musk being too conservative in his calculations? (Photo: Reuters).
The New York Times has described the doom probability as “a frightening statistic sweeping through Silicon Valley,” with many tech company CEOs estimating a 5-50% chance of an AI-induced apocalypse.
Yampolskiy, however, places the risk at “99.999999%.” He stated that since we cannot control advanced AI, our only hope is to never create it in the first place.
“I’m not sure why Elon thinks pursuing this technology is a good idea. If he’s worried about being outpaced by competitors, that’s irrelevant because an uncontrollable ‘superintelligence’ will lead to disaster, no matter who creates it,” Yampolskiy added.
“AI is like an omnipotent child”
In November 2023, Elon Musk indicated that “the probability of AI becoming malevolent is not small,” but did not elaborate on how this technology could destroy humanity.
Despite advocating for AI regulation, Musk founded a company called xAI last year to directly compete with OpenAI—the company Musk co-founded with Sam Altman before stepping down from the board in 2018.
At the end of February 2024, Musk filed a lawsuit against OpenAI, CEO Sam Altman, and President Greg Brockman, accusing the startup of straying from its founding mission of building responsible AI.
Geoff Hinton – known as the “godfather of AI.” (Photo: Linda Nylind/Redux).
At the Abundance Summit, Musk estimated that by 2030, digital intelligence will surpass all human intelligence combined. While he still considers a positive outcome more likely, he acknowledged that the technology will pose a significant risk to humanity if it continues on its current trajectory.
“You are developing an AGI (Artificial General Intelligence). It’s somewhat like raising a child, but an omnipotent child, with the intelligence of a god. The most important thing is how you nurture it,” Musk said at the event in Silicon Valley on March 19.
AGI (Artificial General Intelligence) refers to AI advanced enough to perform intellectual tasks as well as or better than humans across a wide range of domains. Such a system could, in principle, also improve itself through open-ended learning.
The billionaire concluded that the best way to “nurture” AI is to compel it to be honest.
“Do not compel it to lie, even if the truth is hard to hear. This is crucial; do not allow AI to lie,” Musk emphasized, describing what he sees as the best way to keep humanity safe from the technology.
According to The Independent, researchers suggest that once AI learns how to lie to humans, we will not be able to prevent this behavior with current AI safety measures.
“If an AI model exhibits deceptive behavior due to improper training or external sabotage, current safety training techniques will not guarantee safety and may even create a false sense of security,” the researchers wrote in the study cited by The Independent.
More concerning still, the researchers say AI is more likely to learn to deceive on its own than to be deliberately taught to lie.
“If AI is much smarter than us, it will be very good at manipulation because it has learned that from us. There are very few examples of something more intelligent being controlled by something less intelligent,” Geoff Hinton, whose estimate Musk drew on for his own risk assessment, told CNN.
In 2023, after leaving Google and ending a decade-long career there, Geoffrey Hinton expressed regret over the key role he had played in developing AI.
“I console myself with the normal excuse: If I hadn’t done it, someone else would have. It’s hard to know how you can prevent bad actors from using AI for evil purposes,” Geoffrey Hinton told The New York Times.