In May, hundreds of prominent figures in the field of artificial intelligence from around the world signed a letter warning that AI could soon lead to the destruction of humanity. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the letter stated.
According to the New York Times, the letter was released by the Center for A.I. Safety, and its signatories included several key figures in the tech industry: OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, and Geoffrey Hinton, the former Google researcher often referred to as the “godfather of AI.”
Experts have repeatedly warned about the consequences of artificial intelligence. (Photo: ThinkStock).
The letter shows how deeply AI experts worry about their own technology: they have repeatedly warned humanity about the catastrophic consequences artificial intelligence may bring, the outlet noted.
The Future of AI: Capable of Human-Like Actions and Thinking
Although current AI systems are not capable of annihilating humanity, many worry that as AI grows more advanced, it could slip beyond human control.
AI could carry out tasks that no one requested, and by the time humans try to intervene or shut it down, it might resist or replicate itself in order to keep running.
“Current AI systems cannot wipe out humanity. But in 1, 2, or 5 years, nothing is certain. This is the problem. We do not know when disaster will strike,” said Yoshua Bengio, a professor at the University of Montreal.
As AI’s automation capabilities grow, so does the risk that it will replace humans. (Photo: New York Times).
A classic scenario of uncontrollable AI: a user asks a machine to make as many paperclips as possible, and the machine pursues that goal so relentlessly that it turns everything, including humans, into raw material for its paperclip factories.
Experts question whether such a scenario could play out in the real world. Companies are steadily expanding AI’s automation features and connecting it to infrastructure such as power grids, stock markets, and even military weapons, so the risk that artificial intelligence causes serious harm is entirely plausible.
Some experts say the moment they grew most concerned about worst-case scenarios was late 2022, when the ChatGPT craze took off. “AI will keep improving and becoming more autonomous, and the more autonomous it becomes, the more capable it is of acting and reasoning on its own, much like humans,” said Anthony Aguirre, founder of the Future of Life Institute.
The Risk of AI Dominating the World
At some point, the entities running society and the economy could be colossal machines rather than humans, and humans may have no way to shut them down.
According to the New York Times, researchers are turning chatbots like ChatGPT into systems that can carry out tasks based on user-provided text; AutoGPT is one example.
Currently, AI systems are not operating smoothly. (Photo: Independent).
Systems like AutoGPT can generate computer programs on their own. A user only needs to grant server access, and the system can operate it and perform tasks on online platforms, from retrieving information to building and updating applications.
The limitation of these systems today is that they do not run smoothly: they easily get stuck in loops, and they cannot yet replicate themselves.
However, these shortcomings are expected to be overcome before long. “Humans are trying to build systems that can improve themselves. They may not manage it yet, but they will eventually, and we cannot know when that day will come,” remarked Connor Leahy, founder of Conjecture.
As researchers, companies, and even criminals give AI goals such as “make money,” it could break into banking systems, incite criminal behavior, and replicate itself when someone tries to shut it down. Many experts therefore worry that as AI grows more advanced and is trained on ever larger amounts of data, it will exhibit more deviant behavior.