In recent years, artificial intelligence (AI) has become a hot topic, capturing the attention of both the scientific community and the public. Stories about AI becoming intelligent beyond our control, even posing a threat to the survival of humanity, have increasingly appeared in the media. However, do these concerns truly reflect the nature of artificial intelligence?
The Human Obsession with Artificial Intelligence
Ever since the first computers emerged in the 1940s, people have worried about what these machines might be capable of. A classic example is the 1970 science fiction film “Colossus: The Forbin Project,” which tells the story of a supercomputer that controls all of America’s nuclear weapons and gradually conquers the world. The idea of a powerful, uncontrollable AI has inspired countless works of fiction and preoccupied many scientists.
For more than half a century, experts have repeatedly predicted that computers would achieve human-level intelligence within five years and far surpass it within ten. The reality is that, despite significant advances, artificial intelligence has never reached that level. Although AI has existed since the 1960s, it has only recently become popular thanks to advances in language- and image-processing systems. But are these systems really as frightening as we think?
New Research: AI is Not an Imminent Threat
A recent study from the University of Bath and TU Darmstadt, presented at the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), has revealed noteworthy findings about the capabilities of large language models (LLMs). According to this research, LLMs, a popular form of artificial intelligence, may actually be more controllable, more predictable, and safer than previously feared.
Dr. Harish Tayyar Madabushi, a computer scientist at the University of Bath, argues that narratives of AI posing an existential threat have hindered the development and adoption of the technology, and that fears of LLMs autonomously acquiring new capabilities without human intervention are unfounded. The research indicates that LLMs perform well at tasks that follow explicit instructions but lack the ability to learn or develop new skills on their own.
The study also notes that while LLMs can exhibit some surprising behaviors, all of them can be traced back to how the models were programmed and trained. The idea of AI spontaneously evolving into a dangerous entity is therefore baseless.
Artificial intelligence has existed since at least the 1960s and has been applied in many fields for decades. We tend to regard the technology as “new” only because language- and image-processing AI systems have recently become widely popular. According to the new research, however, AI may not be the terrifying imminent threat many still believe it to be: LLMs can only follow instructions, cannot autonomously develop new skills, and are essentially “controllable, predictable, and safe.”
The Real Danger Lies with Humans, Not AI
Nonetheless, this does not mean that artificial intelligence is completely harmless. The research team from the University of Bath and TU Darmstadt warns that AI can still pose significant concerns. Current AI systems can be used to manipulate information, generate fake news, and serve other malicious purposes. This risk lies not within artificial intelligence itself but with the people who program and control it.
It is crucial that we adopt a cautious and responsible approach to the development and application of AI. Instead of fearing that machines will become adversaries to humanity, we need to pay attention to the individuals behind these systems. It is humans who ultimately determine whether AI will become a useful tool or a potential threat to society.
AI is not an independent conscious entity; it is merely a tool created by humans. The real threat comes from how we use this tool.
Artificial intelligence, and large language models in particular, is not the imminent threat many people fear. These systems are controllable and predictable; they lack the capacity to develop new skills independently or spiral dangerously out of control. However, this does not mean we can be complacent. The real risk lies with humans, those who program and control AI systems. Continued research, monitoring, and responsible application of AI therefore remain essential to ensure that this technology serves the best interests of humanity.