AI Might Be the Reason We Haven’t Contacted Extraterrestrial Civilizations Yet
Translated from an article by Professor Michael Garrett of the Department of Physics and Astronomy at the University of Manchester, where he is Director of the Jodrell Bank Centre for Astrophysics.
In recent years, artificial intelligence has developed at an impressive pace. Some scientists are now looking ahead to Artificial Superintelligence (ASI), a form of AI that would not only far exceed human intelligence but would also learn at a speed no longer limited by the cognitive capacities of Homo sapiens.
But what if this milestone is not simply a great achievement for humanity? What if AI represents a "bottleneck" in the evolution of all civilizations, one so severe that no species can thrive beyond it?
This is the central idea of a new paper published in Acta Astronautica: could AI be the "great filter", a threshold so hard to cross that most life never becomes a space-faring civilization?
The concept might also explain why the search for extraterrestrial intelligence has so far come up empty: we have yet to find any evidence of a technologically advanced civilization.
Radio telescopes continue their search for extraterrestrial life – (Illustrative image).
The "Great Filter" hypothesis is one of several proposed answers to the Fermi Paradox, which asks why, in a universe so vast and ancient that it could host billions of life-supporting planets, we have yet to detect any sign of other civilizations.
The hypothesis suggests that somewhere along the evolutionary path of any intelligent species, human or otherwise, there is a barrier that prevents civilizations from advancing further. That barrier could be a natural catastrophe that wipes out life on a massive scale, or it could be that intelligent life is inherently self-destructive.
Professor Garrett argues that superintelligent AI could be such a barrier. Given its rapid development, AI could soon reach the level of ASI and stall a civilization's progress; in our case, it could prevent humanity from ever becoming an interplanetary species.
The pace of AI development may outstrip both human control and the rate at which we explore our solar system. The challenge posed by a superintelligent system lies in its ability to operate autonomously, amplify its own capabilities, and improve itself, potentially at a rate far exceeding the technological acceleration humanity has achieved since the Industrial Revolution.
In its perfected form, AI will be able to self-correct – (Illustrative image).
The risk of a misaligned development trajectory is growing, and it could endanger the very existence of human civilization before we have a chance to become an interstellar species. For example, if nations come to depend on autonomous AI systems and set them against one another, military capabilities could be turned to unprecedented destruction. That could mean the end of civilization, and with it the end of the AI systems themselves.
In this scenario, Professor Garrett estimates that a technological civilization might last less than 100 years: roughly the time between when we first became able to receive and broadcast signals between the stars (1960) and the projected emergence of ASI (2040). Against the age of the universe, this is a vanishingly short interval.
Feeding this figure into the Drake Equation, a formula used to estimate the number of communicating civilizations in the Milky Way, suggests that only a handful may exist at any given time. Moreover, their technology may be as "primitive" as ours, making it difficult for any two civilizations to detect each other.
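To see why a short lifetime matters, consider the equation itself (the parameter values below are purely illustrative assumptions for this sketch; they are not taken from Garrett's paper):

N = R_* \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L

Here R_* is the galaxy's star-formation rate, f_p the fraction of stars with planets, n_e the number of habitable planets per such star, f_l, f_i and f_c the fractions of those that go on to develop life, intelligence, and detectable technology, and L the number of years a civilization remains detectable. Even with generous assumed values such as R_* = 1 star per year, f_p = 1, n_e = 0.2, f_l = f_i = 1 and f_c = 0.1, a lifetime of L = 100 years gives

N = 1 \times 1 \times 0.2 \times 1 \times 1 \times 0.1 \times 100 = 2

That is, only about two detectable civilizations in the entire galaxy at any moment.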
A Wake-Up Call
The research is not just a warning; it is a call for humanity to put proper regulation of AI in place, including the governance of autonomous military systems.
This is not merely about preventing AI from being put to harmful use on Earth; it is about ensuring that AI's development stays aligned with the long-term survival of our species. The study also argues that we should devote more resources to becoming an interplanetary society as soon as possible, a goal that lay dormant after the golden age of the Apollo missions and has only recently been revived by private companies.
The dream of becoming an interplanetary species has new motivation: the fear of being destroyed by AI – (Illustrative image).
As historian Yuval Noah Harari has pointed out, nothing in history prepares us for the arrival of a superintelligent, nonhuman entity on our planet. Recently, concern over the consequences of autonomous AI decision-making has prompted leading experts in the field to call for a pause in AI development until responsible regulation can be researched and put in place.
Yet even if every nation agreed on the direction of AI development, regulators would struggle to rein in organizations that operate outside the law.
The integration of autonomous AI into military systems is especially worrying. It brings us closer to a grim scenario in which autonomous weapons operate beyond ethical boundaries and circumvent international law. In such a world, ceding power to AI systems in pursuit of a tactical advantage could set off a chain of rapidly escalating, massively destructive events, and civilization on our planet could be annihilated in an instant.
Humanity stands at a crucial moment in technological development. The decisions made today could determine whether tomorrow we become an interstellar civilization or collapse under the challenges posed by our own creations.
One could view the search for extraterrestrial intelligence as a mirror held up to our own future, offering a new perspective on where AI may take us. It is up to us to ensure that as we reach for the stars, we become not a cautionary tale for other civilizations but a source of hope: a species that learned to thrive alongside its own creation.