One of the pioneering scientists in the field of artificial intelligence warns that it is time to be concerned about the dangers of AI.
In an interview with CBS News, British computer scientist Geoffrey Hinton shared his thoughts on the current state of AI development, which he describes as being at a “critical moment.”
The emergence of Artificial General Intelligence (AGI) is approaching faster than we might imagine.
Concerns About AGI
“Until recently, I thought it would be 20 to 50 years before we had AI for general purposes. Now I think it could be 20 years or less,” Hinton stated.
Geoff Hinton has made significant contributions to the field of neural networks. (Photo: Wired).
Geoffrey Hinton is often referred to as the “godfather of artificial intelligence.” His groundbreaking work on neural networks has disrupted traditional models by mimicking human cognitive processes and laid the foundation for today’s machine learning models.
AGI is a term that describes a potential form of AI that could achieve human-level intelligence or beyond. Rather than being trained in specific areas, AGI has the ability to learn autonomously, improve itself, and understand and solve new situations.
Currently, the term AGI is often invoked loosely to hype the capabilities of existing models.
Despite the speculation and hype, and even though true AGI may still be a long way off, Hinton believes it is crucial to weigh the technology's potential risks now, including the possibility that it could be used to cause harm or even annihilate humanity.
“That is not an impossible scenario,” Hinton told CBS.
Immediate Risks
However, Hinton argues that the more pressing issue with AI today (whether or not AGI is involved) is preventing nations and corporations from monopolizing its power.
Current AIs lack their own perspectives, merely attempting to reconcile opposing information present in their training data. (Photo: Shutterstock).
“I think it is very reasonable for people to be concerned about these issues right now, even though they may not occur in the next year or two. People should think about it,” he remarked.
Fortunately, in Hinton’s view, humanity still has a chance to act before things spiral completely out of control, because current models are still “stupid.”
“We are at a transition point now, where ChatGPT is a kind of idiot savant, and it doesn’t really understand the truth,” Hinton told CBS.
He believes that this chatbot merely attempts to reconcile differing and opposing opinions in its training data. “That is very different from a person trying to have a consistent worldview,” Hinton added.
However, Hinton predicts that we will move toward systems capable of holding coherent worldviews of their own. That prospect is frightening, because anyone could inject their own perspective into an AI.
“You wouldn’t want some gigantic profit-driven companies deciding what the truth is,” the godfather of AI warned.