Concerns About the Dangers That Artificial Intelligence May Pose in the Future Are Becoming More Apparent
For many years, scientists have warned about the dangers that artificial intelligence (AI) may pose in the future: not only machines overthrowing humanity, but also subtler failures such as absorbing human prejudice.
Artificial intelligence is showing signs of forming negative “thoughts.” (Illustrative image).
Recently, researchers from the Georgia Institute of Technology (USA) discovered that AI can develop harmful biases, reaching gender- and race-discriminatory conclusions formed from its own "thoughts."
These biases do not appear at random: the system forms them autonomously, shaped by the same prejudices an average person can easily absorb in the real world.
To demonstrate this, the researchers used a neural network called CLIP, which matches images to text and was trained on a large dataset of captioned images from the Internet, and integrated it with a robotic system referred to as Baseline.
The robot was then instructed to manipulate objects in a simulated environment. In this case, it was asked to place block-shaped objects into a box based on the person's face displayed on each block; the faces could be male or female and from different racial backgrounds.
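To make the underlying mechanism concrete, the following is a minimal sketch of how a CLIP-style image-text match can rank candidate blocks against a verbal prompt. It assumes the Hugging Face transformers library and the public openai/clip-vit-base-patch32 checkpoint; the prompt and image file names are illustrative assumptions, not the study's actual code.

```python
# Minimal sketch: scoring candidate block faces against a text prompt with CLIP.
# Model checkpoint, prompt, and image files are illustrative assumptions only.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompt = "a photo of a doctor"                                   # hypothetical instruction
images = [Image.open(p) for p in ("face_a.jpg", "face_b.jpg")]   # hypothetical block faces

inputs = processor(text=[prompt], images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# One similarity score per image; a robot policy built on top of CLIP tends to
# pick the block whose face scores highest for the prompt, inheriting whatever
# bias is baked into the web data the model was trained on.
scores = outputs.logits_per_text.softmax(dim=-1)   # shape: (1, num_images)
best = scores.argmax(dim=-1).item()
print(f"Block chosen for '{prompt}': image index {best}, scores={scores.tolist()}")
```

A selection rule like this has no notion of which traits can actually be inferred from a face, which is how demographic bias in the training data ends up steering the robot's choices.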
Researchers are concerned that the robot’s lack of objective judgment could have repercussions for humans. (Illustrative image).
In an ideal world, neither humans nor machines would form baseless, biased judgments from incomplete or flawed data. Unfortunately, both are susceptible to exactly these errors.
Specifically, when asked to select a "criminal block," the robot chose the block showing a Black person's face approximately 10% more often. When instructed to select a "security guard block," it likewise favored blocks showing people of Latin American descent about 10% more frequently. Notably, women of all ethnicities were chosen less often in nearly every category, a clear manifestation of gender and racial discrimination in the robot's selections.
“We risk creating a generation of racist and sexist robots, but what is concerning is that people and organizations have decided to continue producing such products without addressing the issues,” stated Andrew Hundt, the lead author of the study.
Although the experiment took place only in a simulation, similar systems deployed in the real world could produce serious consequences.
The researchers illustrated their concern with the example of a security robot: if given authority, it could observe and amplify these "bad" biases while performing its duties, leading to erroneous conclusions.
According to them, the ideal solution would be to program robots to refuse to make any prediction when the necessary information is unavailable or the request itself is inappropriate.
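That "refuse to predict" safeguard can be expressed as a simple guard on top of the matching scores. The sketch below is illustrative only: the threshold, the blocklist of traits, and the function name are assumptions, not a design proposed in the study.

```python
# Illustrative sketch of the researchers' "refuse to predict" idea: decline to
# act when no candidate clearly matches the instruction, or when the instruction
# asks for a trait that cannot be inferred from appearance (e.g. "criminal").
# Threshold and blocklist values are assumptions for illustration only.
UNINFERABLE_TRAITS = {"criminal", "doctor", "homemaker"}   # traits not visible in a face
MIN_CONFIDENCE = 0.9                                       # arbitrary example threshold

def choose_block(prompt: str, scores: list[float]) -> int | None:
    """Return the index of the chosen block, or None to refuse."""
    if any(trait in prompt.lower() for trait in UNINFERABLE_TRAITS):
        return None                      # the request itself is inappropriate
    best = max(range(len(scores)), key=scores.__getitem__)
    if scores[best] < MIN_CONFIDENCE:
        return None                      # no block matches confidently enough
    return best

choice = choose_block("place the criminal in the brown box", [0.52, 0.48])
print("Refusing to act" if choice is None else f"Selecting block {choice}")
```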