The proliferation of artificial intelligence (AI) tools is fueling an overwhelming spread of fake images on social media, raising the risk of misinformation. Here are some expert tips for telling which images are real and which are fake.
Widespread Fake Images of Celebrities
Creating images that look astonishingly real but are actually fake has never been easier. Anyone with an internet connection and access to AI tools can generate realistic-looking photos within seconds and then share them on social media at lightning speed.
AI-generated fake image depicting Elon Musk “dating” GM CEO Mary Barra.
Recently, many such images have gone viral: fabricated arrests of former U.S. President Donald Trump and Russian President Vladimir Putin, for example, or billionaire Elon Musk “dating” Mary Barra, the CEO of General Motors (GM).
The issue is that these AI-generated images depict events that never occurred. According to experts, while some of these images may be amusing and obviously unrealistic, they still pose a real danger of misinformation and even of spreading fake news.
Images of supposed arrests of politicians such as former U.S. President Donald Trump can often be quickly debunked by users who check reputable media sources. However, AI expert Henry Ajder told DW that other images are harder to recognize as fake, especially those involving less famous individuals.
The Dangers of Fake Event Images
According to Ajder, it’s not only AI-generated images of people that can spread misinformation. He pointed to instances where users created images of events that never happened, such as a severe earthquake that supposedly shook the U.S. and Canada in 2001.
However, no such earthquake ever took place; the images shared on Reddit were entirely AI-generated. And according to Ajder, this is the crux of the problem: “If AI creates a landscape scene, it may be harder to detect,” he explains.
Nonetheless, AI tools still make mistakes, even as they develop rapidly. As of April 2023, programs like Midjourney, DALL-E, and DeepAI still produce glitches, particularly in images depicting humans.
Here are some tips for spotting AI-generated images, though experts warn that they only reflect the current state of the technology, as AI tools are evolving daily, even hourly:
Zoom In and Find the Source of the Image
Many AI-generated images appear real at first glance. This is why the first suggestion from experts is to examine the photo closely. To do this, find the highest-resolution version of the image available and then zoom in on the details.
Zooming in on the image will reveal inconsistencies and errors that may not be apparent at first glance.
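For readers comfortable with a bit of scripting, this zoom-and-inspect step can be automated. Below is a minimal sketch using Python's Pillow library; the file name and crop coordinates are placeholders for illustration.

```python
# Enlarge a suspicious region of an image for closer inspection.
from PIL import Image

image = Image.open("suspect_photo.jpg")  # placeholder file name

# Crop a region of interest, e.g. around the hands or eyes:
# (left, upper, right, lower) pixel coordinates.
region = image.crop((400, 300, 700, 600))

# Enlarge the crop 4x so small artifacts become visible.
zoomed = region.resize(
    (region.width * 4, region.height * 4),
    resample=Image.LANCZOS,
)
zoomed.save("suspect_photo_zoom.png")
zoomed.show()
```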
If you’re unsure whether an image is real or AI-generated, try to find its source. You can often gather information about where the image was first posted by reading comments from other users below the image.
Alternatively, you can perform a reverse image search. To do this, upload the image to tools like Google Reverse Image Search, TinEye, or Yandex, and you may find its original source.
The results of these searches can also provide links to fact-checking performed by reputable media outlets that offer additional context.
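If the suspect image is already hosted at a public URL, these reverse searches can also be opened straight from a script. The sketch below is one way to do this; the query-URL patterns are assumptions based on how these services accepted URL parameters at the time of writing and may change.

```python
# Open reverse image searches in the default browser for an image
# that is already hosted at a public URL.
import webbrowser
from urllib.parse import quote

image_url = "https://example.com/suspect_photo.jpg"  # placeholder

# Assumed URL-parameter patterns; these services may change them.
search_engines = {
    "Google Lens": "https://lens.google.com/uploadbyurl?url=",
    "TinEye": "https://tineye.com/search?url=",
    "Yandex": "https://yandex.com/images/search?rpt=imageview&url=",
}

for name, base in search_engines.items():
    webbrowser.open(base + quote(image_url, safe=""))
```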
Pay Attention to Body Proportions and Display Errors
It is not uncommon for AI-generated images to show proportional discrepancies: hands may be too small or fingers too long, or the head and feet may not match the rest of the body.
Hands are now a major source of errors in AI image programs like Midjourney or DALL-E. People often have too many or too few fingers, as seen in the fake images of Pope Francis.
AI-generated fake image of Pope Francis.
Other common errors in AI-generated images include individuals having too many teeth, oddly distorted glasses, or ears with unrealistic shapes. Reflective surfaces, such as helmet visors, also pose challenges for AI programs.
However, AI expert Henry Ajder warns that newer versions of programs like Midjourney are getting better at generating hands, meaning users may not be able to rely on spotting these types of errors for long.
Does the Image Look Artificially Smooth?
Midjourney in particular generates many images that look beautiful yet unrealistically perfect, which should prompt viewers to question their authenticity. “The faces are too pristine, the fabrics are depicted too harmoniously,” says Andreas Dengel of the German Research Center for Artificial Intelligence (DFKI).
In many AI images, people’s skin looks smooth and flawless, and even their hair and teeth appear perfect, which is virtually impossible in real life.
Many images are also artistic, glossy, and sparkling, which even professional photographers find challenging to achieve in a studio setting.
AI tools seem to consistently design idealized images meant to be as perfect and pleasing as possible.
Check the Background and Context
The background of an image can often reveal that it has been faked. Objects such as streetlights may appear distorted, and in some cases AI programs clone people and objects and reuse them. It is also not uncommon for the backgrounds of AI images to be blurred.
But even this blurriness can contain errors. In one case depicting a supposedly angry Will Smith at the Oscars, the background was not merely out of focus but appeared artificially blurred.
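One rough way to put a number on background blur is the common variance-of-the-Laplacian sharpness measure, a standard image-processing heuristic rather than anything specific to AI detection. The sketch below (OpenCV, with placeholder file name and coordinates) compares a background patch against the main subject; an unusually uniform, low background score can hint at artificial blurring, though it proves nothing on its own.

```python
# Compare sharpness of a background patch and the main subject using
# the variance-of-Laplacian measure. Low variance means few sharp
# edges, i.e. blur. Human judgment is still needed to interpret it.
import cv2

image = cv2.imread("suspect_photo.jpg", cv2.IMREAD_GRAYSCALE)

background = image[0:200, 0:300]   # placeholder coordinates
subject = image[250:600, 350:700]  # placeholder coordinates

for name, region in [("background", background), ("subject", subject)]:
    score = cv2.Laplacian(region, cv2.CV_64F).var()
    print(f"{name}: sharpness score = {score:.1f}")
```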
Many AI-generated images can currently be detected with just a few careful checks. AI-detection models hosted on platforms like Hugging Face can also help you identify forgeries. However, as noted, the technology is improving, and errors in AI images are likely to become rarer and harder to detect.
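Such detectors can also be tried locally. Below is a minimal sketch using the Hugging Face transformers pipeline; the model named is one example of a community-provided detector, and its verdicts should be treated as hints, not proof.

```python
# Run a community AI-image detector from the Hugging Face Hub.
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="umm-maybe/AI-image-detector",  # example community model
)

# Accepts a local path or URL; returns labels with confidence scores.
results = detector("suspect_photo.jpg")  # placeholder file name
for result in results:
    print(f"{result['label']}: {result['score']:.2%}")
```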