As AI grows increasingly sophisticated, distinguishing real photos from AI-generated images is becoming harder than ever.
A new study from Microsoft's AI for Good Lab finds that humans correctly identify AI-generated photos only 62% of the time, only slightly better than the 50% expected from random guessing, like flipping a coin.
In other words, we have almost no real advantage when trying to tell genuine photos from AI-generated fakes.
The data comes from the online game "Real or Not Quiz", in which more than 12,500 participants worldwide were asked to judge which photos were real and which were AI-generated, with a total of about 287,000 images analyzed.
The images in the game were produced by today's most advanced image-generation tools, such as Midjourney, DALL·E, and Stable Diffusion, to make the challenge as demanding as possible.
The results show that people succeed about 65% of the time when distinguishing real portraits from AI-generated ones, but that rate drops to just 59% for natural or urban landscape images.
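For illustration only, here is a minimal Python sketch of how accuracy figures like these reduce to simple proportions of correct judgments per image category; the response records and field names below are hypothetical, not the study's data.

```python
# Illustrative sketch: per-category accuracy as the share of correct judgments.
# The sample records are made up; the real study aggregated ~287,000 judgments.
from collections import defaultdict

# Each record: (category, participant_said_real, image_is_real)
responses = [
    ("portrait", True, True),     # correct
    ("portrait", False, True),    # wrong
    ("landscape", True, False),   # wrong
    ("landscape", False, False),  # correct
]

totals = defaultdict(int)
correct = defaultdict(int)
for category, said_real, is_real in responses:
    totals[category] += 1
    correct[category] += (said_real == is_real)

for category in totals:
    print(f"{category}: {correct[category] / totals[category]:.0%} accuracy")

overall = sum(correct.values()) / sum(totals.values())
print(f"overall: {overall:.0%} accuracy")
```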
The likely explanation is that the human brain is especially good at recognizing faces, while it is more easily fooled by landscape images that lack distinctive features.
Researchers note that AI now produces images so realistic that they no longer carry a telltale style of their own, making them even harder to spot.
This is consistent with a recent study from the University of Surrey showing that the human brain is strongly attuned to faces, which helps explain why people are recognized more reliably than landscapes.
Although they hold some advantages over humans, current AI image-detection tools are still imperfect: they are more accurate overall, but they also make mistakes in many cases.
Microsoft's research team is therefore developing a new AI image-detection tool with accuracy of up to 95%, while emphasizing the importance of labeling AI-generated content and embedding subtle, transparent watermarks to curb the spread of misinformation from AI images.
As fake images grow more common, vigilance and robust detection technology may be the last line of defense protecting public trust in a fast-changing digital world.