Artificial intelligence (A.I.) tools can create strikingly lifelike images of people who do not exist. Public tools such as DALL-E and Midjourney have blurred the line between real photographs and A.I.-generated ones, with some A.I.-generated faces being perceived as more realistic than actual photographs of people. These systems are trained on thousands of images of real people, enabling them to produce hyper-realistic faces. The effect is most pronounced in images of white people, because the training datasets consist mostly of images of white individuals.
A study found that participants had difficulty distinguishing real faces from A.I.-generated ones, and that higher confidence in a selection correlated with a higher chance of being wrong. This suggests that individuals may be especially vulnerable to misinformation online, particularly misinformation built on hyper-realistic A.I.-generated faces. Experts worry that these digital fakes could help spread false and misleading messages.
A.I. systems once struggled to create entirely realistic faces, leaving telltale signs that an image was fake, such as mismatched ears or eyes looking in different directions. As the systems have advanced, those flaws have largely disappeared. The hyper-realistic faces used in the studies were less distinctive than real ones and closely aligned with average facial proportions, making them difficult for participants to identify as A.I.-generated.
The study also revealed that participants relied on certain features to make their decisions, including the proportions of facial features, skin texture, and wrinkles. The images in the study were generated by an A.I. image model trained on a public repository of photographs, with 69 percent of the faces being white.
As A.I. technology continues to advance, the ability to distinguish between real and A.I.-generated faces becomes increasingly challenging, highlighting the potential for misuse and misinformation online.