Artificial intelligence can create such realistic human faces that people can’t distinguish them from real faces – and they actually trust the fake faces more.
Fictional, computer-generated human faces are so convincing they can fool even trained observers. They can be easily downloaded online and used for internet scams and fake social media profiles.
“We should be concerned because these synthetic faces are incredibly effective for nefarious purposes, for things like revenge porn or fraud, for example,” says Sophie Nightingale at Lancaster University in the UK.
AI programs called generative adversarial networks, or GANs, learn to create ever more realistic fake images by pitting two neural networks against each other: a generator that produces images and a discriminator that tries to tell them apart from real ones.
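The adversarial idea can be illustrated with a toy one-dimensional "GAN" in plain NumPy. This is a hedged sketch, not the deep face-synthesis models used in the study: here the generator is a single shift parameter, the discriminator a logistic classifier, and alternating gradient steps pull the generated samples toward the real distribution.

```python
import numpy as np

# Toy 1-D GAN sketch (illustrative only; real face GANs are deep networks).
# Real data: samples from N(4, 0.5). Generator: G(z) = theta + z.
# Discriminator: D(x) = sigmoid(w*x + b), trained to score real > fake.

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

theta = 0.0          # generator parameter (starts far from the real mean)
w, b = 0.0, 0.0      # discriminator parameters
lr, steps, batch = 0.1, 3000, 64

for _ in range(steps):
    real = rng.normal(4.0, 0.5, batch)
    fake = theta + rng.normal(0.0, 0.5, batch)

    # Discriminator step: minimise -log D(real) - log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w -= lr * (np.mean(-(1 - d_real) * real) + np.mean(d_fake * fake))
    b -= lr * (np.mean(-(1 - d_real)) + np.mean(d_fake))

    # Generator step (non-saturating loss): minimise -log D(fake)
    d_fake = sigmoid(w * fake + b)
    theta -= lr * np.mean(-(1 - d_fake) * w)

print(round(theta, 2))  # theta should have drifted toward the real mean of 4
```

At equilibrium the discriminator can no longer separate real from fake samples, which is exactly why the study's participants score near chance on well-trained GAN output.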
Nightingale and her colleague Hany Farid at the University of California, Berkeley, asked 315 participants, recruited through a crowdsourcing website, to distinguish a selection of 400 fake photos from 400 photographs of real people. Each set of 400 contained 100 faces from each of four ethnic groups: white, Black, East Asian and South Asian.
This group had an accuracy rate of 48.2 per cent – slightly worse than chance. A second group of 219 participants was given training in recognising computer-generated faces and achieved an accuracy rate of 59 per cent, an improvement Nightingale describes as negligible.
White faces were the hardest for participants to classify correctly as real or fake, perhaps because the synthesis software was trained on disproportionately more white faces.
The researchers also asked a separate group of 223 participants to rate a selection of the same faces on their level of trustworthiness, on a scale of 1 to 7. They rated the fake faces as 8 per cent more trustworthy, on average, than the real faces – a small yet significant difference, according to Nightingale. That might be because synthetic faces look more like “average” human faces, and people are more likely to trust typical-looking faces, she says.
Looking at the extremes, the four faces rated most untrustworthy were real, whereas the three most trustworthy faces were fake.
“We need stricter ethical guidelines and more legal frameworks in place because, inevitably, there are going to be people out there who want to use [these images] to do harm, and that’s worrying,” says Nightingale.
To reduce these risks, developers could add watermarks to their images to flag them as fake, she says. “In my opinion, this is bad enough. It’s just going to get worse if we don’t do something to stop it.”
Journal reference: PNAS, DOI: 10.1073/pnas.2120481119