The sci-fi film “Blade Runner” imagines a future in which real and fake are nearly impossible to tell apart: human-made “replicants” look exactly like human beings, and only an emotion test or a hidden serial number can reveal what they are.
That future may still be far off, but judging by faces alone, we already cannot tell real photos from fake ones.
AI “Fake Faces” Have Crossed the Uncanny Valley
Start with a set of pictures: are you confident you can spot the “fake face” at a glance? Which of the following faces is fake? (The answer is at the end of the article, but don’t worry, there are many more faces to come.)
A “fake face” is an AI-synthesized face rather than a real photo. If you find it hard to tell, don’t worry, you are not alone. Professor Hany Farid of the University of California, Berkeley has worked on AI image synthesis for many years, and a recent study published in the Proceedings of the National Academy of Sciences shows that AI-synthesized faces are indistinguishable from real ones, and even look more trustworthy than real people.
The result was unexpected. Dr. Sophie Nightingale, a co-author of the study, said the original goal was simply to find a way to gauge the credibility of AI faces by comparing them with real people. Farid believes AI image synthesis is developing very quickly, faster than traditional CG imaging.
We believe the uncanny valley for static faces has been crossed.
The uncanny valley is a psychological theory: people respond warmly to objects that resemble humans, but once the resemblance reaches a certain point (think zombies or dolls), the response flips to revulsion and unease. As the resemblance climbs further and approaches a real person, the emotional response turns positive again, and empathy may even kick in.
Judging from Farid’s experimental results, AI-synthesized faces have likely left the “walking dead” stage behind. How is such a realistic face synthesized? Generative adversarial networks (GANs) are currently the mainstream algorithm. The name sounds intimidating, but the logic is not complicated.
In short, a GAN pairs a “painter” with an “appraiser”. The painter tries to draw a picture that looks as much like a real person as possible and hands it to the appraiser for judgment. The appraiser, who has studied a large number of real photos and their facial features, decides whether the painting is genuine. When the painter can fool appraisers who have seen countless faces, a convincing AI-synthesized face photo is born. Through continuous learning, the appraiser’s accuracy improves and the painter’s skill grows; the two form an adversarial relationship that keeps raising the quality of the synthetic images until fake and real are indistinguishable.
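The painter-versus-appraiser loop above can be sketched in a few dozen lines. This is a deliberately tiny toy, not StyleGAN2: the “photos” are just numbers drawn from a normal distribution, the painter is a one-parameter-pair affine map, and the appraiser is a logistic classifier, with the gradients of the standard non-saturating GAN losses written out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Painter" (generator): turns noise z into a sample, g = a*z + b.
a, b = 1.0, 0.0
# "Appraiser" (discriminator): logistic classifier, D(x) = sigmoid(w*x + c).
w, c = 0.0, 0.0

lr, batch = 0.05, 128
for step in range(3000):
    real = rng.normal(4.0, 1.25, batch)  # "real photos": samples from N(4, 1.25)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b                     # the painter's current forgeries

    # Appraiser update: push D(real) toward 1 and D(fake) toward 0.
    p_real, p_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - p_real) * real + p_fake * fake)
    grad_c = np.mean(-(1 - p_real) + p_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Painter update: push D(fake) toward 1 (non-saturating generator loss).
    p_fake = sigmoid(w * fake + c)
    grad_g = -(1 - p_fake) * w           # d(loss)/d(fake sample)
    a -= lr * np.mean(grad_g * z)
    b -= lr * np.mean(grad_g)

# After training, the painter's output distribution drifts toward the real one.
fakes = a * rng.normal(0.0, 1.0, 10_000) + b
print(round(float(fakes.mean()), 2))     # close to the real mean of 4.0
```

The same adversarial pressure, scaled up from two scalars to millions of convolutional weights and from numbers to megapixel images, is what drives models like StyleGAN2.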
Farid’s experiment used Nvidia’s StyleGAN2 model. To study how believable the synthetic photos are, the researchers ran three experiments.
In the first, 315 participants were each asked to label 128 faces, drawn from a pool of 800, as real photos or AI-synthesized ones. Their average accuracy came in below chance, at just 48.2%. In the second, 219 new participants took the same test after training, and were told after each answer whether they had been correct.
The researchers note that accuracy improved in the second experiment, but only slightly above chance, reaching 59.0%. Farid and Nightingale were not surprised by the realism of the AI-synthesized photos; it was the result of the third test that caught them off guard.
In the third, 223 new participants rated the trustworthiness of the same batch of photos on a scale of 1 to 7. The AI-synthesized photos were rated 7.7% more trustworthy than the real ones, a small gap, but a statistically significant one.
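To see why 48.2% counts as “no better than chance” while 59.0% counts as a real improvement, one can run a simple one-proportion z-test on the article’s own numbers. The framing below is an illustrative simplification: it treats a single participant’s 128 judgments as the sample, and uses the conventional |z| > 1.96 cutoff for significance at the 5% level.

```python
from math import sqrt

def z_score(p_hat, p0, n):
    """One-proportion z-test: how many standard errors the observed
    rate p_hat sits away from the null rate p0 over n trials."""
    se = sqrt(p0 * (1 - p0) / n)
    return (p_hat - p0) / se

# Experiment 1: 48.2% accuracy over 128 judgments vs. 50% chance.
z1 = round(z_score(0.482, 0.5, 128), 2)
print(z1)  # -0.41: well within the noise of coin-flipping

# Experiment 2: 59.0% accuracy over the same 128 judgments.
z2 = round(z_score(0.590, 0.5, 128), 2)
print(z2)  # 2.04: just clears the conventional 1.96 threshold
```

The published study pools far more judgments than this, which is also why even the small 7.7% trustworthiness gap in the third experiment can reach statistical significance: with enough observations, the standard error shrinks until small real effects stand out from noise.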
The researchers believe that believable AI-synthesized photos are likely to be exploited by criminals, used for fraud, or deployed to sow confusion on social networks; this deserves society’s attention, and the development of the technology needs restraint. Which raises the question: if AI-synthesized faces carry such risks, why do people keep investing in the research?
AI faces are impressive, but a double-edged sword
At the 2019 E3 video game expo, Keanu Reeves made a surprise appearance at the “Cyberpunk 2077” event, instantly electrifying the audience.
In the virtual world, lifelike faces give players a strong sense of immersion and elevate the game itself. Using real face models, rather than hand-tuned digital faces, has become an increasingly common way for game studios to shape their characters. But real face models mean high portrait licensing fees and motion-capture costs that small studios cannot afford. This is where copyright-free AI-synthesized faces come in handy: virtual characters played by people who do not exist. It sounds almost too reasonable.
Generated Photos, a free AI-synthesized photo project, has partnered with the animation software company Reallusion to turn AI-synthesized portraits into 3D characters for animation, games, or advertising. Developers can freely choose race, age, and gender, with no copyright issues. Imagine NPCs in The Sims or GTA with realistic faces; the sense of immersion and presence would improve dramatically.
Beyond games, customer service software also needs large numbers of realistic avatars to talk to customers. Replacing real photos with AI avatars avoids copyright disputes and protects personal privacy. But while AI-synthesized photos have legitimate uses, they also erode the authenticity of photos online. After all, no one wants to woo a person who does not exist on a dating app.
Farid believes the only way to solve this problem is to attach an “authenticity” certification to every photo of a real person, so that anyone viewing or using the photo can verify that it is genuine. It sounds like “Blade Runner” in reverse: there, humans engrave serial numbers on replicants’ eyeballs; here, reality labels photos of real people to fight the fakes. Adobe, Microsoft, and others are already promoting related technology.
In February 2021, Adobe, Microsoft, Intel, Arm, and Truepic jointly founded the Coalition for Content Provenance and Authenticity (C2PA) to combat fake information and establish technical standards for verifying the authenticity and traceability of images. The verification method is straightforward: information about how a photo was shot, edited, or retouched is preserved in tamper-evident, cryptographically signed metadata, so no matter how the photo is later changed, the original file information can still be inspected.
A certified real photo carries a small “i” in its upper-right corner. Click it, and you can see details such as the shooting date, location, and lens recorded by the camera. If someone has modified the photo in Photoshop or other software, you can also roll back to view the original image.
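The core mechanics of such a provenance check can be sketched with standard-library tools. This is a heavily simplified stand-in, not the C2PA specification: real C2PA manifests are signed with X.509 certificate chains, whereas the hypothetical `SIGNING_KEY` below uses a shared-secret HMAC purely to illustrate how a signature binds capture metadata to the exact image bytes.

```python
import hashlib
import hmac

# Hypothetical key for illustration only; real C2PA signing uses
# public-key certificates, not a shared secret.
SIGNING_KEY = b"camera-vendor-secret"

def _payload(digest: str, metadata: dict) -> bytes:
    fields = "|".join(f"{k}={v}" for k, v in sorted(metadata.items()))
    return (digest + "|" + fields).encode()

def make_manifest(image_bytes: bytes, metadata: dict) -> dict:
    """Bind capture metadata (date, lens, ...) to the exact image bytes."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    sig = hmac.new(SIGNING_KEY, _payload(digest, metadata), hashlib.sha256)
    return {"sha256": digest, "metadata": metadata, "signature": sig.hexdigest()}

def verify(image_bytes: bytes, manifest: dict) -> bool:
    """Any change to the pixels or the metadata invalidates the manifest."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest != manifest["sha256"]:
        return False  # image bytes were altered after signing
    sig = hmac.new(SIGNING_KEY, _payload(digest, manifest["metadata"]),
                   hashlib.sha256)
    return hmac.compare_digest(sig.hexdigest(), manifest["signature"])

original = b"\x89PNG...raw image bytes..."
manifest = make_manifest(original, {"date": "2021-02-22", "lens": "35mm"})
print(verify(original, manifest))            # True: untouched photo
print(verify(original + b"edit", manifest))  # False: pixels were changed
```

The design point is the same one C2PA relies on: the signature covers both the content hash and the metadata, so an edit to either is detectable, while the original, signed record remains available for inspection.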
C2PA certification can, to a degree, guarantee the authenticity of photos in official settings such as news, but the coalition is young: only some media outlets and social platforms have adopted it so far, and it is too early to expect an authenticity guarantee for everything on the Internet. In other words, AI-synthesized photos may remain a public-safety concern for some time, because image synthesis models such as Nvidia’s StyleGAN2 can be downloaded from open-source platforms such as GitHub. Is that really safe? Farid believes this is something technologists must weigh carefully, balancing the benefits against the risks.
So, of all the faces in this article, which ones are real?
The answer: except for those specially labeled, they are all fake.