“It’s photoshopped!” is a line often heard about retouched images. Nobody yet expects to say the same about video, since editing multiple frames requires not only a lot of time and hard work, but also convincing rendering to make the result believable.
Researchers from Stanford, Max Planck and Erlangen-Nuremberg have presented an IEEE paper on a system called Face2Face, which does exactly that: it edits video in real time, so that the facial gestures of the target person become yours with ease. This includes the possibility of making the target person in the video appear to say something he or she never said.
Their goal is to animate the facial expressions of the target video with a source actor and re-render the manipulated output video in a photo-realistic fashion. You take a YouTube video of someone speaking – say Putin – and use a standard webcam to capture a video of someone else producing different facial expressions or saying something entirely different. Both videos are processed by the Face2Face system, and the reenactment is achieved by fast and efficient deformation transfer between the source and the target. From the target sequence, the best-matching mouth interior is retrieved and warped to produce an accurate fit, and the synthesized target face is re-rendered on top of the corresponding video stream so that it blends seamlessly with the real-world illumination. The end result is a believable video of Putin saying something else and showing different facial expressions.
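At the core of this pipeline, both faces are tracked by fitting a parametric face model, and the expression transfer then amounts to reusing the source actor's expression parameters together with the target's identity. Below is a minimal numpy sketch of that idea in a toy blendshape setting; the dimensions, variable names, and `synthesize` helper are illustrative assumptions, not the paper's actual code, and the real system solves this as a dense optimization over real video frames.

```python
import numpy as np

# Toy dimensions: a 5-vertex mesh and 3 expression blendshapes. These are
# illustrative assumptions; real face models use thousands of vertices.
N_VERTS = 5
N_EXPR = 3

rng = np.random.default_rng(0)

def synthesize(identity_mesh, expr_basis, expr_coeffs):
    """Reconstruct a face mesh as identity shape plus expression offsets."""
    # identity_mesh: (N_VERTS, 3); expr_basis: (N_EXPR, N_VERTS, 3)
    return identity_mesh + np.tensordot(expr_coeffs, expr_basis, axes=1)

# Two identities (source actor, target actor) sharing one expression basis.
source_identity = rng.normal(size=(N_VERTS, 3))
target_identity = rng.normal(size=(N_VERTS, 3))
expr_basis = rng.normal(size=(N_EXPR, N_VERTS, 3))

# Per-frame expression coefficients tracked from the source actor's webcam.
source_expr = np.array([0.8, -0.2, 0.5])

# In this parametric setting, expression transfer reduces to combining the
# source's expression coefficients with the target's identity shape.
reenacted = synthesize(target_identity, expr_basis, source_expr)

# The reenacted mesh differs from the neutral target exactly by the
# source-driven expression offsets.
offset = reenacted - target_identity
assert np.allclose(offset, np.tensordot(source_expr, expr_basis, axes=1))
```

The reenacted mesh would then be rendered with the target video's estimated pose and lighting, and the mouth interior composited from the best-matching target frame, which is what makes the final result photo-realistic.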
Even though it feels a bit uncanny, this raises a bigger question: videos may eventually suffer the same fate as pictures, seeming less believable and being more easily dismissed as fake. At the same time, it represents real progress in computer vision for live video editing and rendering, and since this is work in progress, more results are likely to come.
- Thies, Justus et al. (2016): “Face2Face: Real-time Face Capture and Reenactment of RGB Videos”, Proc. Computer Vision and Pattern Recognition (CVPR), IEEE, June 2016.