Samsung AI Center’s Moscow research team has partnered with the Skolkovo Institute of Science and Technology to publish a new paper showing how terrifying new software can be used to create three-dimensional talking heads from still images. The software’s creations are nothing short of stunning and scary. It can take an ordinary photo of anyone and produce what appears to be a video of that person speaking. Of course, the implications of such technology are unsettling. It means rival governments will be capable of serving up completely fake videos of politicians. It also means countries such as the United States could leverage the technology for their own purposes.
Here’s a sample of the technology below.
The researchers’ published paper can be found here. It claims that as little as one image is needed to create the life-like videos, although the software works most accurately when given a series of images rather than a single one. “Crucially, only a handful of photographs (as little as one) is needed to create a new model, whereas the model trained on 32 images achieves perfect realism and personalization score in our user study (for 224p static images).”
The good news is that these same researchers admit the software has difficulty matching a subject’s personality. They claim “landmark adaptation” would need to occur in order to achieve personality matches.
That said, a personality match may not be needed in every creation. And there are plenty of opportunities to sample thousands of clips of global politicians, including President Trump or any number of Democratic 2020 presidential candidates.
Such software could change the results of elections, thrust societies into civil wars, and subvert activism.
Few-Shot Adversarial Learning of Realistic Neural Talking Head Models
Below is an example of how Samsung’s AI software uses a single image to create life-like portraits.
The results are clearly stunning.
These “photorealistic talking head models” are created via convolutional neural networks. The algorithm adjusts and learns to reproduce numerous personal attributes, keying on landmark facial features such as mouth, eye, and nose movements.
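To make the idea concrete, here is a minimal sketch (not the researchers’ actual code) of the few-shot setup the paper describes: an embedder condenses a handful of reference frames of a person into one identity vector, and a generator synthesizes a new frame from that vector plus a target landmark pose. All dimensions and the linear “networks” below are illustrative placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

FRAME_DIM = 64      # flattened reference frame (illustrative size)
EMBED_DIM = 16      # identity embedding size (illustrative size)
LANDMARK_DIM = 10   # flattened facial-landmark coordinates (illustrative size)

# Stand-in "weights" for the embedder and generator networks.
W_embed = rng.standard_normal((EMBED_DIM, FRAME_DIM)) * 0.1
W_gen = rng.standard_normal((FRAME_DIM, EMBED_DIM + LANDMARK_DIM)) * 0.1

def embed_person(frames: np.ndarray) -> np.ndarray:
    """Average per-frame embeddings into a single identity vector (K can be 1)."""
    return np.mean(frames @ W_embed.T, axis=0)

def generate_frame(identity: np.ndarray, landmarks: np.ndarray) -> np.ndarray:
    """Condition the generator on the identity vector plus a target pose."""
    return np.tanh(W_gen @ np.concatenate([identity, landmarks]))

# "Few-shot": one reference frame versus 32 frames of the same person.
frames_1 = rng.standard_normal((1, FRAME_DIM))
frames_32 = np.vstack([frames_1 + 0.01 * rng.standard_normal(FRAME_DIM)
                       for _ in range(32)])

pose = rng.standard_normal(LANDMARK_DIM)  # target mouth/eye/nose positions
out_1 = generate_frame(embed_person(frames_1), pose)
out_32 = generate_frame(embed_person(frames_32), pose)
print(out_1.shape, out_32.shape)  # both frames have shape (64,)
```

The key design point, as the paper’s quote suggests, is that one reference image is enough to drive the generator, while more images simply refine the same identity vector.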
The company claims its research is aimed at creating life-like avatars, improved video conferencing, and gaming effects. But it’s hard to ignore the tremendous negative potential such software holds for society. With 5G wireless networks already on the brink of full-scale launch, technology is becoming sharper and faster, which opens the door to far more nefarious uses.
Fake Video Faces and Social Media
Almost everyone has a social media account. Whether it’s Facebook, Twitter, Pinterest, or Reddit, people’s images are scattered across the web. That could serve as a feeding frenzy for anyone looking to cash in on, or manipulate, someone’s likeness. As you can see in the samples above, the fake video faces appear authentic and life-like. If such software ends up in the wrong hands, its devastating potential is clear.
It’s unlikely that this software will sit dormant. Instead, it will evolve and improve to a point where we can no longer detect the difference between real and fake videos. Social media influencers, including politicians and celebrities, would be most at risk of fake video creation.
Author: Jim Satney
PrepForThat’s Editor and lead writer for political, survival, and weather categories.