“A Tsunami of Fake Sh*t”: Joe Rogan Sounds Alarm on Deep Fake Videos
“They can make video of you saying things from a single photograph”
Paul Joseph Watson | Infowars.com - May 28, 2019
Podcaster Joe Rogan has sounded the alarm on deep fake videos after researchers at Samsung AI created convincing talking heads from old black-and-white photos and even paintings.
Working at the Skolkovo Institute of Science and Technology, the scientists simplified “realistic neural talking head models,” which normally require a huge dataset of images to look genuine.
The researchers created lifelike talking heads from just a few images of a person, and in some cases from a single image.
“Here, we present a system with such few-shot capability,” write the scientists. “It performs lengthy meta-learning on a large dataset of videos, and after that is able to frame few- and one-shot learning of neural talking head models of previously unseen people as adversarial training problems with high capacity generators and discriminators. Crucially, the system is able to initialize the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and done quickly, despite the need to tune tens of millions of parameters. We show that such an approach is able to learn highly realistic and personalized talking head models of new people and even portrait paintings.”
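The scheme the abstract describes — meta-learn a shared initialization across many people, then adapt to a previously unseen person from a handful of images — can be caricatured with a toy example. Everything below is illustrative: the linear one-parameter "model" and all numbers are stand-ins, not the paper's high-capacity generator and discriminator with tens of millions of parameters.

```python
import numpy as np

# Toy sketch of the two-stage scheme: (1) meta-learning over many "people"
# yields a shared initialization; (2) few-shot fine-tuning adapts it to a
# new person from a single example.

rng = np.random.default_rng(0)

def finetune(theta, x, y, lr=0.1, steps=50):
    """Few-shot stage: a handful of gradient steps on one (input, target) pair."""
    for _ in range(steps):
        grad = 2 * x * (theta * x - y)  # d/dtheta of the squared error (theta*x - y)^2
        theta -= lr * grad
    return theta

# Meta-learning stage (caricatured): averaging over a large population gives
# an initialization that already sits close to any new person's optimum.
people = rng.normal(loc=3.0, scale=0.5, size=1000)  # each person's true parameter
theta_init = people.mean()

# One-shot stage: adapt the shared initialization to an unseen person.
new_person = 4.2
theta_personal = finetune(theta_init, x=1.0, y=new_person)

print(round(theta_personal, 3))  # lands very close to the new person's 4.2
```

The point of the two stages is the same as in the paper: starting from a population-informed initialization, a few inexpensive updates on very little data suffice, where training from scratch would not.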
A video accompanying the study shows a painting of the Mona Lisa talking animatedly; a photo of Albert Einstein is also brought to life.
“They can make video of you saying things from a single photograph,” tweeted Joe Rogan after watching the video. “I feel like the water just dramatically pulled back from the shore and we’re about to experience a tsunami of fake shit.”
Numerous people have warned that bad actors could exploit deep fake technology to frame people for doing and saying things that never actually happened.
Terrorists or rogue states could also use the technology to fabricate statements by world leaders for propaganda, or even to start wars.
However, the scientists behind the project argue that the technology will have positive purposes.
“It will lead to a reduction in long-distance travel and short-distance commute,” writes Egor Zakharov. “It will democratize education, and improve the quality of life for people with disabilities. It will distribute jobs more fairly and uniformly around the World. It will better connect relatives and friends separated by distance.”
Zakharov says that in the future, people will be represented by “realistic semblances of themselves” and that concerns over “deep fakes” are overblown because “Hollywood has been making fake videos (aka “special effects”) for a century” and hoaxes are easily detected.
Statement regarding the purpose and effect of the technology
(NB: this statement reflects personal opinions of the authors and not of their organizations)
We believe that telepresence technologies in AR, VR and other media are poised to transform the world in the not-so-distant future. Shifting part of human life-like communication to the virtual and augmented worlds will have several positive effects. It will lead to a reduction in long-distance travel and short-distance commutes. It will democratize education and improve the quality of life for people with disabilities. It will distribute jobs more fairly and uniformly around the world. It will better connect relatives and friends separated by distance. To achieve all these effects, we need to make human communication in AR and VR as realistic and compelling as possible, and the creation of photorealistic avatars is one (small) step towards this future. In other words, in future telepresence systems, people will need to be represented by realistic semblances of themselves, and creating such avatars should be easy for users. This application, and scientific curiosity, is what drives the research in our group, including the project presented in this video.
We realize that our technology can have a negative use in so-called “deepfake” videos. However, it is important to realize that Hollywood has been making fake videos (aka “special effects”) for a century, and that deep networks with similar capabilities have been available for the past several years (see links in the paper). Our work (and quite a few parallel works) will lead to the democratization of certain special-effects technologies, and such democratization has always had some negative effects. Democratizing sound-editing tools led to the rise of pranksters and fake audio; democratizing video recording led to the appearance of footage taken without consent. In each of these past cases, the net effect of democratization on the world has been positive, and mechanisms for stemming the negative effects have been developed. We believe that the case of neural avatar technology will be no different. Our belief is supported by the ongoing development of tools for fake-video detection and face-spoof detection, alongside the ongoing shift toward privacy and data security at major IT companies.
Authors:
Egor Zakharov, Aliaksandra Shysheya, Egor Burkov, Victor Lempitsky