The “Die Reporter” team of the German television station WDR produced a report about the realtime deepfakes implementation. You can view the whole report on YouTube.
On WDR the story was also broadcast on “Aktuelle Stunde” on 1 June 2020. You can watch the report for a limited time by following this link to the WDR Mediathek.
Deep fakes are fake images or videos generated by Artificial Intelligence. The media produced by deep neural networks can seem deceptively real. In the most common case, faces are swapped, creating the illusion of seeing a person in a video who was never originally there.
Software developers of the IT consulting company TNG Technology Consulting GmbH, based in Unterföhring near Munich, have succeeded in developing software that generates deep fakes in realtime. This makes it possible to manipulate live camera images in such a way that the face in front of the camera is swapped with that of another person.
The facial expressions of a person standing in front of the camera are recognized and transferred onto the faces of well-known people such as Angela Merkel, Barack Obama or Elon Musk. In the resulting video stream, the software first removes the input face completely and then replaces it with the deep fake. The prototype presented by TNG is a kind of mirror in which you recognize your own facial expressions, but not your face.
The basis for this is Artificial Intelligence. Using various computer vision and neural network techniques, faces are recognized in the video input, translated, and integrated back into the video output. This makes it possible to project deceptively real imitations onto other people.
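As an illustration only (not TNG's actual implementation), the per-frame loop of such a pipeline could look roughly like the sketch below. The swap network, its input resolution and the model file name are assumptions; OpenCV's stock Haar cascade stands in for the face recognition step, and the blend-back is deliberately naive.

```
import cv2
import numpy as np
from tensorflow.keras.models import load_model

FACE_SIZE = 256  # assumed input resolution of the swap network
swap_model = load_model("face_swap_model.h5")  # hypothetical pre-trained network

# Stock Haar cascade as a stand-in for the face recognition step.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def swap_faces(frame):
    """Detect faces in a BGR frame and replace each with the network's output."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        crop = cv2.resize(frame[y:y + h, x:x + w], (FACE_SIZE, FACE_SIZE))
        fake = swap_model.predict(crop[np.newaxis] / 255.0, verbose=0)[0]
        fake = cv2.resize((fake * 255).astype(np.uint8), (w, h))
        # Naive rectangular paste; a segmentation mask gives a seamless blend.
        frame[y:y + h, x:x + w] = fake
    return frame

cap = cv2.VideoCapture(0)  # live camera input
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("deep fake mirror", swap_faces(frame))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```

In practice, the facial segmentation mentioned below is used instead of the rectangular paste, so the generated face blends seamlessly into the frame.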
Auto-encoder networks were trained by means of so-called GANs (Generative Adversarial Networks). In addition, the researchers used various other neural networks for facial recognition and facial segmentation.
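For readers unfamiliar with the set-up, here is a minimal sketch of the shared-encoder / two-decoder auto-encoder arrangement commonly used for face swapping. The layer sizes, resolution and plain reconstruction loss are illustrative assumptions; the adversarial (GAN) part of the training and the recognition and segmentation networks are omitted.

```
from tensorflow.keras import layers, models

def build_encoder(size=256, latent=1024):
    inp = layers.Input(shape=(size, size, 3))
    x = inp
    for filters in (64, 128, 256, 512):          # 256 -> 128 -> 64 -> 32 -> 16
        x = layers.Conv2D(filters, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(latent, activation="relu")(x)
    x = layers.Dense(16 * 16 * 512, activation="relu")(x)
    x = layers.Reshape((16, 16, 512))(x)
    return models.Model(inp, x, name="shared_encoder")

def build_decoder(name):
    inp = layers.Input(shape=(16, 16, 512))
    x = inp
    for filters in (256, 128, 64, 32):           # 16 -> 32 -> 64 -> 128 -> 256
        x = layers.Conv2DTranspose(filters, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2D(3, 5, padding="same", activation="sigmoid")(x)
    return models.Model(inp, out, name=name)

encoder = build_encoder()
decoder_a = build_decoder("decoder_person_a")    # e.g. the face in front of the camera
decoder_b = build_decoder("decoder_person_b")    # e.g. the target celebrity

# Both auto-encoders share one encoder and are each trained to reconstruct
# "their" person's faces.
autoencoder_a = models.Model(encoder.input, decoder_a(encoder.output))
autoencoder_b = models.Model(encoder.input, decoder_b(encoder.output))
autoencoder_a.compile(optimizer="adam", loss="mae")
autoencoder_b.compile(optimizer="adam", loss="mae")
```

Because both decoders learn from the same shared encoding, feeding person A's encoded face through decoder B yields B's face with A's expression, which is exactly what a realtime deep-fake mirror needs.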
We are working hard on new showcases and talks about Deep Learning and neural networks. This is why there haven't been many new posts on this page lately.
Please enjoy our “elevator pitch” about our new Stereoscopic Realtime Style Transfer showcase based on the Dell Visor Mixed Reality headset. We were able to train our neural network on the style of A-ha's music video “Take On Me”.
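For the curious, here is a stripped-down sketch of how a trained feed-forward style-transfer network could be applied to both eyes of a stereo frame. The model file, input resolution and frame source are placeholders, not the actual Dell Visor setup.

```
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Hypothetical feed-forward style-transfer network trained on "Take On Me" frames.
style_model = load_model("take_on_me_style.h5")
NET_SIZE = 512  # assumed input resolution of the network

def stylize(frame):
    """Run one BGR frame through the style network and resize back."""
    x = cv2.resize(frame, (NET_SIZE, NET_SIZE)).astype(np.float32)[np.newaxis] / 255.0
    y = style_model.predict(x, verbose=0)[0]
    return cv2.resize((y * 255).astype(np.uint8), (frame.shape[1], frame.shape[0]))

def stylize_stereo(left_frame, right_frame):
    # The same network is applied to both eyes so the style stays consistent
    # between the two views rendered to the Mixed Reality headset.
    return stylize(left_frame), stylize(right_frame)
```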
Furthermore, we wrote a technical article for JavaPRO. You can order an issue for free at https://magazin.java-pro.de/
FireP4j is a library for the JVM that allows you to log to the Firebug console. This way, you can see the log output directly in the browser and no longer have to clutter your HTML code with debug output.