Deepfake
Deepfake (weapon, technology, statecraft) | |
---|---|
Thispersondoesnotexist.com made these pictures. Imagine if videos also looked completely normal. | |
Interest of | • Munira Mustaffa • Operation Blackout • Matthijs Veenendaal |
A futuristic and dystopian technology for impersonating anyone in video, audio, pictures, or any combination of these. Its use in statecraft is soon to be expected. |
Deepfakes (a portmanteau of "deep learning" and "fake") are machine-generated media - pictures and/or videos - which can appear entirely authentic even though the person depicted never said or did what is shown. In addition, the dystopian and eerie idea that nothing a person sees can be trusted any more[1] was[2] pushed as a dreadful new possibility by the traditional peddlers of lies (corporate media and intelligence services) after a surge of pornographic images starting in South Korea in 2018.[3]
History
Photo manipulation was developed in the 19th century and soon applied to motion pictures. Technology steadily improved during the 20th century, and more quickly with digital video.
Deepfake technology has been developed by researchers at academic institutions beginning in the 1990s, and later by amateurs in online communities.[4][5] More recently the methods have been adopted by industry.[6]
Potential
While the act of faking content is not new, deepfakes leverage powerful techniques from machine learning and artificial intelligence to manipulate or generate visual and audio content with a high potential to deceive. The main machine learning methods used to create deepfakes are based on deep learning and involve training generative neural network architectures, such as autoencoders or generative adversarial networks (GANs). Deepfakes have garnered widespread attention for their uses in celebrity pornographic videos, revenge porn, fake news, hoaxes, and financial fraud. This has elicited responses from both industry and government to detect and limit their use.
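For reference, the adversarial training behind GANs can be summarised by the standard minimax objective from the machine-learning literature (the general formulation, not one specific to any particular deepfake tool), in which a generator G tries to fool a discriminator D:

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]$$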
Academic research
Academic research related to deepfakes lies predominantly within the field of computer vision, a subfield of computer science. An early landmark project was the Video Rewrite program, published in 1997, which modified existing video footage of a person speaking to depict that person mouthing the words contained in a different audio track. It was the first system to fully automate this kind of facial reanimation, and it did so using machine learning techniques to make connections between the sounds produced by a video's subject and the shape of the subject's face.
Contemporary academic projects have focused on creating more realistic videos and on improving techniques. The “Synthesizing Obama” program, published in 2017, modifies video footage of former president Barack Obama to depict him mouthing the words contained in a separate audio track. The project lists as a main research contribution its photorealistic technique for synthesizing mouth shapes from audio. The Face2Face program, published in 2016, modifies video footage of a person's face to depict them mimicking the facial expressions of another person in real time. The project lists as a main research contribution the first method for re-enacting facial expressions in real time using a camera that does not capture depth, making it possible for the technique to be performed using common consumer cameras.
In August 2018, researchers at the University of California, Berkeley published a paper introducing a fake dancing app that can create the impression of masterful dancing ability using AI.[7][8] This project expands the application of deepfakes to the entire body; previous works focused on the head or parts of the face.
Researchers have also shown that deepfakes are expanding into other domains such as tampering with medical imagery.[9] In this work, it was shown how an attacker can automatically inject or remove lung cancer in a patient's 3D CT scan. The results were so convincing that they fooled three radiologists and a state-of-the-art lung cancer detection AI. To demonstrate the threat, the authors successfully performed the attack on a hospital in a white hat penetration test.
A survey of deepfakes, published in May 2020, provides a timeline of how the creation and detection of deepfakes have advanced over the last few years.[10] The survey identifies that researchers have been focusing on resolving the following challenges of deepfake creation:
- Generalization. High quality deepfakes are often achieved by training on hours of footage of the target. The challenge is to minimize the amount of training data required to produce quality images, and to enable the execution of trained models on new identities (unseen during training).
- Paired Training. Training a supervised model can produce high quality results, but requires data pairing. This is the process of finding examples of inputs and their desired outputs for the model to learn from. Data pairing is laborious and impractical when training on multiple identities and facial behaviors. Some solutions include self-supervised training (using frames from the same video), the use of unpaired networks such as Cycle-GAN, or the manipulation of network embeddings.
- Identity Leakage. This is where the identity of the driver (i.e., the actor controlling the face in a reenactment) is partially transferred to the generated face. Some solutions proposed include attention mechanisms, few-shot learning, disentanglement, boundary conversions, and skip connections.
- Occlusions. When part of the face is obstructed with a hand, hair, glasses, or any other item then artifacts can occur. A common occlusion is a closed mouth which hides the inside of the mouth and the teeth. Some solutions include image segmentation during training and in-painting.
- Temporal Coherence. In videos containing deepfakes, artifacts such as flickering and jitter can occur because the network has no context of the preceding frames. Some researchers provide this context or use novel temporal coherence losses to help improve the realism; a toy version of such a loss is sketched below.
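The following is a minimal sketch (PyTorch assumed) of what a temporal coherence penalty can look like; the function name and tensor shapes are illustrative assumptions rather than code from any of the surveyed papers. It penalizes frame-to-frame changes in the generated clip that are not matched by corresponding changes in the source clip, which discourages flicker.

```python
import torch

def temporal_coherence_loss(generated: torch.Tensor, source: torch.Tensor) -> torch.Tensor:
    """generated, source: tensors of shape (T, C, H, W) holding T consecutive frames."""
    gen_diff = generated[1:] - generated[:-1]   # frame-to-frame change in the output
    src_diff = source[1:] - source[:-1]         # frame-to-frame change in the input
    # Penalize output flicker that is not explained by motion in the source clip.
    return torch.mean(torch.abs(gen_diff - src_diff))

# Example with stand-in clips of 8 RGB frames at 64x64 resolution:
fake_clip = torch.rand(8, 3, 64, 64)
real_clip = torch.rand(8, 3, 64, 64)
loss = temporal_coherence_loss(fake_clip, real_clip)  # added to the main training loss
```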
Amateur development
The term deepfakes originated around the end of 2017 from a Reddit user named "deepfakes".[11] He, as well as others in the Reddit community r/deepfakes, shared deepfakes they created; many videos involved celebrities’ faces swapped onto the bodies of actresses in pornographic videos, while non-pornographic content included many videos with actor Nicolas Cage’s face swapped into various movies.[12]
Other online communities remain, including Reddit communities that do not share pornography, such as r/SFWdeepfakes (short for "safe for work deepfakes"), in which community members share deepfakes depicting celebrities, politicians, and others in non-pornographic scenarios.[13] Other online communities continue to share pornography on platforms that have not banned deepfake pornography.[14]
Commercial development
In January 2018, a proprietary desktop application called FakeApp was launched.[15] This app allows users to easily create and share videos with their faces swapped with each other.[16] As of 2019, FakeApp has been superseded by open-source alternatives such as Faceswap and the command line-based DeepFaceLab.[17][18]
Larger companies are also starting to use deepfakes. The mobile app giant Momo created the application Zao which allows users to superimpose their face on TV and movie clips with a single picture. The Japanese AI company DataGrid made a full body deepfake that can create a person from scratch.[19] They intend to use these for fashion and apparel.
Audio deepfakes also exist, along with AI software capable of detecting deepfakes and of cloning a human voice after only five seconds of listening time.[20][21][22][23][24]
A mobile deepfake app, Impressions, was launched in March 2020. It was the first app for the creation of celebrity deepfake videos from mobile phones.[25][26]
Techniques
Deepfakes rely on a type of neural network called an autoencoder.[27] These consist of an encoder, which reduces an image to a lower dimensional latent space, and a decoder, which reconstructs the image from the latent representation. Deepfakes utilize this architecture by having a universal encoder which encodes a person into the latent space.[28] The latent representation contains key features about their facial features and body posture. This can then be decoded with a model trained specifically for the target. This means the target's detailed information will be superimposed on the underlying facial and body features of the original video, represented in the latent space.
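A minimal sketch of this shared-encoder, per-identity-decoder layout is given below (PyTorch is assumed; the layer sizes, image resolution, and names are illustrative and do not come from any particular deepfake tool). One encoder maps any aligned face crop into the latent space, and a decoder trained only on the target identity reconstructs faces from that latent code.

```python
import torch
import torch.nn as nn

LATENT_DIM = 256  # size of the shared latent space

def make_encoder() -> nn.Module:
    # Compresses a 64x64 RGB face crop into a latent vector.
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(3 * 64 * 64, 1024), nn.ReLU(),
        nn.Linear(1024, LATENT_DIM),
    )

def make_decoder() -> nn.Module:
    # Reconstructs a 64x64 RGB face crop from a latent vector.
    return nn.Sequential(
        nn.Linear(LATENT_DIM, 1024), nn.ReLU(),
        nn.Linear(1024, 3 * 64 * 64), nn.Sigmoid(),
        nn.Unflatten(1, (3, 64, 64)),
    )

encoder = make_encoder()      # shared by both identities
decoder_a = make_decoder()    # trained to reconstruct person A (the source)
decoder_b = make_decoder()    # trained to reconstruct person B (the target)

# Face swap at inference time: encode a frame of person A, decode with B's decoder,
# giving B's appearance with A's pose and expression.
frame_a = torch.rand(1, 3, 64, 64)        # stand-in for an aligned face crop
swapped = decoder_b(encoder(frame_a))
```

During training, each decoder learns to reconstruct its own identity from the shared latent codes (for example with a pixel-wise reconstruction loss); the swap only happens at inference time.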
A popular upgrade to this architecture attaches a generative adversarial network to the decoder. A GAN trains a generator, in this case the decoder, and a discriminator in an adversarial relationship. The generator creates new images from the latent representation of the source material, while the discriminator attempts to determine whether or not the image is generated. This causes the generator to create images that mimic reality extremely well, as any defects would be caught by the discriminator.[29] Both algorithms improve constantly in a zero-sum game. This makes deepfakes difficult to combat as they are constantly evolving; any time a defect is detected, it can be corrected.
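The sketch below illustrates how such a discriminator can be bolted onto the decoder (PyTorch is again assumed, and the tiny network and standard binary cross-entropy losses follow the generic GAN recipe rather than the exact setup of any specific deepfake tool).

```python
import torch
import torch.nn as nn

# A small discriminator that outputs a single real/fake logit per face crop.
discriminator = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1),
)
bce = nn.BCEWithLogitsLoss()

def generator_step(fake_faces: torch.Tensor) -> torch.Tensor:
    # The decoder (generator) is rewarded when the discriminator labels its output as real.
    return bce(discriminator(fake_faces), torch.ones(fake_faces.size(0), 1))

def discriminator_step(real_faces: torch.Tensor, fake_faces: torch.Tensor) -> torch.Tensor:
    # The discriminator is rewarded for telling real frames apart from decoder output.
    real_loss = bce(discriminator(real_faces), torch.ones(real_faces.size(0), 1))
    fake_loss = bce(discriminator(fake_faces.detach()), torch.zeros(fake_faces.size(0), 1))
    return real_loss + fake_loss

# Example with stand-in batches of four 64x64 face crops:
real = torch.rand(4, 3, 64, 64)   # stand-in batch of genuine face crops
fake = torch.rand(4, 3, 64, 64)   # stand-in for a batch produced by the decoder
g_loss = generator_step(fake)
d_loss = discriminator_step(real, fake)
```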
Applications
Gabon
Gabon was one of the first countries where a deepfake instigated a coup attempt, in 2019.[30]
Pornography
Many deepfakes on the internet feature pornography of people, often female celebrities whose likeness is typically used without their consent.[31] Deepfake pornography prominently surfaced on the Internet in 2017, particularly on Reddit.[32] The first one that captured attention was the Daisy Ridley deepfake, which was featured in several articles. Other prominent pornographic deepfakes were of various other celebrities.[33][34][35] As of October 2019, most of the deepfake subjects on the internet were British and American actresses. However, around a quarter of the subjects were South Korean, the majority of them K-pop stars.[36]
Politics
Deepfakes have been used to misrepresent well-known politicians in videos.
- In separate videos, the face of the Argentine President Mauricio Macri has been replaced by the face of Adolf Hitler, and Angela Merkel's face has been replaced with Donald Trump's.[37][38]
- In April 2018, Jordan Peele collaborated with Buzzfeed to create a deepfake of Barack Obama with Peele's voice; it served as a public service announcement to increase awareness of deepfakes.[39]
- In January 2019, Fox affiliate KCPQ aired a deepfake of Trump during his Oval Office address, mocking his appearance and skin color (and subsequently fired an employee found responsible for the video).[40]
- During the 2020 Delhi Legislative Assembly election campaign, the Delhi Bharatiya Janata Party used similar technology to distribute a version of an English-language campaign advertisement by its leader, Manoj Tiwari, translated into Haryanvi to target Haryana voters. A voiceover was provided by an actor, and AI trained using video of Tiwari speeches was used to lip-sync the video to the new voiceover. A party staff member described it as a "positive" use of deepfake technology, which allowed them to "convincingly approach the target audience even if the candidate didn't speak the language of the voter."[41]
- In April 2020, the Belgian branch of Extinction Rebellion published a deepfake video of Belgian Prime Minister Sophie Wilmès on Facebook.[42] The video promoted a possible link between deforestation and COVID-19. It had more than 100,000 views within 24 hours and received many comments. On the Facebook page where the video appeared, many users interpreted the deepfake video as genuine.[43]
In June 2019, the United States House Intelligence Committee held hearings on the potential malicious use of deepfakes to sway elections.[44]
Social media
Deepfakes have begun to see use in popular social media platforms, notably through Zao, a Chinese deepfake app that allows users to substitute their own faces onto those of characters in scenes from films and television shows such as Romeo + Juliet and Game of Thrones.[45] The app originally faced scrutiny over its invasive user data and privacy policy, after which the company put out a statement claiming it would revise the policy.[46] In January 2020, Facebook announced that it was introducing new measures to counter deepfakes on its platforms.[47]
Concerns
Fraud
Audio deepfakes have been used as part of social engineering scams, fooling people into thinking they are receiving instructions from a trusted individual.[48] In 2019, a U.K.-based energy firm's CEO was scammed over the phone when he was ordered to transfer €220,000 into a Hungarian bank account by an individual who used audio deepfake technology to impersonate the voice of the firm's parent company's chief executive.[49]
Credibility and authenticity
Though fake photos have long been plentiful, faking motion pictures has been more difficult, and the presence of deepfakes increases the difficulty of classifying videos as genuine or not. AI researcher Alex Champandard has said people should know how fast things can be corrupted with deepfake technology, and that the problem is not a technical one, but rather one to be solved by trust in information and journalism. The primary pitfall is that humanity could fall into an age in which it can no longer be determined whether a medium's content corresponds to the truth.
Similarly, computer science associate professor Hao Li of the University of Southern California states that deepfakes created for malicious use, such as fake news, will be even more harmful if nothing is done to spread awareness of deepfake technology.[50] As of October 2019, Li predicted that genuine videos and deepfakes would become indistinguishable within as little as half a year, due to rapid advancement in artificial intelligence and computer graphics.
Responses
Detection
Most of the academic research surrounding deepfakes seeks to detect the videos.[51] The most popular technique is to use algorithms similar to those used to build the deepfake in order to detect it. By recognizing patterns in how deepfakes are created, the algorithm is able to pick up subtle inconsistencies. Researchers have developed automatic systems that examine videos for errors such as irregular eye-blinking patterns or inconsistent lighting. This technique has also been criticized for creating a "moving goalpost" where, any time the detection algorithms get better, so do the deepfakes. The Deepfake Detection Challenge, hosted by a coalition of leading tech companies, hopes to accelerate the technology for identifying manipulated content.[52]
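A minimal sketch of the frame-level approach is shown below (PyTorch is assumed; the tiny network, the decision threshold, and the idea of averaging per-frame scores over a clip are illustrative choices, not a published detector). Each frame is scored by a binary real/fake classifier and the scores are averaged across the clip.

```python
import torch
import torch.nn as nn

# Toy per-frame classifier; a real detector would be a much deeper network
# trained on large datasets of genuine and manipulated footage.
frame_classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 1),
)

def clip_fake_probability(frames: torch.Tensor) -> float:
    """frames: (T, C, H, W) tensor of video frames; returns the mean fake probability."""
    with torch.no_grad():
        logits = frame_classifier(frames)      # one real/fake logit per frame
        return torch.sigmoid(logits).mean().item()

clip = torch.rand(16, 3, 64, 64)                # stand-in clip of 16 frames
is_suspect = clip_fake_probability(clip) > 0.5  # flag the clip for human review
```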
Other techniques use blockchain technology to verify the source of the media.[53] Videos would have to be verified through the ledger before they are shown on social media platforms. With this technology, only videos from trusted sources would be approved, decreasing the spread of possibly harmful deepfake media.
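In outline, such a scheme amounts to registering a cryptographic hash of trusted footage and refusing clips whose hash is unknown. The sketch below uses plain Python, and the in-memory dictionary is only a stand-in for an actual distributed ledger.

```python
import hashlib

ledger: dict[str, str] = {}  # content hash -> publisher; stand-in for a blockchain

def register_video(video_bytes: bytes, publisher: str) -> str:
    # A real system would append a signed transaction to a distributed ledger instead.
    digest = hashlib.sha256(video_bytes).hexdigest()
    ledger[digest] = publisher
    return digest

def verify_video(video_bytes: bytes) -> str | None:
    """Return the registered publisher, or None if the footage is not on the ledger."""
    return ledger.get(hashlib.sha256(video_bytes).hexdigest())

original = b"...raw video bytes from a trusted source..."
register_video(original, "trusted-broadcaster")
assert verify_video(original) == "trusted-broadcaster"
assert verify_video(b"...tampered or deepfaked bytes...") is None  # unknown hash is rejected
```

A limitation of this design is that it only confirms the provenance of registered footage; it says nothing about unregistered but genuine videos.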
Internet reaction
Facebook has previously stated that it would not remove deepfakes from its platforms.[54] The videos are instead flagged as fake by third parties ('fact checkers') and then given a lessened priority in users' feeds.[55] This response was prompted in June 2019 after a deepfake featuring a 2016 video of Mark Zuckerberg circulated on Facebook and Instagram.
Legal response
In the United States, there have been some responses to the problems posed by deepfakes. In 2018, the Malicious Deep Fake Prohibition Act was introduced to the US Senate,[56] and in 2019 the DEEPFAKES Accountability Act was introduced in the House of Representatives.[57] Several states have also introduced legislation regarding deepfakes, including Virginia,[58] Texas, California, and New York.[59] On October 3, 2019, California governor Gavin Newsom signed into law Assembly Bills No. 602 and No. 730.[60][61] Assembly Bill No. 602 provides individuals targeted by sexually explicit deepfake content made without their consent with a cause of action against the content's creator. Assembly Bill No. 730 prohibits the distribution of malicious deepfake audio or visual media targeting a candidate running for public office within 60 days of their election.
In November 2019, China announced that deepfakes and other synthetically faked footage would have to bear a clear notice about their fakeness starting in 2020. Failure to comply could be considered a crime, the Cyberspace Administration of China stated on its website.[62] The Chinese government appears to be reserving the right to prosecute both users and online video platforms failing to abide by the rules.
In the United Kingdom, producers of deepfake material can be prosecuted for harassment, but there are calls to make deepfake a specific crime;[63] in the United States, where charges as varied as identity theft, cyberstalking, and revenge porn have been pursued, the notion of a more comprehensive statute has also been discussed.[64]
In Canada, the Communications Security Establishment released a report which said that deepfakes could be used to interfere in Canadian politics, particularly to discredit politicians and influence voters.[65][66] There are multiple ways for citizens in Canada to deal with deepfakes if they are targeted by them.[67]
In popular culture
- Rising Sun. The 1993 film Rising Sun starring Sean Connery and Wesley Snipes depicts another character, Jingo Asakuma, who reveals that a computer disc has digitally altered personal identities to implicate a competitor.
- The Capture. Deepfake technology is part of the plot of the 2019 BBC One drama The Capture. The series follows British ex-soldier Shaun Emery, who is accused of assaulting and abducting his barrister. Expertly doctored CCTV footage is used to set him up and mislead the police investigating him.[68][69]
References
- ↑ https://www.cnbc.com/2019/10/15/deepfakes-could-be-problem-for-the-2020-election.html
- ↑ https://www.theguardian.com/technology/ng-interactive/2019/jun/22/the-rise-of-the-deepfake-and-the-threat-to-democracy
- ↑ https://regmedia.co.uk/2019/10/08/deepfake_report.pdf
- ↑ https://www.washingtonpost.com/technology/2019/06/12/top-ai-researchers-race-detect-deepfake-videos-we-are-outgunned/
- ↑ https://www.nbcnews.com/think/opinion/thanks-ai-future-fake-news-may-be-easily-faked-video-ncna845726
- ↑ https://www.theverge.com/2019/9/2/20844338/zao-deepfake-app-movie-tv-show-face-replace-privacy-policy-concerns
- ↑ https://www.businessinsider.com.au/artificial-intelligence-ai-deepfake-dancing-2018-8
- ↑ https://www.theverge.com/2018/8/26/17778792/deepfakes-video-dancing-ai-synthesis
- ↑ https://www.usenix.org/conference/usenixsecurity19/presentation/mirsky
- ↑ http://arxiv.org/abs/2004.11138
- ↑ https://www.vice.com/en_us/article/bjye8a/reddit-fake-porn-app-daisy-ridley
- ↑ https://mashable.com/2018/01/31/nicolas-cage-face-swapping-deepfakes/
- ↑ https://www.reddit.com/r/SFWdeepfakes/
- ↑ https://www.dailydot.com/unclick/deepfake-sites-reddit-ban/
- ↑ https://www.online-tech-tips.com/computer-tips/what-is-a-deepfake-and-how-are-they-made/
- ↑ https://www.theverge.com/2018/2/11/16992986/fakeapp-deepfakes-ai-face-swapping
- ↑ https://faceswap.dev
- ↑ https://github.com/iperov/DeepFaceLab
- ↑ https://www.fastcompany.com/90407145/youve-been-warned-full-body-deepfakes-are-the-next-step-in-ai-based-human-mimicry
- ↑ https://www.theverge.com/2020/1/29/21080553/ftc-deepfakes-audio-cloning-joe-rogan-phone-scams
- ↑ https://google.github.io/tacotron/publications/speaker_adaptation/
- ↑ http://www.niessnerlab.org/projects/roessler2019faceforensicspp.html
- ↑ https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/facebook-ai-launches-its-deepfake-detection-challenge
- ↑ http://ai.googleblog.com/2019/09/contributing-data-to-deepfake-detection.html
- ↑ https://www.dailydot.com/debug/impressions-deepfake-app/
- ↑ https://kool1079.com/fun-or-fear-deepfake-app-puts-celebrity-faces-in-your-selfies/
- ↑ https://www.alanzucconi.com/2018/03/14/understanding-the-technology-behind-deepfakes/
- ↑ https://towardsdatascience.com/what-the-heck-are-vae-gans-17b86023588a
- ↑ https://www.wired.com/story/these-new-tricks-can-outsmart-deepfake-videosfor-now/
- ↑ https://www.motherjones.com/politics/2019/03/deepfake-gabon-ali-bongo/
- ↑ https://www.rollingstone.com/culture/culture-news/deepfakes-nonconsensual-porn-study-kpop-895605/
- ↑ https://variety.com/2018/digital/news/deepfakes-porn-adult-industry-1202705749/
- ↑ https://www.businessinsider.com/deepfakes-explained-the-rise-of-fake-realistic-videos-online-2019-6
- ↑ https://www.bbc.com/news/technology-42912529
- ↑ https://www.vice.com/en_us/article/ywe4qw/gfycat-spotting-deepfakes-fake-ai-porn
- ↑ https://medium.com/@frenizoe/deepfake-porn-efb80f39bae3
- ↑ https://www.aargauerzeitung.ch/leben/digital/wenn-merkel-ploetzlich-trumps-gesicht-traegt-die-gefaehrliche-manipulation-von-bildern-und-videos-132155720
- ↑ http://faktenfinder.tagesschau.de/hintergrund/deep-fakes-101.html
- ↑ https://www.vox.com/2018/4/18/17252410/jordan-peele-obama-deepfake-buzzfeed
- ↑ https://www.washingtonpost.com/nation/2019/01/11/seattle-tv-station-aired-doctored-footage-trumps-oval-office-speech-employee-has-been-fired/
- ↑ https://www.vice.com/en_in/article/jgedjb/the-first-use-of-deepfakes-in-indian-election-by-bjp
- ↑ https://www.extinctionrebellion.be/en/
- ↑ https://journalism.design/les-deepfakes/extinction-rebellion-sempare-des-deepfakes/
- ↑ https://www.cnn.com/2019/06/04/politics/house-intelligence-committee-deepfakes-threats-hearing/index.html
- ↑ https://www.forbes.com/sites/jessedamiani/2019/09/03/chinese-deepfake-app-zao-goes-viral-faces-immediate-criticism-over-user-data-and-security-policy/
- ↑ https://www.theverge.com/2019/9/2/20844338/zao-deepfake-app-movie-tv-show-face-replace-privacy-policy-concerns
- ↑ https://www.independent.ie/business/technology/ahead-of-irish-and-us-elections-facebook-announces-new-measures-against-deepfake-videos-38840513.html
- ↑ https://www.theverge.com/2019/9/5/20851248/deepfakes-ai-fake-audio-phone-calls-thieves-trick-companies-stealing-money
- ↑ https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/
- ↑ https://www.wbur.org/hereandnow/2019/10/02/deepfake-technology
- ↑ https://news.berkeley.edu/2019/06/18/researchers-use-facial-quirks-to-unmask-deepfakes/
- ↑ https://deepfakedetectionchallenge.ai/
- ↑ https://www.wired.com/story/the-blockchain-solution-to-our-deepfake-problems/
- ↑ https://www.technologyreview.com/f/613690/facebook-deepfake-zuckerberg-instagram-social-media-election-video/
- ↑ https://www.vice.com/en_us/article/ywyxex/deepfake-of-mark-zuckerberg-facebook-fake-video-policy
- ↑ https://www.congress.gov/bill/115th-congress/senate-bill/3805
- ↑ https://www.congress.gov/bill/116th-congress/house-bill/3230
- ↑ http://social.techcrunch.com/2019/07/01/deepfake-revenge-porn-is-now-illegal-in-virginia/
- ↑ https://slate.com/technology/2019/07/congress-deepfake-regulation-230-2020.html
- ↑ https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB602
- ↑ https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB730
- ↑ https://www.reuters.com/article/us-china-technology/china-seeks-to-root-out-fake-news-and-deepfakes-with-new-online-content-rules-idUSKBN1Y30VU
- ↑ https://www.theguardian.com/world/2018/jun/21/call-for-upskirting-bill-to-include-deepfake-pornography-ban
- ↑ https://www.washingtonpost.com/technology/2018/12/30/fake-porn-videos-are-being-weaponized-harass-humiliate-women-everybody-is-potential-target
- ↑ https://cyber.gc.ca/sites/default/files/publications/tdp-2019-report_e.pdf see page 18
- ↑ https://election.ctvnews.ca/how-deepfakes-could-impact-the-2019-canadian-election-1.4586847
- ↑ https://mcmillan.ca/What-Can-The-Law-Do-About-Deepfake
- ↑ https://www.telegraph.co.uk/technology/2019/10/08/truth-behind-deepfake-video-bbc-ones-thriller-capture/
- ↑ https://www.irishtimes.com/culture/tv-radio-web/the-capture-a-bbc-thriller-of-surveillance-distortion-and-duplicity-1.4008823
Related Quotation
Page | Quote | Author |
---|---|---|
Matthijs Veenendaal | “Trust is a key foundation of a well-functioning society. Without reliable communication, organizations cannot operate efficiently, be they corporations or government institutions. Malicious actors are aiming to exploit vulnerabilities in communication flows. With the advent of new technology, it is possible for adversaries to impersonate leaders and create false impressions among the population. The Tallinn-based NATO Cooperative Cyber Defence Centre of Excellence will organize a session focusing on questions including: What is at stake? What can nations do to enhance and protect trust in democratic institutions? Or is it already too late?” | Matthijs Veenendaal |