As of 2022, deepfakes (synthetic photos, videos, text, or audio generated by artificial intelligence) pose a potentially significant threat to governments, financial institutions, social media users, and consumers. Criminals could exploit the technology maliciously by impersonating political leaders, posing as executives to steal a company's intellectual property, or impersonating customers to commit fraud or extortion.
According to open-source research, the term "deepfake" originated on Reddit in 2017, when a user created a subreddit dedicated to manipulating footage from pornographic videos, swapping out faces with open-source face-swapping software.
Since 2017, deepfake technology has evolved, producing viral video deepfakes of celebrities like Tom Cruise and politicians like former President Richard Nixon saying or doing things they never did. Those videos, however, posed no real threat and were readily identified as inauthentic.
In 2018, the people of Gabon, a small country in Central Africa, had not seen or heard from President Ali Bongo for several months. The Gabonese government told the public that Bongo was recovering from a stroke. On New Year's, a video of Bongo was released, but many viewers suspected something was off with the footage, calling it a deepfake. About a week after the video surfaced, some members of the military used it as a pretext to launch a coup in Gabon, which failed. Subsequent analysis never conclusively determined whether the video was, in fact, a deepfake.
Before the 2020 U.S. Presidential Election, lawmakers warned that deepfakes could be used to disrupt the race. Rep. Adam Schiff (D-CA) said that deepfakes have "the capacity to disrupt entire campaigns, including that for the presidency." Schiff's comments followed a viral video of House Speaker Nancy Pelosi (D-CA) appearing to slur her speech. The video of Pelosi was not made with deepfake software; it was genuine footage slowed down to make her appear intoxicated. The video, which received over two million views on Facebook, had users on the platform questioning the House Speaker's cognitive ability and health.
The Private Sector
In 2019, an energy company in the U.K. received a call from someone its staff believed was the CEO of its German parent company, requesting a $243,000 wire transfer that would supposedly be reimbursed. The caller used deepfake audio technology to impersonate the German executive, so no suspicions were raised on the U.K. side. The money was sent, and the criminals quickly moved the funds to other accounts.
In 2020, a manager at a bank in Hong Kong was duped by the same deepfake audio technique. In this case, the caller asked for $35 million, claiming the company was moving forward with an acquisition. Because email correspondence appeared to corroborate the request, the manager made the transfer.
In addition, in 2020, Jacques Maurico Anderson, a North Carolina resident, pleaded guilty to conspiracy to commit bank fraud using synthetic identities. According to the press release from the Department of Justice, Anderson purchased synthetic identities from a seller on Craigslist, then exploited the fictitious identities to falsify his income, trick lenders into extending him credit, and make multiple major purchases.
Deepfakes Targeting Social Media Users and Consumers
Through our research, analysts identified a Facebook user who said that they were a target of a deepfake for extortion. On February 22, 2022, the Facebook user posted, “Friends, Please be aware that I have been the victim of a DeepFake attack. Meaning I’m being blackmailed/extorted to give this hacker money, or he/she will release a very disturbing video with my face and background of someone doing an obscene act. I ASSURE YOU THIS IS NOT ME. If you receive a link on one of your posts or a message, please delete and block the sender.”
We traced the poster's digital footprint and discovered that he works in the insurance industry, suggesting that criminals use deepfakes to extort ordinary professionals, not just CEOs and executives.
In November 2021, Daniel Higgins, a Florida resident, couldn't log into his Instagram account because his password had been changed without his authorization. When he went to his Instagram profile, he found a fake video that looked and sounded just like him, telling his followers to buy Bitcoin. In the deepfake video, the impostor Higgins said, "I just invested $300 into Bitcoin and got $10,000 back. Gotta try it."
Deepfake profiles are also being used on social media to push propaganda messages in favor of the Chinese Communist Party. According to The Centre for Information Resilience (CIR), “The coordinated influence operation on Twitter, Facebook, and YouTube uses a mix of artificial and repurposed accounts to push pro-China narratives and distort perceptions on important issues. The narratives amplified by the accounts are similar to those promoted by Chinese Government officials and China state-linked media.”
How Easy Is It to Create a Synthetic Identity Online?
Creating a synthetic identity online is not as difficult as one might think. Websites like https://thispersondoesnotexist.com/ serve artificially generated images of human faces on demand.
In a sample image generated by the site, there are inconsistencies we can spot:
- The pink color in the hair doesn’t look natural
- Her earring doesn’t look right
- The teeth don’t look natural
Thispersondoesnotexist.com has been used to create fake LinkedIn profile photos, which attackers could pair with an infected file or malicious link delivered in a LinkedIn message.
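The visual tells listed above can, in principle, also be checked programmatically. One heuristic explored in academic research is frequency analysis: the upsampling layers in GAN image generators often leave periodic artifacts that show up as excess high-frequency energy in an image's spectrum. The sketch below is a minimal illustration of that idea in Python with NumPy, not a production detector; the function name, cutoff value, and toy test patterns are our own assumptions.

```python
# Illustrative sketch only: measure what fraction of an image's spectral
# energy lies outside a low-frequency region. GAN upsampling artifacts
# tend to inflate this ratio relative to natural, smooth imagery.
import numpy as np

def high_freq_energy_ratio(gray: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a low-frequency disc.

    `gray` is a 2-D array of pixel intensities; `cutoff` is the radius of
    the low-frequency disc as a fraction of the image's half-size.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    low = r <= cutoff * min(h, w) / 2
    total = spectrum.sum()
    return float(spectrum[~low].sum() / total) if total > 0 else 0.0

# A smooth gradient concentrates energy at low frequencies; a checkerboard
# (a crude stand-in for periodic upsampling artifacts) pushes energy outward.
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
checker = np.indices((64, 64)).sum(axis=0) % 2
```

On these toy inputs the checkerboard's energy sits far from the spectrum's center, so its ratio comes out much higher than the smooth gradient's. Real detectors layer many such signals, usually feeding them into trained classifiers rather than relying on a single threshold.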
Deepfakes will continue as a tactic of criminals and entities looking to influence events and opinions maliciously. Reliable deepfake detectors remain under development at companies like Meta (Facebook), Twitter, and YouTube, but no clearly successful algorithm or technology has yet emerged.
The continued desire to be first on news stories, without a thorough evaluation of the content, will make it far more likely that deepfakes circulate undetected. Without careful diligence and analysis, lies, false accusations, and careless defamation will spread, with significant consequences.