There is something far more terrifying than “fake news.” It’s known as the “deepfake.” It takes fake news one step further and can destroy a person’s life, livelihood, and reputation.
A deepfake is a phony video. It uses artificial intelligence (AI) to make a person appear to do or say something he never did. We’re seeing more and more deepfakes on social media, and while they can be amusing and used in good fun, they can also wreak havoc. Just imagine a deepfake of an influential figure saying or doing something controversial going viral.
Last April, BuzzFeed teamed up with film director Jordan Peele to produce a video of former President Obama opining on the Black Panthers and calling President Trump a derogatory name – except that Obama said none of what the video showed him saying. The entire video was fake. Peele ventriloquized Obama’s voice and used an AI tool called FakeApp, along with Adobe After Effects, to create a video mimicking Obama.
BuzzFeed and Peele made the video to warn people about the dangers of deepfakes, and it was, indeed, a disconcerting demonstration of AI’s power to generate false information.
Not long ago, a small political party in Belgium made a deepfake of its own: a video of President Trump encouraging Belgium to withdraw from the Paris Climate Accord. The video was met with anger and outrage – except that it was a fake. The party was forced to explain that it was a spoof intended to encourage Flemish voters to sign a petition urging the Belgian government to take more urgent climate action.
Until recently, deepfakes were largely limited to the AI research community and the film industry. Thanks to the release of an app called FakeApp, however, anyone with Internet access and images of a person’s face can now create a deepfake.
It’s not so simple, of course. Creating a truly convincing deepfake currently requires expertise and data. But that can easily change, and many experts are concerned that soon anyone will be able to make it appear that something happened when it didn’t.
What if someone were to release a deepfake of an influential figure saying something xenophobic? The reaction would likely be swift and fierce. The individual would lose his credibility, reputation, and perhaps even his job – unless he could prove it wasn’t him. But as deepfakes become more advanced, it will become increasingly difficult for deepfake victims to do so.
Deepfakes in court cases may lead to even more complications. Video is among the most persuasive forms of evidence, but without a reliable way to verify a video’s authenticity, due process can be compromised. Technologists are working on methods to detect deepfakes, but relying on forensic detection alone is not a viable long-term solution: a simple change to the generation algorithm can defeat an existing detector.
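To see why forensic detection is so brittle, consider one widely reported early heuristic: real people blink every few seconds, while early deepfakes often barely blinked, because training photos mostly showed open eyes. The toy sketch below is illustrative only – the function names, threshold values, and synthetic “clips” are assumptions for demonstration, not a real detector.

```python
# Toy illustration of a blink-rate heuristic for spotting early deepfakes.
# All thresholds and the synthetic clip data are illustrative assumptions.

def count_blinks(eye_openness, closed_below=0.2):
    """Count blinks in a list of per-frame eye-openness scores (0=closed, 1=open)."""
    blinks, was_closed = 0, False
    for score in eye_openness:
        is_closed = score < closed_below
        if is_closed and not was_closed:  # count each open -> closed transition once
            blinks += 1
        was_closed = is_closed
    return blinks

def looks_suspicious(eye_openness, fps=30, min_blinks_per_minute=5):
    """Flag a clip whose blink rate is implausibly low for a real person."""
    minutes = len(eye_openness) / (fps * 60)
    return count_blinks(eye_openness) / minutes < min_blinks_per_minute

# Synthetic one-minute clips at 30 fps:
real_clip = ([1.0] * 85 + [0.05] * 5) * 20  # eyes mostly open, 20 brief closures
fake_clip = [1.0] * 1800                    # eyes open in every frame, no blinks

# looks_suspicious(real_clip) -> False; looks_suspicious(fake_clip) -> True
```

The fragility is the point: once this heuristic was published, generators simply started adding realistic blinks, and the check stopped working. Any fixed forensic signal invites the same arms race.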
Deepfakes are especially problematic in light of a recent poll, conducted by Common Sense and SurveyMonkey, which examined the news sources of 1,005 adolescents aged 13 to 17. The poll found that 54 percent of them received news at least a few times a week from social media platforms such as Instagram, Facebook, and Twitter, and 50 percent got news from YouTube. Of those who get their news from YouTube, 60 percent were more likely to get it from celebrities and influencers – the very figures deepfake creators can impersonate to mislead the public.
Unfortunately, little can currently be done to prevent deepfakes from proliferating. Members of Congress are mulling legislation, but ensuring that whatever legislation passes doesn’t infringe on free speech or chill public discourse is a challenge.
As the 2020 presidential election cycle intensifies, being aware of deepfakes is essential. Before taking something at face value, ensure that its source is credible. Just because something looks real doesn’t mean it really is.