Deepfakes explained

One view of this video makes the shocking new reality of the deepfake threat clear: video manipulated through AI to create eerily realistic, yet falsified, footage of what appears to be a public figure, which could cause mayhem not just in the media, but in our acceptance of what is true. The “truth” has been dying or dead for some time, with the consensus arguably pointing to 2016 as ground zero. Yet with the rapid spread of deepfake videos on the Internet, even the video content we once took at face value may have us disbelieving our lying eyes. And while it might be fun to share a fake video of some celebrity, matters devolve drastically when illiberal governments, echoing Yuval Noah Harari’s prescient warning about AI enabling dictatorships, use these kinds of videos to spread misinformation.

Post-truth techniques

But first let’s define what a deepfake is. Wikipedia states that a “deepfake (a portmanteau of ‘deep learning’ and ‘fake’) is a technique for human image synthesis based on artificial intelligence. It is used to combine and superimpose existing images and videos onto source images or videos using a machine learning technique known as generative adversarial network.”
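The adversarial idea behind that definition can be illustrated with a drastically simplified toy loop. This is not a real GAN (there are no neural networks here, and the “data” are just numbers near 4.0, an arbitrary choice for illustration); it only sketches the push-and-pull dynamic in which a generator learns to produce output the discriminator can no longer tell apart from real samples:

```python
import random

# Toy adversarial loop (illustrative only, not a real GAN).
# "Real" data are noisy samples near 4.0. The generator holds one
# parameter it nudges toward fooling the discriminator; the
# discriminator keeps a running estimate of where real data lie.

random.seed(0)

def real_sample():
    # Stand-in for real training data (e.g. frames of a person's face).
    return 4.0 + random.gauss(0, 0.1)

gen_param = 0.0    # generator's current guess at the real distribution
disc_center = 0.0  # discriminator's estimate of what "real" looks like

for step in range(2000):
    real = real_sample()
    fake = gen_param + random.gauss(0, 0.1)  # generator's output

    # Discriminator update: track the real data more closely.
    disc_center += 0.05 * (real - disc_center)

    # Generator update: move output toward what the discriminator
    # currently accepts as real.
    gen_param += 0.05 * (disc_center - gen_param)

print(gen_param)  # converges near 4.0 as fakes become indistinguishable
```

In an actual GAN both players are deep networks trained by gradient descent on image data, which is why, as the definition notes, the technique can superimpose one person's likeness onto existing footage convincingly.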

Sadly, like many toxic nuisances that sprouted in the dark corners of the Internet and became popular, deepfakes originated on Reddit with hijacked pornographic videos. From there the technology has swiftly evolved. The Internet has been obsessed with superimposing celebrity faces onto other people, especially in potentially embarrassing situations. What people do in the privacy of their homes and what media they consume is their own business, yet these fake videos spreading throughout cyberspace will pose a significant challenge when they are used to spread disinformation.

The most popular videos are of Donald Trump, not simply because his behavior dominates the media, but because there is so much video of him to alter. As this article on how deepfakes are created states, “there are several different ways to make deepfake videos, but they all require data to feed machine learning models that will generate your fake content. Currently, everyone from researchers to those with malicious intent are making deepfakes. The most convincing ones require powerful computer rigs and deep datasets with plenty of audio, video, or pictures for the subject of your deepfake.”

Popular beginnings

Snapchat introduced face filters a few years ago, and they have since spread across several popular social media platforms, accustoming us to the notion that our images can be altered. Even if it is all in good fun, the same idea can be turned to nefarious purposes. Like the spectre of AI getting out of control, the technology used to create these fake videos is outside government supervision and widely available. The damage that this unfettered technology poses is so worrisome that researchers are working to find ways to combat deepfakes before the 2020 presidential election.

How could this impact politics and security?

Nazi propaganda minister Joseph Goebbels would have loved deepfake technology and this post-truth world we live in. We shudder to think what a brutal dictatorship in a country such as Syria or North Korea might do with this evolving technology. While the deepfake trend is still at a nascent stage, the AI powering it will improve, and the videos will become more convincing. We can envision a scenario where a deepfake video surfaces in a region where whatever the Internet says is treated as “true,” such as the Middle East. Something similar could occur with a video of a presidential candidate appearing to say or do something that puts them in a compromising position. Or worse, if a real video of something controversial is released in the media, people would not believe it, claiming it is a deepfake.

A recent example is how a government-backed clergyman in Myanmar whipped up sentiment against the Rohingya, an oppressed ethnic minority, via Facebook, spreading lies that were not refuted in time to stop the violence. Deepfakes are just another tool in the arsenal of those looking to manipulate reality in order to push a political agenda.

We are not helpless when it comes to the problem of deepfakes spreading on the Internet and distorting the truth. There have been active voices in Congress working to find a way to stop this spread of misinformation, and tech giants such as Google are combating deepfakes with their own research. If you see a video on YouTube or one of the social networks that you suspect is a deepfake, first, don’t share it (as tempting as that may be if you agree with its message), and make sure to flag the content. Our role as citizens in stopping the spread of lies has thrust us into a position of individual responsibility; that’s just the world we live in today.

Human nature being what it is, we find both the good and the bad in technology. We like to believe that all new technology is a net positive, but what technology does is amplify our behavior. In this light, new applications of technology such as deepfakes have the capability to erode democracy on a global scale, one video at a time.