Triumph of Simulacra – How Deepfakes Aim to Rule Our Minds

Deepfakes are famous for fake pornography and YouTube videos with dancing politicians. But how else can they challenge our society?

Menace from the 90s

According to Antispoofing Wiki, deepfake technology can be traced back to 1997, when the first digital face manipulation tool, Video Rewrite, was presented by the Interval Research Corporation. Amusingly, the first deepfake in history was political: it made JFK lip-sync the phrase “I never met Forrest Gump.”

In 2017 deepfake videos turned into a mainstream threat as their production tools became widely available to ordinary users. Someone posting under the alias “deepfakes” uploaded a few pornographic videos to Reddit. In them, the faces of several Hollywood actresses, Gal Gadot among them, were grafted onto the bodies of adult-film performers with the dark wizardry of generative deep learning. This is how the deepfake era began.

Currently, deepfakes are considered the gravest threat posed by AI and machine learning technologies. Crime Science reports that deepfakes are capable of producing devastating societal harm: from political slander and fake news to money theft via realistic impersonation.

The study also notes that deepfake technology is easy to proliferate: it can be quickly shared, sold, and copied by perpetrators, unlike physical crime tools such as guns, which require covert logistics.

So, why are deepfakes so dangerous?

Destructive Qualities of Deepfakes

Falsified media can have unpredictable consequences. For example, deepfake allegations nearly sparked an upheaval in Gabon. Senior military officers accused the president’s administration of releasing a synthesized video of the country’s leader, Ali Bongo Ondimba, who was rumored to have died after a severe illness in late 2018.

Allegedly, to avoid losing power, the officials quickly whipped up a deepfake New Year’s address to soothe the suspicious public and buy themselves some time.

Picture: Bongo’s alleged Botox use and poor health fueled deepfake speculation among his rivals

The fabricated rumors were used by members of the national guard as a pretext to seize the central radio station, where they pleaded with citizens to stop whatever they were doing and flood the streets in righteous anger. However, the coup d’état failed.

Audio deepfakes are an equally serious threat. In the UAE, a massive heist was orchestrated with the help of a voice-cloning tool: fraudsters mimicked a company director’s voice and successfully requested a $35 million transfer from a Hong Kong bank.

The pressing issue of deepfakes has spurred regional and international alarm. For instance, the European Parliament published a study, Tackling Deepfakes in European Policy. Among the risk categories it attributes to the technology are bullying, extortion, identity theft, and election and stock-price manipulation.

However, two of the most destructive properties of deepfakes are the liar’s dividend and reality apathy. While some are paranoid that one day they will be targeted by the odious technology and discredited beyond repair, others can rejoice: deepfakes will finally allow them to refute any compromising materials.

The liar’s dividend can have a scarily damaging impact on our society. The “don’t believe what you see” paradigm can actually help unscrupulous politicians and public figures wiggle out of a scandal.

Even though the authenticity of a video or audio clip can be checked by technical means, such as double compression analysis, regular observers are often distrustful of the expert verdict. It’s always easy to dismiss something you don’t really comprehend.
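
To make the idea less abstract, here is a minimal sketch of Error Level Analysis (ELA), a simple relative of the double-compression checks mentioned above. It assumes Pillow is installed; the file names are hypothetical, and this is an illustration of the concept rather than a production forensic tool.

```python
# Minimal Error Level Analysis (ELA) sketch: re-save a JPEG at a known quality
# and look at how much each region changes. Regions that were pasted in or
# edited often re-compress differently from the rest of the image.
from io import BytesIO
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    # Re-compress the image in memory at a fixed JPEG quality.
    buffer = BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    # Pixel-wise difference: brighter areas re-compressed less cleanly
    # and deserve a closer manual look.
    diff = ImageChops.difference(original, resaved)
    # Stretch the (usually faint) differences so they are visible.
    extrema = diff.getextrema()
    max_diff = max(channel_max for _, channel_max in extrema) or 1
    scale = 255.0 / max_diff
    return diff.point(lambda value: value * scale)

# Usage (hypothetical file name):
# error_level_analysis("suspect_frame.jpg").save("ela_map.png")
```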

If the liar’s dividend is Phobos, then reality apathy is Deimos in this duet. Unable to trust their own senses, people may ignore genuinely important material. As long as there is no reliable, trustworthy, and universally available way to tell a fake from bona fide media, deception will prevail over common sense.

A Challenger Appears

Deepfakes aren’t alone: they have a sibling called the “cheapfake”. Cheapfakes are a type of falsified media that is easy, cheap, and quick to produce. Con artists don’t even need to operate neural networks to make them.

They can churn out cheapfakes in gargantuan amounts with simple editing tools: Movie Maker, Adobe Premiere/Audition, and of course Photoshop. The famous “drunken Pelosi hoax” is a textbook cheapfake: it was produced by simply slowing down the playback speed of the original video, making the target appear intoxicated (the sketch below shows how trivial that is).
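
As a rough illustration of how little effort such a cheapfake takes, here is a short sketch that merely lowers a video’s declared frame rate so playback is stretched. It assumes OpenCV is installed, the file names are hypothetical, and audio handling is omitted entirely.

```python
# Sketch of a "slowed-down" cheapfake: frames are copied untouched, only the
# declared frame rate is lowered, so playback drags and speech sounds slurred.
import cv2

SLOWDOWN = 0.75  # play back at 75% of the original speed

reader = cv2.VideoCapture("original_speech.mp4")
fps = reader.get(cv2.CAP_PROP_FPS)
width = int(reader.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT))

writer = cv2.VideoWriter(
    "slowed_speech.mp4",
    cv2.VideoWriter_fourcc(*"mp4v"),
    fps * SLOWDOWN,  # lower frame rate = slower playback
    (width, height),
)

while True:
    ok, frame = reader.read()
    if not ok:
        break
    writer.write(frame)

reader.release()
writer.release()
```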

Cheapfakes didn’t get their moniker for nothing. They are cheap indeed, and quite easy to spot too, as in the Pelosi hoax. However, in areas where technological literacy leaves a lot to be desired, cheapfakes can lead to tragic events.

In 2018 a series of cheapfakes began circulating in Indian WhatsApp group chats. They showed motorcycle riders “kidnapping” children for organ harvesting and were accompanied by gruesome footage of dead children “killed by the harvesters”.

The clips promptly stirred panic and paranoia in villages of Karnataka, Maharashtra, and other Indian states. Villagers assembled into lynch mobs and attacked random outsiders, tourists, and bikers; at least 20 innocent people were killed as a result of the hoax.

In reality, the cheapfake relied on recontextualization, presenting unrelated footage in a completely different light. The images of the dead children had been captured a few years before the hysteria to document war casualties among kids, while the “bike-riding kidnappers” clip was lifted from a public service announcement warning parents how easy it is to abduct a child.

Countermeasures

Experts indicate that lack of awareness and technological illiteracy were the two main factors behind the mass lynchings in India. Another vital factor is that social media and messenger apps are ideal channels for deepfakes and cheapfakes to proliferate, which makes them spread like a viral disease.

Right now, there are just a handful of methods to neutralize false media. First, researchers recommend paying attention to visual clues: unnatural facial feature alignment, odd complexion, posture, gestures, and lip movement/voice mismatches. Artifacts (such as distortion or blur) can also be spotted in areas where one body part transitions into another: the neck, elbows, wrists, and so on (a toy sharpness check is sketched below).
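
One of these visual clues, local blur, can be roughly quantified. The sketch below measures patch-wise sharpness with the variance of the Laplacian; it assumes OpenCV and NumPy are installed, the file name is hypothetical, and it is a heuristic for guiding a manual look, not a real deepfake detector (those rely on trained models).

```python
# Toy "visual clue" check: blended regions in crude face swaps are often
# smoother/blurrier than the rest of the frame, so low Laplacian variance
# in a patch flags it for closer inspection.
import cv2
import numpy as np

def sharpness_map(image_path: str, patch: int = 64) -> np.ndarray:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    laplacian = cv2.Laplacian(gray, cv2.CV_64F)
    rows = gray.shape[0] // patch
    cols = gray.shape[1] // patch
    scores = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            block = laplacian[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch]
            scores[r, c] = block.var()  # low variance = suspiciously smooth patch
    return scores

# Usage: patches scoring far below the frame's median are candidates for a
# closer manual look (e.g., around the neck or the edges of the face).
# print(sharpness_map("suspect_frame.jpg"))
```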

Second, we should mention the Content Authenticity Initiative (CAI) launched by Adobe. The initiative seeks to establish standards and introduce a universal platform that protects original media content from malicious tampering. This is achieved by attaching tamper-evident metadata: data that reveals who produced the content, where, and when (a simplified sketch of the idea follows).
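
The sketch below shows the general provenance idea in miniature: bind “who/where/when” metadata to the media bytes with a digital signature so that tampering with either the file or its metadata becomes detectable. It is not the actual CAI/C2PA manifest format; the field names are illustrative, and it assumes the Python cryptography package is installed.

```python
# Simplified provenance sketch: hash the media, sign the claim, verify later.
import json
import hashlib
from datetime import datetime, timezone
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_provenance(media_bytes: bytes, author: str, key: Ed25519PrivateKey) -> dict:
    # The claim records who made the content, when, and a hash of its bytes.
    claim = {
        "author": author,
        "created": datetime.now(timezone.utc).isoformat(),
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}

def verify_provenance(media_bytes: bytes, record: dict, public_key: Ed25519PublicKey) -> bool:
    claim = record["claim"]
    # Re-hash the media: a single changed byte breaks the recorded hash.
    if hashlib.sha256(media_bytes).hexdigest() != claim["media_sha256"]:
        return False
    payload = json.dumps(claim, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False

# Usage sketch (hypothetical file name):
# key = Ed25519PrivateKey.generate()
# record = sign_provenance(open("clip.mp4", "rb").read(), "Example Newsroom", key)
# verify_provenance(open("clip.mp4", "rb").read(), record, key.public_key())
```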

But of course these countermeasures won’t work on their own. They need strong support from educators around the world, from schools to communities in less developed regions. Ignorance is a breeding ground for many negative phenomena, and deepfakes are one of them.