How fake is your reel?

In a vulnerable information ecosystem, deepfakes threaten to further blur the line between fact and fiction.

The face of fake

Previously, computer-generated imagery was not hard to spot: pixelation and artefacts were dead giveaways. It was mostly used by movie studios and took massive computing power to create.

Now, artificial intelligence is making it cheaper and easier to create shallowfakes — less sophisticated synthetic media — and deepfakes — highly realistic photos, videos or audio clips of events that never happened. People can be shown saying or doing anything. Entire personas can be manufactured.

Dramatic advancements have put these tools right in our pockets. Mobile apps like Avatarify and Wombo can animate photos, while websites like MyHeritage bring stills of historical personalities or deceased loved ones to life.

Which video is not a deepfake?

[Interactive quiz: click on the video you think is real. Answer: they are all deepfakes.]

Sources: BuzzFeed, Ctrl Shift Face, The Guardian, Twitter

A weapon in the wrong hands

Deepfakes can have many beneficial applications. A museum in Florida created a deepfake of the painter Salvador Dali to take selfies with visitors, while an advocacy group “revived” Javier Arturo Valdez Cárdenas, a Mexican journalist who was murdered in 2017, to call for justice.

Audio deepfakes can also improve the dubbing of foreign-language films and television shows. They may even be life-transforming: a non-profit organisation, Project Revoice, uses voice-cloning deepfakes to restore voices lost to neurodegenerative disease.

But with the growing ease of making deepfakes, the potential for abuse cannot be overestimated. Anyone from lone actors to large networks can use deepfakes to damage individual reputations, disseminate revenge pornography, propagate disinformation or manipulate elections. Deepfakes may be deployed in the service of any agenda.

How deepfakes are used

The power of an image

For now, making convincing deepfakes still requires expertise, but even poor forgeries find huge audiences on social media. A crudely doctored shallowfake of US politician Nancy Pelosi, slowed down so that her speech sounded slurred, gave the impression that she was drunk or ill and drew millions of views on Facebook.

Most of us do not look closely for signs of manipulation. We treat audiovisual footage as irrefutable proof, relying on it to make judgments and inform our actions. Leaked conversations and images instantly discredit politicians, while videos of police brutality drive us to demand reform. Deepfake technology leverages this instinctive trust in the medium to trick us.

Deepfakes are eerily effective tools of deception and are quickly turning into powerful agents of social and political instability. Our capacity for constructive debate may be severely diminished as our collective grasp on reality becomes more precarious.

Answers not included

Given breakneck technological advancement, digital forgeries will only become harder to detect.

Technology firms propose safeguards, like embedding software in cameras to create video watermarks. But the threat runs deeper; deepfakes can reach and influence many in the time it takes to authenticate content.

Laws against publishing deepfakes are also hard to enforce and do not target their most pernicious effects. A year after China introduced regulations on the distribution of deepfakes, fraudsters used them to spoof government-run facial recognition systems.

Yet facial recognition systems are being introduced everywhere, from unlocking our phones to paying for our purchases. With technology mediating more of our interactions and transactions, deepfakes have a greater potential to harm even the ordinary citizen.

The dangers are imminent and there is no clear solution yet. Whatever we choose to do, it will take a concerted effort from every sector of society to stem this new agent of division.

Sources: The Guardian (1, 2), The New York Times, South China Morning Post, The Washington Post