How deepfakes threaten democracy — and what you can do to help

On July 26, Elon Musk uploaded a video to the social media platform X (formerly known as Twitter), narrated by what appeared to be Vice President Kamala Harris. Soon, however, it became clear that it wasn’t her.

In the video, her sound-alike claims that she doesn’t “know the first thing about running the country” and that she is the “ultimate diversity hire.” The video violated X’s own policy on synthetic and manipulated media but, of course, the post stayed up.

We live in a world where synthetic media often looks and sounds virtually identical to the real thing. Around the world, we’re already seeing deepfakes contribute to scams, extortion, and sexual harassment. Will they threaten democracy as well?

With the US election less than one hundred days away, and 2024 set to be the biggest election year in modern history, it’s crunch time to ensure that electoral processes around the globe are as strong as possible. Deepfakes may be one of the biggest threats we face — if not now, then soon. How might this happen? Here are three chilling possibilities:

  1. Proliferating political disinformation

In 2016, we saw political disinformation threaten to influence the outcome of the US presidential election. Multiple well-resourced actors, from politicians to industry front groups to foreign state actors, may have incentives to disseminate deepfakes depicting political candidates doing and saying things that never happened.

This might just take the form of parody and personal attacks, as already demonstrated by the deepfake of Kamala Harris that Elon Musk shared on X. But more perniciously, it could be part of a deliberate attempt to sow false beliefs about candidates among the electorate, weakening our democracy and leaving everyone less informed.

  2. Undermining trust in legitimate information

You may also remember that, in 2016, a recording emerged in which Donald Trump could be heard speaking about women in a derogatory, predatory way. Within days of the release of the “Access Hollywood tape,” Trump admitted it was genuine, saying that he was “not proud” of what he had said.

Had that recording emerged in 2024, how would the former president have responded? It’s possible that he would simply have denied it, claiming the audio was a deepfake created by his political opponents.

Lifelike synthetic media erodes our shared sense of trust from two directions. On the one hand, it threatens to make us believe that fabricated audio and video depict actual events. On the other hand, we may lose our faith in real audio and video, knowing that it could easily have been generated by bad actors for nefarious purposes.

  3. Disrupting election day proceedings

Imagine it’s 7 a.m. on Election Day, and you see a post on X in which your local newspaper, mayor, or election official announces that the polls will stay open three hours later than planned. Or that polling places have moved. Or that there is an active shooter in the vicinity.

Or imagine it’s the following day, and an anonymous social media account posts surveillance footage that appears to show hundreds of ballots being discarded at a key polling location in a swing state.

Bad-faith political operatives know that manipulating voter turnout, or eroding trust in and knowledge of electoral processes, is one of the most effective ways to sway election results. Deepfakes are a new weapon in that arsenal.

What can be done?

The main way that all of us encounter digital media — real and synthetic — is through social platforms like Facebook, Instagram, X, and TikTok. These platforms have a responsibility to protect users from deception in the form of deepfake videos, images, and audio. As the US election draws closer, these platforms must do more to ensure that they have robust detection and moderation systems, that AI-generated content is clearly labeled, and that civil society has insight into the decisions they make to protect democracy.

But if there’s one thing that Elon Musk’s post demonstrates, it’s that these companies aren’t doing enough.

That’s why The Midas Project is proud to join a coalition of partner organizations calling on tech companies to protect their users from deceptive or misleading synthetic media. To add your name, click the link below.