
Protecting democracy in the age of deepfakes

By Rana Ali Adeeb

Source: Media Relations

This article was originally published in The Gazette

Emotionally intelligent algorithms and blockchain technology offer hope, but the solution ultimately lies in a combination of technology, education and regulation.

As we approach the U.S. election, a new and dangerous wave of AI-generated misinformation is sweeping across the digital landscape, raising the stakes higher than ever before.



In an age where information shapes public opinion, one question reigns supreme: Can we trust the information shaping our reality?

Misinformation, ranging from fake news headlines (such as a viral Facebook post accusing a Haitian immigrant of stealing and eating a neighbour’s daughter’s friend’s cat) to deepfakes (such as those of Elon Musk in fraudulent cryptocurrency giveaways), has the potential to sow confusion, deepen polarization and undermine the very foundations of democracy.

What is truly insidious about fake news and deepfakes is their exploitation of a key vulnerability in human psychology: people’s emotions. Studies show that when people are emotionally charged — whether positively or negatively — they are more likely to share content without critically evaluating it.

According to a 2019 study, during the 2016 U.S. election, 8.5 per cent of Facebook users shared at least one piece of fake news. Deepfakes, which manipulate the likeness of real people with disturbing accuracy, take this a step further by blurring the line between truth and fiction.

Imagine a viral video of a public figure giving a divisive speech that later turns out to be fake. By the time the truth comes to light, the damage is done — the emotional response has already entrenched divisions, misled the public, or sparked support for a false cause.

According to Forbes, more than half a million deepfake videos were in circulation on social media in 2023 — a testament to the fact that platforms are struggling to detect fake content fast enough to prevent its viral spread.

The rapid pace of social media consumption exacerbates this issue: the interactive nature of the platforms themselves accelerates the speed at which deepfakes are viewed and shared by users in near real time.

As deepfakes become more sophisticated, they will inevitably become more difficult to detect and control, and falsehoods will continue to spread faster than corrections can be made.

So, what can we do to protect ourselves from the growing threat of deepfakes?

One promising solution lies in emotionally intelligent algorithms — AI systems designed to detect and down-rank manipulative content. These systems would learn to flag content aimed at misleading or emotionally manipulating users before it goes viral. While platforms like Facebook and X are making strides in this direction, the technology still lags behind the rapidly evolving world of deepfakes. What is needed are AI systems that can work in real-time, learning from user engagement patterns and detecting deepfakes as soon as they appear.
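To make the idea concrete, here is a deliberately toy sketch of emotion-aware down-ranking: score a post against a small lexicon of emotionally charged words and flag it for review when the score crosses a threshold. The lexicon, threshold and scoring rule are illustrative assumptions only; real platform systems use far more sophisticated machine-learning models.

```python
# Toy sketch of emotion-aware content flagging. The word list, scoring
# rule, and threshold are illustrative assumptions, not any platform's
# actual system.

CHARGED_WORDS = {
    "outrage", "shocking", "disgusting", "traitor", "destroy",
    "scandal", "evil", "betrayal", "fury", "terrifying",
}

def emotional_charge(text: str) -> float:
    """Fraction of words in the text found in the charged-word lexicon."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in CHARGED_WORDS)
    return hits / len(words)

def should_downrank(text: str, threshold: float = 0.2) -> bool:
    """Flag content whose emotional charge meets or exceeds the threshold."""
    return emotional_charge(text) >= threshold

print(should_downrank("Shocking scandal! Traitor caught in disgusting betrayal!"))  # True
print(should_downrank("The committee will meet on Tuesday to review the budget."))  # False
```

A production system would of course learn these signals from engagement data rather than a fixed word list, but the flag-before-it-goes-viral logic is the same.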

Another approach is blockchain technology, which could provide a way to verify the authenticity of videos and images by creating an immutable record of their origins. Platforms could use this technology to ensure that users can trace content back to its source. While still under development, blockchain verification could play a role in distinguishing real from AI-generated deepfakes.
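The core mechanism can be sketched in a few lines: record a cryptographic fingerprint of each piece of media in an append-only, hash-chained ledger, then check later uploads against it. This is a minimal illustration assuming a single local ledger, not a real distributed blockchain network; the class and record fields are hypothetical.

```python
import hashlib
import json

# Minimal sketch of hash-chained provenance records. Each record stores
# the SHA-256 fingerprint of a media file plus the hash of the previous
# record, so tampering with either the content or the history is
# detectable. A real system would distribute this ledger across nodes.

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ProvenanceLedger:
    def __init__(self):
        self.records = []

    def register(self, content: bytes, source: str) -> dict:
        """Append a record linking a content fingerprint to its source."""
        prev_hash = self.records[-1]["record_hash"] if self.records else "0" * 64
        body = {
            "content_hash": sha256(content),
            "source": source,
            "prev_hash": prev_hash,
        }
        body["record_hash"] = sha256(json.dumps(body, sort_keys=True).encode())
        self.records.append(body)
        return body

    def verify(self, content: bytes):
        """Return the registered source if the fingerprint is on the ledger, else None."""
        fingerprint = sha256(content)
        for record in self.records:
            if record["content_hash"] == fingerprint:
                return record["source"]
        return None

ledger = ProvenanceLedger()
video = b"...original video bytes..."
ledger.register(video, source="Campaign press office")

print(ledger.verify(video))                    # "Campaign press office"
print(ledger.verify(b"...altered video..."))   # None
```

Because any edit to the media changes its fingerprint, an altered copy simply fails to match the registered record, which is what lets a platform distinguish the original from a manipulated version.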

Finally, stronger regulations and policies need to be enforced, especially when it comes to creating and distributing deepfakes. California laws banning deepfakes intended to deceive voters during elections are a good start, but we need comprehensive, global legislation to truly address the issue. One measure could be mandating watermarks or digital signatures on AI-generated content to distinguish real from fake.
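The digital-signature idea above can be sketched with a keyed tag attached to generated content. This uses HMAC purely for illustration; the key name and tag format are hypothetical, and real schemes would rely on public-key signatures or watermarks embedded robustly in the media itself.

```python
import hmac
import hashlib

# Illustrative sketch: an AI provider signs its output so anyone holding
# the verification key can check whether content was machine-generated.
# HMAC with a shared secret stands in for a real public-key signature.

GENERATOR_KEY = b"example-secret-key"  # hypothetical key held by the AI provider

def sign_output(content: bytes) -> str:
    """Produce a tag attesting that the content came from this generator."""
    return hmac.new(GENERATOR_KEY, content, hashlib.sha256).hexdigest()

def is_ai_generated(content: bytes, tag: str) -> bool:
    """Check whether the tag matches, i.e. the content was signed as AI-generated."""
    return hmac.compare_digest(sign_output(content), tag)

image = b"...generated image bytes..."
tag = sign_output(image)
print(is_ai_generated(image, tag))            # True
print(is_ai_generated(b"edited bytes", tag))  # False
```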

Deepfakes pose a real threat to democratic processes. Emotionally intelligent algorithms and blockchain technology offer hope, but the solution ultimately lies in a combination of technology, education and regulation.

Nobel Laureate Maria Ressa’s warning about the erosion of trust in media and institutions feels especially urgent. As she aptly stated, “Without facts, you can’t have truth. Without truth, you can’t have trust. Without trust, we have no shared reality, no democracy, and it becomes impossible to deal with our world’s existential problems.”

We must all remain vigilant about the content we consume and act now to safeguard our democratic systems.

Rana Ali Adeeb is a doctoral candidate and a 2024-2025 public scholar at Concordia University.





© Concordia University