With the pandemic raging and social unrest surging around the globe, wildly divergent platforms on science, human rights, immigration and more split political parties and populaces. There’s much at stake in these troubled times, and all eyes are on upcoming political elections that will determine nations’ policy directions.
But with so much at stake, people are wondering: Can we trust elections to be free and fair?
The prevalent opinion is “No.” You don’t have to look far to find evidence of why people feel this way. Social media is used to influence us, while disinformation and misinformation are being used to confuse us. According to a PBS NewsHour/NPR/Marist poll, about a third of American voters worry that they can’t spot misleading stories on social media, and they consider such misinformation the biggest threat to election security.
To understand how national elections affect digital privacy (and vice versa), Avira surveyed a representative sample of citizens in three regions with elections on the horizon: the United States, Germany, and Hong Kong. Attitudes varied from place to place, but respondents in all three regions shared one conviction: most doubted that the upcoming elections will be free and fair. The main hindrance, according to a majority of the 2,000 survey respondents, is misinformation on social media, an issue that sprang to public attention during the 2016 US elections and is still rife. Germans have the most faith in a fair democratic process, with 38% believing the elections will be free and fair, compared with 28% in Hong Kong and 24% in the US.
Small wonder that survey respondents believe their elections will be tainted: Facebook, for its part, is still yanking networks of coordinated accounts that it has linked to Iran, Russia, and election meddling. In October 2019, for example, it found a network targeting the 2020 US presidential election that appeared to be linked to the Russian troll operation known as the Internet Research Agency (IRA): the same outfit that concocted a slew of cardboard-cutout accounts to churn out political content and divisive blogs in 2016.
Below, we break down sentiments on technology’s role in the elections, digital privacy, and misinformation. To lead off, here’s a definition of misinformation and disinformation, and an explanation of why they’re a cybersecurity threat.
Misinformation vs. disinformation
The terms are often conflated, but there’s an important distinction: the presence or absence of intent to mislead. Misinformation is defined as falsehoods that are spread, regardless of intent to mislead. One example: if your unsuspecting cousin shared a fake conspiracy theory about Hillary Clinton sexually abusing children in satanic rituals in the basement of a pizzeria, he could have been a link in a chain that led to a man arming himself with an AR-15 semiautomatic rifle and driving to a Washington pizzeria, expecting to confront a child sex ring.
Disinformation, in contrast, is created and shared with the intent to mislead. The distinction between the two terms is muddied by the fact that disinformation can evolve into misinformation, depending on who’s sharing it and with what aim. For example, a political campaign can strategically spread news that it knows is fake. That’s disinformation. When people read and share false or misleading articles, photos, and memes, believing they’re true, it’s called misinformation.
Why and how is this a cybersecurity issue?
Besides an armed man showing up in real life, semiautomatic rifle in hand, to confront supposed players in an internet-spread fake conspiracy, mis/disinformation leads to myriad cyberthreats. We’ve seen political campaigns trying to sway votes, discredit opponents, and engage in voter suppression or selective get-out-the-vote campaigns. Then there are independent threat actors, unaffiliated with nation-states or campaigns, who exploit elections to make money. That’s nothing new: they latch onto any attention-grabbing headline, as when scammers tried to sell non-existent puppies during the pandemic lockdown. Similarly, we’ve seen scammers use phony political fundraising robocalls that try to trick targets into “donating” to a candidate or cause.
We’ve also seen state-sponsored actors use disinformation campaigns to conduct political sabotage. One example was a fabricated tweet, attributed to Senator Marco Rubio, claiming that a purported British spy agency planned to derail the campaigns of Republican candidates in the 2018 midterm elections. That fake tweet, believed to have been created by a Russian group, was picked up and falsely reported as real by RT, a Russian state-controlled news outlet. According to CNN Business, RT wasn’t believed to have coordinated with the group, but neither did the outlet issue a correction.
It’s inarguable: Misinformation is a clear and present danger to free and fair elections. For more on how people view this and other threats to elections, read the key takeaways from the Avira-commissioned survey.
You can also read this blog post for tips on how to identify fake news and misinformation.