Apr 16, 2024
2024 is a big year for elections. That helps explain why, according to the World Economic Forum, misinformation and disinformation rank among the year's top risks. AI is only serving to exacerbate that danger.
2024 is a big year for elections. Dozens of parliamentary and presidential elections are taking place, including in the United States, the United Kingdom, India, Indonesia, and Mexico, and the threat of cyberattacks and artificial intelligence-driven disinformation has never been higher.
According to the World Economic Forum’s latest Global Risks Report, based on its Global Risks Perception Survey, misinformation and disinformation rank among the top global risks, and the trend of seeking to influence voters ahead of elections and to undermine the legitimacy of the results will doubtless continue and intensify.
The expected surge of deepfakes
As the technology behind deepfake audio and video production matures and becomes more widely available, we should expect to see more examples spreading over social media in the coming months. Recent examples include the robocall that used an AI-generated imitation of Joe Biden’s voice to discourage voters from voting in the New Hampshire primary in January 2024. Similar techniques were put to even more malicious use in Slovakia and the U.K.
During the Slovakian election campaign in September 2023, an audio clip circulated that purported to capture Michal Simecka, leader of the liberal Progressive Slovakia party, discussing how to rig the election. A second clip used Simecka’s voice to spread the false claim that he planned to double the price of beer in the country if he won. The identity of the attacker was never established, and the pro-Russia candidate, former Prime Minister Robert Fico, won the election.
Last October, the leader of the U.K.’s Labour Party, Sir Keir Starmer, was the victim of an audio deepfake released to coincide with the first day of the party’s annual conference. The clip appeared to capture him swearing at staffers. The situation was exacerbated by X, formerly Twitter, refusing to take the clip down because the Labour Party was unable to provide sufficient evidence that it was fake.
Audio deepfakes seem more likely to be troublesome than video, which, at least in the short term, may be easier to spot as manipulated. To date, deepfakes have targeted public figures, but in the future they could target election workers in highly contested districts, where fakes may be harder to dismiss quickly. Counterfeit websites built to support the fabricated claims and spread further disinformation can be hosted and distributed more quickly than ever, amplifying deepfakes’ reach.
The Associated Press recently shared a guide on how to spot deepfakes, which all of us should study.
Politicians and experts around the world, but especially in the U.K. and U.S., where concerns about election manipulation are at their highest, have been calling for regulations to stop the creation and spread of deepfakes. A letter signed by hundreds of leaders in the AI community in February of this year called for criminal penalties for those who create and spread damaging deepfake content. However, even if new rules were implemented in time for this year’s elections, there is little confidence they would make much difference.
Threat of hacking
Cyberattacks remain a possibility, and politicians, their families, staffers, and party officials have likely been targeted on an ongoing basis over the last few years. The absence of reports of information stolen from personal and work devices does not mean it has not happened; attackers may wait for the most opportune moment in the election cycle to leak what they have. Compromises that have gone undetected for months or years could still result in damaging leaks.
While much of the focus on election interference in the 2020 U.S. elections was on Russia, and will likely remain on Russia for the 2024 election, other countries, political groups, and individuals may also be motivated to use their resources to influence voters or disrupt the process. Recent reports detail how China interfered in Canada’s 2019 and 2021 federal elections, and there is evidence of China’s intent to interfere in the 2024 U.S. election. Both China and Russia are adept at long-term hacking campaigns.
Other state actors, including Iran and North Korea, may also seek to interfere with elections around the world, but some hacking may come from within the countries themselves as partisans seek to disrupt the opposition. Several countries will hold elections that are considered neither free nor fair.
Voting infrastructure targeted
Voting machines may be another key target for state-sponsored hackers. Compromising, or appearing to compromise, the security of voting machines during the U.S. election would add fuel to the fire that has been smoldering since former President Trump made allegations of voting fraud following his 2020 defeat. Where evidence was lacking last time, actual evidence of attacks this time could be seized on to lend those claims retroactive credibility and cast renewed doubt on the 2020 result.
The Cybersecurity and Infrastructure Security Agency (CISA) has been preparing for such attacks. Its #protect2024 website offers a large amount of protective security guidance to help state and local election officials improve security hygiene, harden their systems, and plan for incident response. The Elections Infrastructure Information Sharing and Analysis Center (EI-ISAC) should prioritize communications and intelligence sharing among election officials in the U.S., and other countries will likely have similar groups.
The work done by ethical hackers via the Election Security Research Forum and MITRE to examine election technology manufacturers’ hardware and software for vulnerabilities is of particular value. Fully vetted cybersecurity researchers and officials have worked together to identify and fix problems, heading off the potential for exploitation later in the year.
Distributed denial-of-service (DDoS) attacks have been used in attempts to disrupt voting infrastructure, including attacks that caused temporary website outages during the 2022 U.S. midterm elections. However, the impact of such attacks is limited and unlikely to stop votes from being cast.
In the January 2024 Bangladeshi elections, an app created by the Bangladesh Election Commission to provide voters with information on candidates and historical data was targeted by unknown attackers, causing the app to run slowly. Ahead of the same election, the telecoms and media industries were also heavily targeted by DDoS attacks, which were thought to be an attempt to slow the flow of information to voters.
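One common layer of defense for public-facing election information services is application-layer rate limiting, which helps contain this kind of disruption. The sketch below is a minimal, hypothetical illustration of a per-client token-bucket limiter in Python; the limits, the handle_request helper, and the example address are assumptions made purely for illustration, not details of any system discussed above.

import time
from collections import defaultdict


class TokenBucket:
    """Allow up to `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# One bucket per client address; excess traffic is rejected early rather than
# being allowed to exhaust the application behind it. (Hypothetical limits.)
buckets = defaultdict(lambda: TokenBucket(rate=5, capacity=10))


def handle_request(client_ip: str) -> int:
    """Return an HTTP-style status: 200 if served, 429 if rate limited."""
    return 200 if buckets[client_ip].allow() else 429


if __name__ == "__main__":
    # A burst of 20 requests from one address: the first 10 fit within the
    # burst capacity, the rest are throttled until the bucket refills.
    results = [handle_request("203.0.113.7") for _ in range(20)]
    print(results.count(200), "served,", results.count(429), "throttled")

In practice a control like this would sit behind network-edge and CDN protections that absorb most volumetric traffic before it ever reaches the application, which is why such attacks tend to degrade rather than disable election services.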
Finally, we should not rule out the possibility of insiders seeking to undermine election security. Insiders could abuse their access to election systems to manipulate or destroy election data, including voter registration records. They could also attempt to steal or tamper with election infrastructure hardware, or leak information about voters publicly.
A microcosm of cybersecurity and AI issues
The issues affecting election security are a microcosm of the cybersecurity and AI issues affecting every field: the number of threats and risks that must be managed and mitigated is growing rapidly, and attackers hold the advantage because they can exploit new technology more quickly than defenders can respond.
Lessons will be learned and shared from the incidents that affect elections this year, but we, as security professionals, must stay vigilant and understand how attackers could adapt those same techniques to threaten businesses, financial markets, and critical infrastructure. We should then apply mitigations wherever possible until we can use AI to counter attacks before they cause damage.