Technology: legislators fight disinformation as major elections approach
As numerous democracies around the world prepare to hold elections in 2024, legislators in some jurisdictions are putting in place measures to tackle electoral disinformation, while courts and regulators are taking action too.
The challenge of disinformation is significant given how quickly falsehoods can spread online. The problem has become more acute as technology has advanced and disinformation has become increasingly sophisticated. Content posted on social media, for example, is often not clearly false but instead distorts the circumstances it describes. ‘It deliberately plays with possible interpretations and the perception of polarising topics’, says Marc Hilber, Member of the IBA Technology Law Committee Advisory Board and a partner at Oppenhoff, Cologne.
Such content presents significant risks to electoral processes, undermining voters’ ability to make informed decisions. It can lead to political disengagement, particularly where individuals feel unable to discern the truth. At a societal level, disinformation can erode public trust in institutions and the media, fostering scepticism about the credibility of democratic governance.
The EU’s Digital Services Act (DSA) aims to prevent illegal and harmful activities online, including the spread of disinformation. It came into effect last August for very large online platforms and search engines, and has applied to all intermediary service providers since mid-February – in time for the European Parliament elections in early June. The legislation requires online platforms and search engines with more than 45 million average monthly users that have been designated by the European Commission to take measures against disinformation and election manipulation.
The DSA subjects online service providers categorised as ‘very large’ to its most stringent obligations, requiring them to create mechanisms through which users can report content they deem dubious or unlawful. When such content is brought to a provider’s attention by national regulatory authorities or courts, the provider is required to remove it quickly and efficiently. Of particular note, the DSA also requires designated platforms to analyse the systemic risks posed by their operations – particularly to civic discourse, electoral processes and public security – while ensuring that freedom of expression is protected.
There’s a need for international cooperation between companies, as was recently the case at the Munich Security Conference
Marc Hilber
Member, IBA Technology Law Committee Advisory Board
Although the DSA doesn’t require service providers to actively monitor content, it includes a type of provision – more typically found in US legislation – known as a ‘Good Samaritan’ clause. This gives online intermediary services an incentive to carry out proactive monitoring, such as voluntary investigations into illegal content, by limiting a provider’s liability when it does so.
The European Commission has said that, since last August, the approach to electoral integrity taken by platforms within the DSA’s remit has changed. The Commission highlights that providers are responding more quickly to content flagged by local authorities and trusted partners, and that clearer escalation processes for tackling disinformation and misinformation are now in place.
Martin Husovec, an associate professor of law at the London School of Economics and Political Science, believes the DSA is sufficiently future-proofed to deal with election interference. He says the key element, however, is that national law must also define when election interference takes place and create institutions that can help platforms understand local threats and context. ‘The DSA offers robust procedural tools once such acts are illegal, and fewer tools when such election interference is not illegal,’ he says. ‘The key obligation consists of timely and consistent removal of notified illegal content or conduct, which includes addressing inauthentic behaviour and similar coordinated attempts to influence elections.’
Political deepfakes – synthetic media digitally manipulated to replace one individual’s likeness convincingly with another’s, and which might therefore be used to spread electoral disinformation – and the malicious use of artificial intelligence (AI) currently appear to pose the greatest risk to elections. ‘It is becoming harder and harder to spot deepfakes, making it more difficult to distinguish what’s real and what’s not,’ says Robyn Mohr, a partner at Loeb & Loeb in Washington, DC.
Deepfakes in particular have emerged as a major concern in the lead-up to the 2024 elections. In February, the US Federal Communications Commission (FCC) banned robocalls that use AI-generated voices. This was in response to a spate of deepfake robocalls impersonating President Joe Biden, which were being used to discourage people from voting in state presidential primaries.
Mohr commends the FCC for taking action so quickly. ‘My hope is that the FCC’s rule change serves as a signal that these types of behaviors have real consequences’, she says. ‘But, while a rule change is about as much as the FCC can do, I’m not sure it’s enough. Laws and rules are helpful, but industry also has a role to play here to help ensure their technologies are not being used in antidemocratic ways.’
Elsewhere, the courts are becoming involved. For example, the Berlin Regional Court recently issued a temporary injunction prohibiting the publication of a deepfake video in which an audio track featuring an AI-generated voice was overlaid onto a speech by German Chancellor Olaf Scholz; the voiceover appeared to announce that the German political party Alternative für Deutschland had been banned.
Sonia Cissé, a partner at Linklaters in Paris, highlights the relative ease and low cost of producing deepfakes. Their use in depicting politicians can lead to tarnished reputations and skewed public perception, she says, adding that ‘the proliferation of deepfakes could undermine the credibility of audiovisual media as a trusted source of information.’
An open letter published on 21 February and signed by AI and tech industry experts calls for further regulation of the creation of deepfakes, given the potential risks to society, including to elections. The letter recommends establishing criminal penalties for anyone who knowingly creates or facilitates the spread of harmful deepfakes, as well as legislation requiring software developers and distributors to prevent their audiovisual products from creating them.
Tobias Kollakowski, a junior partner at Oppenhoff, says laws and their enforcement alone can’t combat the spread of disinformation designed to manipulate elections, given that ‘technological change is faster than a generally lengthy legislative process.’ Hilber adds that there must be international cooperation at government level, while there’s also a ‘need for international cooperation between companies, as was recently the case at the Munich Security Conference, where leading technology companies pledged to cooperate in recognising and combatting harmful AI content.’