Deepfakes – can the AI Act protect Europe?
Thursday 11 December 2025
Merlin Seeman
Hedman, Tallinn
CONFERENCE REPORT
IBA Annual Conference Toronto 2025, Wednesday 5 November

Merlin Seeman, Senior Vice-Chair of the IBA European Regional Forum, Natália Fritzen, Head of AI Compliance at Sumsub and Tony Gatterman, Head of Commercial Legal at Synthesia, at the 2025 IBA Annual Conference in Toronto, 5 November. Courtesy of Merlin Seeman.
At the IBA Annual Conference in Toronto, experts in AI policy, synthetic media and digital forensics examined a straightforward but uncomfortable question: ‘Deepfakes – can the AI Act protect Europe?’ The short answer is: not yet, and not on its own.
The discussion opened with the meaning of ‘deepfake’ under European Union law. The AI Act adopts a deliberately broad definition, covering AI-generated or manipulated audio, video and images. Some panellists argued that this breadth is necessary, as the technology is moving too rapidly for narrow technical classifications. Others warned that the definition still misses a growing threat surface: AI-assisted falsification of documents, from emails and invoices to PDFs used as evidence. As one expert noted, focusing solely on multimedia risks leaves a large portion of AI-generated deception outside the regulatory framework.

Seeman, Fritzen, Gatterman and Tristan Jenkinson, Head of Forensics and Investigations at Sky Discovery, speaking at the 2025 IBA Annual Conference in Toronto, 5 November. Courtesy of Merlin Seeman.
The EU’s transparency-first approach
The AI Act treats deepfakes through the lens of fundamental rights. Its main regulatory tools are transparency obligations. Providers must ensure their systems can disclose that outputs are AI-generated, and users must label synthetic content when they share it. But the panel stressed several weaknesses. Watermarking is fragile and easily stripped. Bad actors will never self-identify their synthetic content. And the Act offers no remedies for victims, no penalties for malicious use, and only broad exceptions for satire and creative works, raising uncertainty about how the rules will apply in practice.
Member states are moving more quickly
Because of these gaps, some EU countries are already experimenting with stronger measures. Italy has introduced criminal liability for distributing non-consensual deepfakes. Denmark is considering a likeness-based copyright model, giving individuals enforceable rights over deepfake misuse of their image. These approaches complement, rather than contradict, the AI Act, but they highlight how much regulatory work remains at the national level.
Beyond the EU: fragmentation and control
Outside the EU, regulation is uneven. In the US, a patchwork of bipartisan federal bills and state laws targets different aspects of deepfakes (likeness rights, fraud, intimate and election-related content), but there is no unified regime. China takes a more centralised, application-specific approach: its ‘deep synthesis’ rules require consent and identity verification for deepfakes of real people, mandate watermarking and ban content deemed harmful to national or social interests.
The real-world risk: criminal creativity
Across jurisdictions, criminals are adopting synthetic media more rapidly than regulators can respond to it. Examples span the full range:
- high-value impersonation – CEO fraud using AI-generated voices or video calls where multiple participants were fabricated;
- everyday deception – manipulated photos in Airbnb disputes, forged defects in marketplace transactions, AI-generated invoices and PDFs submitted as evidence;
- psychological exploitation – voice-based scams preying on vulnerable individuals, enabled by models that need only seconds of recorded speech.
Watermarking and takedown tools, whether through the EU Digital Services Act or the US Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act or ‘Take It Down Act’, provide partial support, but remain reactive and depend on victims reporting harm. Meanwhile, open-source tools continue to make watermark removal trivial.
Could an EU-wide AI Act provide protection?
The panel’s consensus was measured. The AI Act is a meaningful foundation, particularly for setting transparency norms and shaping responsible industry behaviour. But on deepfakes, it is only a beginning. Effective protection will depend on:
- detailed secondary legislation;
- complementary national rules;
- strong corporate governance from AI providers; and
- lawyers’ ability to recognise the warning signs of synthetic evidence.
As one expert put it: ‘There is no universal remedy. Regulation will always lag behind technology, and deepfakes are evolving at a speed laws were not built for.’
The EU is not unprotected, but unless regulation, enforcement and legal practice evolve together, protection may remain more illusion than reality.

Gatterman, Fritzen, Seeman and Jenkinson, at the 2025 IBA Annual Conference in Toronto, 5 November. Courtesy of Merlin Seeman.