Technology: ‘ex ante regulation’ and litigation seek to answer concerns over tech harms
Tom Wicker
Thursday 16 May 2024
Social media platforms are increasingly facing lawsuits alleging that their users are being harmed. Earlier in 2024, New York City filed a lawsuit against Meta and the companies behind Snapchat, TikTok and YouTube, alleging that their platforms were endangering children’s mental health, promoting addiction and encouraging unsafe behaviour. The lawsuit follows litigation brought in October 2023 by more than 40 US states, which together allege that Facebook owner Meta has designed addictive products contributing to a youth mental health crisis.
While Meta, TikTok owner ByteDance and Google – which owns YouTube – didn’t respond to Global Insight’s request for comment, they have publicly argued that the allegations are untrue and emphasised their safeguards against harm. Meta, for instance, has stated its commitment to providing ‘safe, positive experiences online’ and has highlighted its tools to support families. TikTok’s website emphasises how youth safety is its ‘priority’ and how it doesn’t ‘allow content that may put young people at risk of exploitation, or psychological, physical, or developmental harm.’ YouTube has outlined ways in which it protects child mental health, such as crisis resource panels. Snapchat specifically referred Global Insight to its various safety protocols, which include detailed reviews of features and protections for under-18s.
Another example of the litigious trend is the 2023 case AM v Omegle.com, in which the plaintiff’s lawyers brought a product liability suit against chat website Omegle. They alleged that the site’s design was defective because the type of user-to-user harm their client suffered – online grooming – was foreseeable. AM v Omegle.com settled out of court, but it came close to trial and the site’s founder subsequently shut Omegle down. He said that while the platform ‘punched above its weight in content moderation’, the ‘stress and expense’ of fighting misuse were a significant factor in his decision to cease operations.
Commentators have compared these lawsuits to those that helped to curtail the big tobacco companies in the 1990s. But in an industry defined by constant technological innovation, there’s a question as to whether, by the time a lawsuit has worked its way through the stages of a judicial system, the issues at stake will have been overtaken by something else.
The answer is probably yes, says Angela Flannery, Vice-Chair of the IBA Communications Law Committee and a telecoms, media and technology partner at Australian firm Quay Law Partners. She highlights the slow progress in litigation such as that brought against Facebook by the Australian Privacy Commissioner in 2020, arising from the Cambridge Analytica scandal. The Commissioner alleges that over 300,000 Australians had their data harvested in breach of privacy laws, which Meta disputes. Flannery says that ‘hardly anyone even remembers what [the scandal] is now.’
A better alternative to ad hoc lawsuits, she says, is ‘ex ante’ regulation – forward-looking regulatory measures that impose norms and standards on social media platforms, seeking to anticipate – or accommodate – new developments and features. Here, governments have recently intensified their efforts. For example, Australia and the UK now have online safety legislation, which centres on new duties of care for online platforms and fines for non-compliance. There’s also the EU Digital Markets Act (DMA) and Digital Services Act (DSA), which impose obligations and prohibitions on the 22 EU-designated ‘gatekeeper’ services run by heavyweights including Google’s owner Alphabet, Amazon, ByteDance, Meta and Microsoft.
‘The EU and the UK are taking a view on the idea of what you might call “conditional immunity”’, says Julian Hamblin, Senior Vice-Chair of the IBA Technology Law Committee and a partner at UK firm Trethowans. Via these new regulatory frameworks, ‘they’re seeking to drive good behaviour forwards’, he explains. ‘They’re saying to these companies, “you have to self-regulate, but we’re going to set out the parameters in which you ought to do so.”’
The UK’s Online Safety Act doesn’t stipulate what measures platforms must put in place. Instead, Ofcom, the regulator under the Act, will issue guidance and codes of practice that set out clear recommendations and identify proportionate steps to manage the risks.
Verónica Volman – a lawyer at Argentinian firm RCTZZ – has written on how the DSA should be taken as a basis for the regulation of social media platforms in other jurisdictions. Core to the DSA’s appeal, she says, is that it ‘doesn’t define what “illicit content” is, leaving that definition to each Member State.’ This is key, she says, because frameworks like the DSA shouldn’t intrude upon regulations enacted by other competent authorities by defining such concepts, given the importance of protecting freedom of speech according to each jurisdiction’s standards.
This interpretive ‘gap’ is also vital as technology and social media platforms evolve together. A test of the effectiveness of such regulatory ‘future-proofing’ will probably be the new era of potentially harmful content being ushered in by the misuse of artificial intelligence (AI) to create seamlessly believable fake media. ‘You’ve got some segue into that, in the EU, via the AI Act’, says Hamblin. ‘We’re a bit behind the curve in the UK.’ However, the UK’s Ofcom has ‘expressly acknowledged that AI is going to change the picture faster than legislation can be drafted, and that it’ll need to have flexibility within secondary legislation or in its regulatory powers to deal with it.’
In this regard, Flannery believes the EU’s DSA is still too prescriptive in its focus. She compares it with Australia’s Online Safety Act and its ‘Basic Online Safety Expectations’. She says that ‘if one of the things a platform must do is protect children from content that isn’t age appropriate, that principle applies whether or not it’s considered “real” – in terms of being footage of terrorist or violent activity – or if it’s generated by AI.’ However, she’s fundamentally sceptical that any financial penalty will ever be enough to persuade the social media heavyweights to do more than the minimum to comply.
The question of which approach works best looks set only to intensify as regulators’ relationships with platforms ‘become much more heated’, says Flannery. She points to the pushback in April 2024 from lawyers for X – formerly Twitter – against a request by Australia’s eSafety Commissioner for the global removal of a video of a Sydney bishop being stabbed, on the grounds that the footage depicted ‘gratuitous or offensive violence with a high degree of impact or detail’. X geoblocked the video for Australian users but has argued that the request could set a precedent for censoring speech worldwide.
Creating long-lasting protections against social media harm may require a fundamental rethink of the relationship between regulators and Big Tech, says Lweendo Lucy Haangala, Vice-Chair of the IBA Platforms, E-commerce & Social Media Subcommittee and in-house counsel at international non-governmental organisation ActionAid. ‘Are we upskilling policymakers enough to even understand the issues they’re attempting to tackle?’ she asks. Haangala would like to see regulators working holistically, not confrontationally, with Big Tech. ‘If you don’t learn, you don’t evolve – no matter how big the penalty stick’, she adds.