Improving (anti-)social media behaviour

Tom Wicker

Legislators and regulators have intensified their efforts to drive social media platforms towards ‘good behaviour’ in response to concerns about the harms they cause, as Global Insight reports.

Meta, owner of social media platform Instagram, announced in September that it would be rolling out new ‘teen accounts’ for under-18s. These accounts will enable parents to set daily time limits for use and to see who their children are messaging and what they’re searching for. While Instagram already has stricter settings for teenage users, teenagers will now need parental permission to change these. Nick Clegg, Meta’s President of Global Affairs, said the aim was to ‘shift the balance in favour of parents’.

Meta’s announcement comes at a time of growing concern about the potentially harmful consequences of social media usage – and whether the powerful owners of popular platforms such as Instagram, Facebook, Snapchat, TikTok and WhatsApp are doing enough to combat these risks.

Social media platforms are increasingly facing lawsuits alleging that their users are being harmed. In early 2024, New York City filed a lawsuit against Meta and the companies behind Snapchat, TikTok and YouTube, alleging that their platforms were endangering children’s mental health, promoting addiction and encouraging unsafe behaviour. The lawsuit followed litigation by dozens of US states in autumn 2023, which together allege that Meta – which also owns Facebook – has designed addictive products contributing to a youth mental health crisis.

While Meta, TikTok owner ByteDance and Google – which owns YouTube – didn’t respond to Global Insight’s request for comment, they have argued that the allegations are untrue and emphasised their safeguards against harm. Meta, for instance, has stated its commitment to giving ‘safe, positive experiences online’ and has launched tools to support families, including the aforementioned ‘teen accounts’.

TikTok’s website emphasises how youth safety is its ‘priority’ and how it doesn’t ‘allow content that may put young people at risk of exploitation, or psychological, physical, or developmental harm’. YouTube has outlined ways in which it protects child mental health, such as crisis resource panels, while in September it introduced ‘a new supervised experience […] that will give parents and teens the option to link their accounts and get shared insights and notifications’. Snapchat referred Global Insight to its various safety protocols, including reviews of features and protections for under-18s.

From algorithms to ex ante

We ‘live in an algorithmic society’, says Daniela De Pasquale, Secretary-Treasurer of the IBA Technology Law Committee and a partner at Ughi e Nunziante Studio Legale in Milan. And while the algorithms that underpin the search functionality of social media platforms are, she emphasises, ‘neutral in and of themselves, just the latest technology’, what’s now at stake for the companies that own them is ‘transparency over the governance of the design of such technology’.

Commentators have compared the current spate of lawsuits to those that helped to curtail the big tobacco companies in the 1990s. But in an industry defined by constant technological innovation, there’s a question as to whether, in the time it takes a lawsuit to work its way through the judicial system, the issues at hand will have been overtaken by something else.

The answer is probably yes, says Angela Flannery, Senior Vice-Chair of the IBA Communications Law Committee and a telecoms, media and technology partner at Australian firm Quay Law Partners. She highlights the slow progress of litigation such as that brought against Facebook by the Australian Privacy Commissioner in 2020, arising from the Cambridge Analytica scandal. The Commissioner alleges that over 300,000 Australians had their data harvested in breach of privacy laws – an allegation Meta disputes. Flannery says that ‘hardly anyone even remembers what [the scandal] is now’.

A better alternative to ad hoc lawsuits, says Flannery, is ‘ex ante’ regulation – forward-looking regulatory measures which impose norms and standards on social media platforms, seeking to anticipate, or accommodate, new developments and features. Here, governments have recently intensified their efforts. Australia and the UK now have online safety legislation, which centres on new duties of care for online platforms and fines for non-compliance. There’s also the EU Digital Markets Act (DMA) and Digital Services Act (DSA), which impose obligations and prohibitions on 24 EU-identified ‘gatekeeper’ services mostly run by heavyweights Alphabet – which owns Google – Amazon, Apple, ByteDance, Meta and Microsoft.

There has been growing recognition globally, including in Africa, that transparency is key to effective regulation

Lweendo Lucy Haangala
Vice-Chair, IBA Platforms, E-commerce & Social Media Subcommittee

‘There has been growing recognition globally, including in Africa, that transparency is key to effective regulation’, says Lweendo Lucy Haangala, Vice-Chair of the IBA Platforms, E-commerce & Social Media Subcommittee and in-house counsel at international non-governmental organisation ActionAid. ‘The DSA, for instance, pushes for more transparency in how algorithms decide what content users see’, she says, while highlighting the African Union’s Digital Transformation Strategy for Africa, which seeks to harmonise regulations across the continent.

Countries such as Malawi and Nigeria are making strides towards regulating misinformation and defamation on social media platforms, says Haangala. However, she adds, ‘enforcement can be difficult without a collaborative approach to ensure regulations are both effective and balanced’.

It’s easy to be cynical about the prospect of this happening. However, De Pasquale says that ‘most nations’ thought the EU was ‘quite crazy’ when it announced the General Data Protection Regulation (GDPR), which has regulated the use of people’s personal data on an EU-wide and cross-industry basis since 2018. ‘And now, [the GDPR] model has been applied by many other jurisdictions’, she says.

Legislative examples

Verónica Volman, a lawyer at Argentinian firm RCTZZ, has written about how the DSA should be seen as a template for the regulation of social media platforms by other jurisdictions. Core to its appeal is that it ‘doesn’t define what “illicit content” is’, she says. It leaves that to each EU Member State, while including broad obligations ‘for the largest online platforms – not to target advertising to minors, for example’.

This is key, says Volman, because frameworks such as the DSA shouldn’t intrude upon regulations enacted by other competent authorities by defining such concepts, given the importance of protecting freedom of speech according to each state’s standards. The DSA also doesn’t oblige the largest platforms to constantly monitor everything, but rather to ‘take action when they know about content like hate speech’.

Both the EU and the UK are ‘taking a view on the idea of what you might call “conditional immunity”’, says Julian Hamblin, Senior Vice-Chair of the IBA Technology Law Committee and a partner at UK-based law firm Trethowans. Via these new regulatory frameworks, ‘they’re seeking to drive good behaviour forwards’, he explains. ‘They’re saying to these companies, “you have to self-regulate, but we’re going to set out the parameters in which you ought to do so”.’

The UK and EU are saying to these companies, ‘you have to self-regulate, but we’re going to set out the parameters in which you ought to do so’

Julian Hamblin
Senior Vice-Chair, IBA Technology Law Committee

Hamblin also sees the UK’s introduction of its Online Safety Act in 2023 as the action of a government and country ‘which, historically, has always been seen as a bastion of free speech’, and notes that ‘it perhaps can’t continue to take such a libertarian approach’ to the content widely disseminated by the social media platforms. Their combination of influence and ubiquity presents, he says, a vastly different landscape to the ‘plurality of media and editors’ of traditional outlets.

‘Can you continue a line of minimal interference because the right of freedom of speech is sacrosanct?’ asks Hamblin rhetorically. In this regard, the UK’s Online Safety Act has sought to strike a delicate balance. It doesn’t stipulate what measures social media platforms must put in place. Instead, the Act’s regulator, Ofcom, will give guidance and issue codes of practice that set out clear recommendations and identify proportionate steps to manage the risks.

But the question of whether this is enough was brought into stark relief when the Mayor of London, Sadiq Khan, urged the government to revisit the Act, which has a phased entry into force, describing it as ‘not fit for purpose’. He was speaking out after misinformation about the suspect involved in killings in the English town of Southport spread across social media, significantly contributing to disorder in the UK over the summer. ‘The way the algorithms work’, he said, is ‘a cause to be concerned, we’ve seen a direct consequence of this’.

A spokesperson for Ofcom didn’t respond directly to the Mayor’s comments when contacted by Global Insight, but said that any changes to the legislation would be a matter for government and Parliament. ‘Before we can enforce providers’ safety duties under the Online Safety Act, we’re legally required to consult publicly on codes of practice and guidance’, the spokesperson said. ‘This allows [us] to take evidence from expert parties and ensure our guidance stands up to a legal challenge.’

How to protect against online harms

Over the past few years, various jurisdictions have either introduced regulations specifically addressing online harms or have pursued the owners of the largest platforms via other legislation or through their court systems. These notable examples are from the past 12 months.

Brazil: In August, Brazil’s Supreme Court suspended X countrywide after the platform’s refusal to remove accounts alleged to have spread disinformation about the 2022 Brazilian presidential election. X was also required to appoint a legal representative in Brazil. The platform has since complied with the Court’s orders and was allowed to resume service in Brazil – a key market for X – in October.

The EU: In February, the Digital Services Act (DSA) was implemented in the bloc. A pan-European regulatory framework, its compliance requirements for social media companies are particularly stringent for ‘very large online platforms’ – those with over 45 million users per month, such as Facebook, TikTok and X. Its goal – to be enforced by a Digital Services Coordinator per EU Member State – is to prevent illegal and harmful online activities and the spread of disinformation. Some Member States have introduced additional measures. In October, Ireland published its finalised Online Safety Code, which consists of legally binding rules for video-sharing platforms to follow in order to reduce harms for users.

France: Telegram founder Pavel Durov has been charged with allowing criminal activity on the app. Durov denies wrongdoing, while Telegram states that its content moderation ‘is within industry standards and constantly improving’. In total, 31 countries have either permanently or temporarily banned the Telegram platform since 2015 – affecting over three billion people globally – according to the cybersecurity service Surfshark and the digital rights watchdog NetBlocks.

The UK: The UK’s Online Safety Act was passed in autumn 2023, with online safety regulator Ofcom expecting the first duties to come into force around the end of 2024. The Act places a range of new duties on social media companies, particularly in terms of preventing children from accessing harmful content. From December, Ofcom will publish its first ‘illegal harms’ code. Companies will have three months to comply before facing fines. This follows a year in which tech platforms were used to coordinate anti-immigrant riots in the UK.

The US: In April, President Joe Biden signed a bill requiring Chinese-headquartered company ByteDance to sell TikTok within a year or face the platform being banned in the US. The bill was passed amid security and data concerns, specifically about potential access by the Chinese government to the information of TikTok users. TikTok denies it passes foreign user data to the Chinese government, and the platform is now attempting to block the law in court.

‘Parliament was clear that we should prioritise our codes on illegal harms and children’s safety, setting a deadline of April 2025’, added the spokesperson. ‘This is a challenging timescale, but we remain firmly on track. We expect the first set of duties – regarding illegal content – will start to come into force from around the end of this year.’

Flannery finds the issues raised by the Mayor of London’s intervention interesting. Australia’s Online Safety Act, which predates the UK’s, ‘is a blueprint for [the UK’s] – or we like to think it is’, she says. Echoing one of Ofcom’s priorities, she notes that ‘ours really grew out of protecting children’. This changed, she says, after the live streaming of the mosque killings in Christchurch, New Zealand, in 2019. ‘It moved into looking at terrorism and the concept of violent, abhorrent behaviour’, Flannery explains.

But she believes that requiring a single online safety act to address an ever-multiplying range of issues risks muddying the clear waters of its origins in addressing ‘the heinousness of child pornography. We need to completely divorce child abuse from any discussion of “free speech”’, she says. In contrast, the views shared online during the UK riots, however misinformed, ‘did have something to do with free speech’, says Flannery. How to define ‘political discussion’ needs a ‘different level of nuance’, she believes – one size doesn’t fit all.

The pursuit of ‘good behaviour’

In the EU, the European Commission has surprised many pundits by how aggressively it has deployed the regulatory compliance requirements of the DMA and DSA to investigate the practices of its specified ‘gatekeeper’ services and – accompanied by the threat of heavy financial penalties – demand changes. In contrast, in South America – while Brazil is following the EU’s lead – Argentina’s government is attempting to stimulate economic growth by deregulating all markets, including the technology sector – even creating a Ministry of Deregulation and Transformation of the State.

From an international and geopolitical perspective, the way in which regulatory scrutiny and enforcement of Big Tech differs on a country-by-country basis poses a challenge to achieving the kind of globally united front that some commentators believe is the only way to generate genuine change when it comes to online safety. In the US, for example, section 230 of the Communications Decency Act codifies a broad principle that online services aren’t considered the speakers or publishers of material posted by their users.

...

Nevertheless, Hamblin points to the Global Online Safety Regulators Network, which includes agencies such as Ofcom, Australia’s eSafety Commissioner, ARCOM in France and the Korea Communications Standards Commission. Its stated aim is to ‘enhance coherence and consistency’ in online safety regulation. Speaking for the Network, Ofcom’s spokesperson confirmed to Global Insight the absence of the US, ‘as there is currently no online safety regulator’. However, Ofcom ‘does engage bilaterally with US regulators on a range of matters’.

What we are ‘talking about are moderate legislative frameworks for control’, says Hamblin of the aims of the Network’s member regulators. ‘That group will try to create norms and standards internationally’ for the largest social media platforms, he says. ‘And if those constructs are reasonable and met with approval, not just in your country but in others, then you start to create an expectation of good behaviour. I think we’re starting to see that evolve.’

At the same time, increasing numbers of regulators and authorities are toughening their pursuit of this ‘good behaviour’. In August, Brazil’s Supreme Court suspended X – formerly Twitter – countrywide, after the platform’s refusal to remove accounts alleged to have spread disinformation and to appoint a legal representative in the country, as Brazilian law requires. X has since complied with the Court’s orders and paid a fine. It resumed service in Brazil in October.

In August, Telegram founder Pavel Durov – who holds French citizenship – was arrested by the French authorities and charged on multiple counts for allegedly allowing criminal activity on the popular messaging app. Telegram issued a post shortly afterwards, denying that Durov had committed wrongdoing and stating that ‘it is absurd to claim that a platform or its owner are responsible for abuse of that platform’ and that the app’s ‘moderation is within industry standards and constantly improving’. Later, in September, Durov said that Telegram would, among other things, now be more proactive in responding to requests by authorities.

A recent battle between X Corp and Australia’s eSafety Commissioner followed the latter’s request that the platform globally remove a video of an attack on a Sydney bishop because it depicted ‘gratuitous or offensive violence’. X geo-blocked the footage so that Australian users couldn’t access it, but refused to remove it elsewhere, arguing that doing so would set a precedent of global censorship. An Australian federal court ruled in June that the Commissioner’s demand went beyond what was ‘reasonable’ and was contrary to the principle of ‘comity of nations’.

However, Flannery doesn’t believe this story has necessarily run its course. ‘It’s interesting that, around the same time the Court closed down the Commissioner, the Australian government announced a review of the Online Safety Act’, she says. ‘It’s possible they’ll say: “We should have global blocking”, because the way the Act is drafted now, it’s only blocking content for Australians.’ But she adds that, given the Court’s ruling that ‘you can’t interfere with the laws of other jurisdictions’, the authorities are presented with ‘a conundrum’.

The Commissioner’s scrutiny of Big Tech looks set to continue. In July, the Commissioner issued legal notices to a raft of tech companies and platforms under Australia’s Online Safety Act, requiring them to report to the regulator every six months on the measures they have in place to tackle online child sexual abuse, including deepfake material created using generative artificial intelligence (AI). This is the first time such notices have required tech companies to report periodically – in this case, for the next two years – with the eSafety Commissioner regularly summarising its findings.

AI changes the conversation

Looking to the future, the advent of AI – from its misuse to create believable fake media to the scope of its role in content moderation – appears set to transform the discussion around social media harms and protections. This new era will test the effectiveness of regulatory ‘futureproofing’ – the interpretive room that frameworks leave for technological development, which commentators see as key.

‘You’ve got some segue into that, in the EU, via the AI Act’, says Hamblin. ‘We’re a bit behind the curve in the UK.’ However, he says, the UK’s Ofcom has ‘expressly acknowledged that AI is going to change the picture faster than legislation can be drafted, and that it’ll need to have flexibility within secondary legislation or in its regulatory powers to deal with it’. A spokesperson for Ofcom confirmed to Global Insight that the Global Online Safety Regulators Network has ‘held conversations on AI and its implications on online safety’.

In this regard, Flannery believes the EU’s DSA is still too prescriptive. She compares it with Australia’s Online Safety Act and its ‘Basic Online Safety Expectations’. She says that ‘if one of the things a platform must do is protect children from content that isn’t age appropriate, that principle applies whether or not it’s considered “real” – in terms of being footage of terrorist or violent activity – or if it’s generated by AI’.

If one of the things a platform must do is protect children from content that isn’t age appropriate, that principle applies whether or not it’s considered ‘real’ or generated by AI

Angela Flannery
Senior Vice-Chair, IBA Communications Law Committee

However, she also sees AI’s positive potential. ‘One of AI’s great uses could be to properly deal with child pornography on online platforms’, she says. ‘You wouldn’t have to worry anymore about that, because you could just regulate it in a way that isn’t a “whack-a-mole” situation with every new development.’ She says that this could free up regulators to ‘deal with other issues, like political or free speech or the problem of dis- and misinformation’ in online posts.

Haangala says that the increasing reliance by the biggest players in the technology industry on using AI for content moderation – to decide not only what gets taken down but what’s posted in the first place – highlights ‘the need for collaboration between Big Tech and regulators to address algorithmic biases and transparency issues’. To tackle ethical concerns, ‘regulators need to work closely with tech companies to create fair and transparent systems’.

For Flannery, a fundamental question is: ‘are social media platforms actually fit for purpose?’ She wonders if we need to better interrogate the customary defence by Big Tech that they can’t control everything that users post on their platforms. ‘Should we be taking a step back and saying “well, if that’s the case, does that mean the platforms need a redesign?”’ In the automobile sector, for example, she says the manufacturer of a car that was impossible to lock would never say, ‘oh, that’s just how it’s designed. We can’t do anything about it’.

Creating long-lasting protections against social media harm may require a fundamental rethink on many fronts. For Haangala, this includes looking at the current paradigm of the regulatory–Big Tech relationship. ‘Are we upskilling policymakers enough to even understand the issues they’re attempting to tackle?’ she asks. She would like to see regulators work holistically with Big Tech. ‘If you don’t learn, you don’t evolve – no matter how big the penalty stick.’

In an ideal world, Haangala says, better trained and educated policymakers would sit in a lab with the developers of the next innovations. She would like to see governments ‘put the resources in place whereby those people are learning about the technology at source’. ‘Because once you understand something, you may not be able to predict what comes next, per se, but you are able to see what its evolution trail could look like.’

Ultimately, mitigating online harms will require forward-looking, flexible and continued resolve from regulators and states to develop legislation aimed at more collaborative oversight of the owners of the digital platforms that shape our daily lives in ever-evolving ways.

Tom Wicker is a freelance journalist and can be contacted at tomw@tomwicker.org.
