
Berlin 2025: Social Media Tribunal

1        Introduction

The Social Media Tribunal is an initiative of The Court of the Citizens of the World (CCW), and was organized in Berlin from 16-21 March 2025. The Court is a private initiative, with no formal legal effect, whose objective is “to prosecute world leaders and regimes that commit human rights violations: Crime of aggression and Crimes against humanity.” Previous cases were tribunals on Putin and China. Obviously, social media is not a world leader as such, but the analogy between a powerful country and social media is often made. For instance, in a Dutch piece I argued that, like governments, big tech controls and influences people, and it has power and money. One point of difference is that human rights as such do not apply to them. However, besides several scholars, EU law too has recently begun to apply human rights explicitly to companies, as in Directive 2024/1760 on corporate sustainability due diligence: “The new rules will ensure that companies in scope identify and address adverse human rights and environmental impacts of their actions inside and outside Europe.”

The tribunal consisted of three judges, and there were two public prosecutors and two defense lawyers. The party being prosecuted and defended, social media, was not present. The prosecutors of the social media tribunal brought together a wide range of experts and people with relevant personal experiences. That is how I ended up there: I was asked to deliver an expert testimony on privacy and data protection, although in the end a large part of it was on moderation. The defense lawyers, who often argued that the procedure was unfair, could only cross-examine.

The exact charge social media were facing was not clear. Nor was it clear which social media were being prosecuted. Clearly, the focus was not on the positive aspects of social media, such as connecting people and sharing information, but rather on the negative consequences, particularly how our social interactions are increasingly being exploited by algorithms, leading to harm. The latter is actually also connected to something people like about the internet: free services.

2        The Harm Caused by Social Media

Various testimonies highlighted the damaging effects of social media. For instance, on TikTok, the algorithm quickly starts pushing harmful content to new users: within just two minutes of creating a new account as a 13-year-old girl, the platform begins serving content related to self-harm. Similarly, an account set up as a 13-year-old boy is shown content featuring Andrew Tate within two minutes. Moreover, if the girl’s profile states that she is “Susan” and believes she is fat, the account receives twelve times more self-harm-related content than when the profile simply gives her name as Susan, without mentioning any body image concerns. Testimonies also concerned well-known examples such as the Cambridge Analytica scandal and the genocide of the Rohingya. In some cases, the harm results not just from the algorithms themselves, but also from human failures, such as ineffective human intervention. For instance, a witness from Sweden and a journalist from Brazil described how moderators failed them.

3        The main issues

The main issues identified in the hearings were roughly the following.

  • Lack of Moderation

Platforms are often not obligated to moderate content. Their argument that moderation means censorship is flawed, because censorship should not be confused with acting responsibly. In fact, as one of the experts testified, 97% of moderation by platforms is handled by AI even before the content is posted, which at its core is an act of censorship.

  • Failure to Respond to Notifications

Many platforms neglect or delay action even when users report clearly harmful or illegal content.

  • Failure to Adjust or Correct Algorithms

Platforms are either unwilling or unable to correct problematic algorithmic behaviour. For example, Google could not correct its autosuggest for “Women are…” and “Jews are…”. As a consequence, back in 2016 it decided to no longer apply autosuggest to women, as well as to Jews, Americans, Mexicans, Germans, etc. “Dutch are” is still autocompleted, leading to “Dutch are … unfriendly, not friendly” (and: the tallest in the world). I have presented regularly on this topic since 2017; see the tweet by Andres.

It was special to meet Carole Cadwalladr, the author of the Guardian article I have always quoted in this context.

4        Some legal problems

4.1      There is too much law

One of the aims of the Tribunal was to consider regulation. I do not deny the need for regulation, but in the European Union there seems to be an abundance of it, with i.a. the Data Governance Act, the AI Act, the GDPR, the DMA, the DSA, etc. And in the context of moderation there is also, e.g., Art. 17 of the DSM Directive (Directive (EU) 2019/790), Regulation (EU) 2021/784 on addressing the dissemination of terrorist content online, and Regulation (EU) 2024/900 on the transparency and targeting of political advertising.

It is impossible to understand all these rules, and their interconnections. I doubt whether yet more rules will help. It is somehow related to a topic I have written about regularly: informing consumers online. What is needed there is to think about what information is really crucial for consumers and restrict disclosure to that, instead of overloading them. The same mechanism might work for the present regulatory landscape. We should take a deep breath, think about the core issues (in terms of social media, for instance, the business model), focus on those and leave out less relevant aspects.

4.2      Existing law does not always work

4.2.1    GDPR

Back in 2020 I conducted two studies on data brokers and apps for the European Data Protection Board (EDPB). Not surprisingly, these studies clearly demonstrated widespread non-compliance. However, even if companies genuinely wanted to comply, in today’s data ecosystem this is nearly impossible. Individuals generate thousands, or even tens of thousands, of data points (Lodder 2021):

“Should data controllers (and processors) mention all data that is being processed, indicate for all data the purpose for which these data are processed, the legal basis for the processing, the period for which the data are retained, the parties that receive the data? And if so, how to describe this information referred to in Articles 13 and 14 GDPR ‘in a concise, transparent, intelligible and easily accessible form, using clear and plain language’?”

The present-day data protection rules date back to the 1970s. The normative core of the GDPR, Article 5, states six principles from that time. Back then, databases typically included just a name, address, and phone number: information that was relatively static and easy to manage. Against that background, data protection law could work. But today, every click, scroll, and interaction is tracked. The legal framework has not evolved to match this complexity. One could also argue that it is not the law, but the abundant data processing practices, that should stop or be limited.

4.2.2    DSA

In 2021, Professor Zeleznikow and I proposed at a conference in Porto that social media platforms should be required to provide explicit reasons when they take down content or ban users, as in high-profile cases like Donald Trump’s account suspension. Shortly afterward, the Digital Services Act (DSA) introduced the requirement for a “statement of reasons” whenever content or accounts are removed. Additionally, the DSA established the Transparency Database (TDB), a publicly accessible system meant to log these takedown actions.

The TDB receives approximately 50 million entries per day. The volume, however, is not the primary issue; it is the quality of the information. This follows from work by Eline Leijten, who during one week in February 2025 analyzed data from eight social media platforms that qualify as Very Large Online Platforms (VLOPs), sampling 1,000 Statements of Reasons per day. Her work highlights a core problem: the lack of meaningful, granular data. As Leijten argues, recital 66 of the DSA states that the legislative rationale behind the TDB is “to ensure transparency and to enable scrutiny over the content moderation decisions of the providers of online platforms and monitoring the spread of illegal content online”, but her empirical analysis found that the TDB often fails to deliver on this promise.
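To make concrete what such an empirical check could look like, below is a minimal Python sketch, not Leijten’s actual methodology, that samples statements of reasons from a local CSV export of the TDB and estimates how many contain a substantive free-text explanation. The file name and the column names (decision_facts, platform_name, automated_decision) are assumptions for illustration; the real TDB dumps may use different names.

# Illustrative sketch only (not Leijten's methodology): estimate how informative
# TDB statements of reasons are, based on a local CSV export.
# The file name and column names are assumptions; real dumps may differ.
import pandas as pd

def sample_and_score(csv_path: str, n: int = 1000, seed: int = 42) -> pd.DataFrame:
    df = pd.read_csv(csv_path)
    sample = df.sample(n=min(n, len(df)), random_state=seed)

    # Treat a statement as "meaningful" if its free-text explanation is present
    # and longer than a trivial boilerplate sentence.
    explanation = sample["decision_facts"].fillna("")        # assumed column
    meaningful = explanation.str.len() > 100

    return pd.DataFrame({
        "platform": sample["platform_name"],                 # assumed column
        "automated": sample["automated_decision"],           # assumed column
        "meaningful_explanation": meaningful,
    })

if __name__ == "__main__":
    scored = sample_and_score("sor_dump_2025-02-10.csv")     # hypothetical file
    print(scored.groupby("platform")["meaningful_explanation"].mean())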

5        Way forward: towards solutions

In an ideal, perfect world we would not need law, but if things are not going the way we want, regulation can be used to correct this. In doing so we should keep in the back of our minds that companies have a freedom of enterprise, which is also a fundamental right. For me, three lines of action stand out:

  • Modify the business model. Lawrence Lessig has indicated that the DSA has the wrong focus: it should have focused on the business model of social media. He believes it is too late for change. But maybe it is not.
  • Meaningful Statement of Reasons: whenever content or accounts are taken down, social media should state specific and meaningful reasons. They must possess this information, because they take the content down for a reason. This information should be communicated to the user and entered into the TDB (see the sketch after this list).
  • Require adequate complaint procedures: for VLOPs, if content or an account is taken down and a complaint is filed, social media should act promptly, not within 24 hours, but within at most an hour (as with terrorist content). In the end most cases will be trivial, either in favor of or against the account concerned.
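To illustrate what “specific and meaningful” could mean in practice, here is a minimal Python sketch of a statement-of-reasons record. The field names are my own and only loosely follow the elements Article 17 DSA requires; this is an illustration, not an official or platform-specific schema.

# Illustrative sketch of what a granular statement of reasons could record.
# Field names are my own; they loosely follow the elements listed in Article 17
# DSA, but this is not an official or platform-specific schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class StatementOfReasons:
    platform: str
    decision_type: str            # e.g. "removal", "visibility restriction", "account suspension"
    content_reference: str        # pointer to the specific post or account concerned
    facts_and_circumstances: str  # concrete description of what triggered the decision
    automated_detection: bool     # was the content flagged automatically?
    automated_decision: bool      # was the decision itself taken automatically?
    legal_ground: str | None      # the law relied on, if the content is deemed illegal
    terms_ground: str | None      # the terms-of-service clause relied on otherwise
    redress_options: list[str]    # internal complaint, out-of-court settlement, court
    issued_at: datetime

# Hypothetical example record
sor = StatementOfReasons(
    platform="ExamplePlatform",
    decision_type="removal",
    content_reference="post:123456789",
    facts_and_circumstances="Video flagged for self-harm instructions at 00:42.",
    automated_detection=True,
    automated_decision=False,
    legal_ground=None,
    terms_ground="Community Guidelines, section on self-harm",
    redress_options=["internal complaint", "out-of-court dispute settlement", "court"],
    issued_at=datetime(2025, 3, 21, 10, 0),
)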

Then some other points for further consideration:

1) What about Addiction?
Prohibition of social media use is often addressed in the context of protecting children: Australia, for instance, has banned social media access for minors. But what about adults? Should social media perhaps be prohibited altogether? Should we ban algorithmic profiling? It is a difficult question, because while algorithms can certainly cause harm, such as through addictive design or manipulative content, they also bring real benefits. This complexity becomes clear in areas like mass claims for data protection violations. In the Netherlands, for example, class actions operate on an opt-out basis, meaning everyone is represented. Yet not everyone is harmed; some people may even benefit from the data-driven services.

2) Enforcement
Enforcement mechanisms remain a major weak point. Social media platforms that fail to act on user notifications should be held accountable. A possible solution could mirror the DMCA system in the US: content is taken down immediately, but at the financial responsibility of the person or entity making the takedown request. This model incentivizes responsible reporting while ensuring swift action. It also creates a clearer process for recourse and accountability.

3) Data Trust?
The current “notice and choice” model, which relies on individuals reading and understanding lengthy privacy policies, is clearly inadequate. Neil Richards and Woody Hartzog propose a model of data trust, and argue that to build healthy information relationships, companies should regain our trust. Users need assurance that companies do not place their own profits above user well-being, and that they do not exploit or betray the trust placed in them.

4) With Power Comes Responsibility
Social media platforms and tech companies wield extraordinary power. With that power must come responsibility, backed by transparency and accountability. Platforms must not only be clear about how their systems operate, but also be accountable for the outcomes, intended or not, of their designs and decisions.

6        The verdict: social media platforms complicit in deaths of users and children, interference in elections and facilitating the incitement of genocide

The verdict is published and contains many observations of violations of human rights and includes 17 recommendations.

The Tribunal has set a one-year deadline for social media platforms and governments to implement sweeping reforms. If significant changes are not made by March 21, 2026, the Court will reconvene to explore criminal liability for these platforms and regulatory inaction.

It is highly unlikely that the suggested changes will be made, so I expect the Court to reconvene next year.

7        References relevant for the testimony

A selection; for an overview see Arno R. Lodder – Vrije Universiteit Amsterdam.

7.1      On Regulation and Algorithms

Lodder, A. R., & Cooper, Z. (2023). Do algorithms need to be regulated, and if so, what algorithms? In A. Savin, & J. Trzaskowski (Eds.), Research Handbook on EU Internet Law (2nd ed., pp. 80-93). (Research Handbooks in European Law series). Edward Elgar Publishing

Cooper, Z., & Lodder, A. R. (2023). What’s law got to do with IT: an analysis of techno-regulatory incoherence. In B. Brożek, O. Kanevskaia, & P. Pałka (Eds.), Research Handbook on Law and Technology (pp. 45-58). Edward Elgar Publishing

7.2      Privacy and Data Protection

Loui, R. P., Lodder, A. R., & Quick, S. A. (2020). Algorithmic Stages in Privacy of Data Analytics: Process and Probabilities. In W. Barfield (Ed.), The Cambridge Handbook of the Law of Algorithms (pp. 654-664). Cambridge University Press

de Hingh, A., & Lodder, A. R. (2020). The role of human dignity in processing (health) data building on the organ trade prohibition. In T. Synodinou, P. Jougleux, C. Markou, & T. Prastitou (Eds.), EU Internet Law in the Digital Era: Regulation and Enforcement (pp. 261-275). Springer

Lodder, A. R., & Loui, R. P. (2018). Data Algorithms and Privacy in Surveillance: On Stages, Numbers and the Human Factor. In W. Barfield, & U. Pagello (Eds.), Research handbook on the law of artificial intelligence (pp. 275-284). Edward Elgar

Over the last ten years I have successfully supervised 10 PhDs, half of them on privacy and/or data protection, viz.:

12. 2024 – Jeffrey Bholasing (& Wisman), Privacy in the era of data, defense VU 4 October

11. 2024 – Maud van Erp (& Van Schoonhoven), Een privacyrechtelijk perspectief op de (zieke) MBO-Student, defense VU 8 April

9. 2021 – Peter Olsthoorn (& Van der Linden/Zwenne), Baas over eigen data. Zelfbeschikking in bescherming van persoonsgegevens, defense VU 23 September

7. 2019 – Tijmen Wisman (& Murray/Van der Linden), The Quest for the Effective Protection of the Right to Privacy: On the Policy and Rulemaking on Mandatory Internet of Things systems in the European Union, defense VU 23 January

3. 2016 – Rob van den Hoven van Genderen (& Oskamp), Privacy Limitation Clauses: Trojan Horses under the Disguise of Democracy. On the Reduction of Privacy by National Authorities in Cases of National Security and Justice Matters, defense VU 10 June

7.3      On ISP liability

Lodder, A. R. (2022, 2017, 2002). Chapter 4 – Directive 2000/31/EC on certain legal aspects of information society services. In A. R. Lodder & H. W. K. Kaspersen (Eds.), eDirectives: Guide to European Union Law on E-Commerce. Commentary on the directives on distance selling, electronic signatures, electronic commerce, copyright in the information society and data protection, Kluwer Law International. See also updates 2017 and 2022 in A. R. Lodder & A. D. Murray, EU Regulation of E-Commerce: A Commentary, Edward Elgar.

Lodder, A. R., & Wisman, T. (2022). Computer says no to my upload? Article 17 on filtering and the GDPR prohibition of automated decision-making. In M. Borghi, & R. Brownsword (Eds.), Law, Regulation and Governance in the Information Society: Informational Rights and Informational Wrongs (pp. 87-101). Routledge

Lodder, A. R., & Polter, P. (2017). ISP blocking and filtering: on the shallow justification in case law regarding effectiveness of measures. European Journal of Law and Technology, 8(2), 1-20

Lodder, A. R., & Sandvliet, K. (2015). Measures of Blocking, Filtering and Take-Down of Illegal Internet Content: The Netherlands – Report for the Council of Europe (November 16, 2015). In Legal Opinion on Blocking, Filtering and Take-Down of Illegal Internet Content, Lausanne.

Lodder, A. R., & van der Meulen, N. S. (2013). Evaluation of the role of access providers. Discussion of Dutch Pirate Bay case law and introducing principles on directness, effectiveness, costs, relevance, and time. Journal of Intellectual Property, Information Technology and E-Commerce Law, 4(2), 130-141
