Overview of the accepted abstracts (with links to the actual abstract)
- Alessa, Hibah (University of Leeds) and Basu, Subhajit (University of Leeds). Technology and Procedure in Dispute Resolution: A Procedural Model of Reform for Saudi Arabia’s Commercial Courts or Top-Down Transformation?
- Aridor Hershkovitz, Rachel (Israel Democracy Institute) and Shwartz Altshuler, Tehilla (Israel Democracy Institute). Cybersecurity Regulations – A Comparative Study
- Ashok, Pratiksha (UC Louvain). A Tryst with Digital Destiny – Comparative Analysis on the Regulation of Large Platforms between the European Digital Markets Act and the Indian Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules
- Barker, Kim (Open University Law School/ObserVAW). Online Violence Against (Women) Gamers: A Contemporary Reflection on Regulatory Failures?
- Barrio, Fernando (Queen Mary University of London). Climate Change Implications of Unregulated Technological Energy-Efficiency
- Barrio, Fernando (Queen Mary University of London). Legal, Fair and Valid Assessment in Times of AI-Generated Essays
- Blakely, Megan Rae (Lancaster University). Cyberlaw of Massive Multiplayer Online Games: Copyright and Deauthorization of Dungeons & Dragons
- Brown, Abbe (University of Aberdeen). Can You Really Get Your Act Together?
- Cavaliere, Paolo (University of Edinburgh Law School) and Li, Wenlong (University of Birmingham). Examining the Legitimacy and Lawfulness of the Use of Facial Recognition Technology in Public Peaceful Assemblies: Towards a Reconceptualisation of the Right to Freedom of Assembly in the Digital Era
- Celeste, Eduardo (Dublin City University). The Digital Constitutionalism Teaching Partnership: Connecting Virtual Learning Spaces with an Interdisciplinary Toolkit
- Chomczyk Penedo, Andres (Vrije Universiteit Brussel). The Regulation of Data Spaces under the EU Data Strategy: Towards the ‘Act-ification’ of the 5th European Freedom for Data?
- Clifford, Damian (Australian National University) and Paterson, Jeannie (University of Melbourne). Banning Inaccuracy
- Cooper, Zachary (VU Amsterdam). The Utility of Incoherence: How Legislating the Present Confuses the Future
- Da Rosa Lazarotto, Bárbara (Vrije Universiteit Brussel). The Right to Data Portability: A Holistic Analysis of GDPR, DMA and the Data Act
- De Amstalden, Mariela (University of Birmingham). Future Technologies and the Law: Regulating Cell-Cultivated Foods
- De Conca, Silvia (VU Amsterdam). The Present Looks Nothing like The Jetsons: A Legal Analysis of Deceptive Design Techniques in Smart Speakers
- Degalahal, Shweta Reddy (Tilburg University). Reconsidering Data Protection Framework for Use of Publicly Available Personal Data
- Diker Vanberg, Aysem (Goldsmiths, University of London). Application of EU Competition Law to Artificial Intelligence and Chatbots: Is the Current Competition Regime Fit for Purpose?
- Dinev, Plamen (Lecturer, Goldsmiths, University of London). Consumer 3D Printing and Intellectual Property Law: Assessing the Impact of Decentralised Manufacturing
- Esposito, Maria Samantha (Politecnico di Torino). Regulatory Perspectives for Health Data Processing: Opportunities and Challenges
- Faturoti, Bukola (University of Hertfordshire) and Osikalu, Ayomide (Ayomide Osikalu & Co, Lagos, Nigeria). When Bitterness Mixes with Romance: The Weaponisation of Pornography in Africa
- Flaherty, Ruth (University of Suffolk). ChatGPT: Can a Chatbot be Creative?
- Fras, Kat (Vrije Universiteit). Article 22 of the GDPR: In Force Yet Redundant? The Relevance of Article 22 in the Context of Tax Administrations and Automated Decision-Making
- Fteiha, Bashar (University of Groningen). The Regulation of Cybersecurity of Autonomous Vehicles from a Law and Economics Perspective
- Gordon, Faith (Australian National University). Rights of Children in the Criminal Justice System in the Digital Age: Insights for Legal and Judicial Education and Training
- Griffin, James (University of Exeter). The Challenge of Quantum Computing and Copyright Law: Not What You Would Expect
- Guan, Taorui (The University of Hong Kong). Intellectual Property Legislation Holism in China
- Guillén, Andrea (Institute of Law and Technology, Faculty of Law, Autonomous University of Barcelona). Automated Decision-Making under the GDPR: Towards the Collective Dimension of Data Protection
- Gulczynska, Zuzanna (Ghent University). Processing of Personal Data by International Organizations and the Governance of Privacy in the Digital Age
- Gupta, Indranath (O.P. Jindal Global University, India) and Naithani, Paarth (O.P. Jindal Global University, India). Recent Trends in Data Protection Legislation in India: Mapping the Divergences with a Possible Way Forward
- Harbinja, Edina (Aston University). Regulatory Divergence: The Effects of UK Technology Law Reforms on Data Protection and International Data Transfers
- Harbinja, Edina (Aston University); Edwards, Lilian (Newcastle University) and McVey, Marisa (Queen’s University Belfast). Post-Mortem Privacy and Digital Legacy – A Qualitative Empirical Enquiry
- Hariharan, Jeevan (Queen Mary University of London) and Noorda, Hadassa (University of Amsterdam). Imprisoned at Work: The Impact of Employee Monitoring on Physical Privacy and Individual Liberty
- Higson-Bliss, Laura (Keele University). ‘Will Someone not Think of the Children?’ The Protectionist State and Regulating the ‘Harms’ of the Online World for Young People
- Hoekstra, Johanna (University of Edinburgh). Online Dispute Resolution and Access to Justice for Business & Human Rights Issues
- Hof, Jessica (University of Groningen) and Oden, Petra (Hanze University of Applied Sciences Groningen). Breaches of Data Protection by Design in the Dutch Healthcare Sector: Does Enforcement Improve eHealth?
- Holmes, Allison (University of Kent). Becoming ‘Known’: Digital Data Extraction in the Investigation of Offences and its Impact on Victims
- Jondet, Nicolas (Edinburgh Law School). The Proposed Broadening of the UK’s Copyright Exception for Text and Data Mining: A Predictable, Promising and Pacesetting Endeavour
- Joshi, Divij (University College London). Governing ‘Public’ Digital Infrastructures
- Kalsi, Monique (University of Groningen). Understanding the Scope of Data Controllers’ Responsibility to Implement Data Protection by Design and by Default Obligations
- Kamara, Irene (Tilburg Institute for Law, Technology, and Society). The Jigsaw Puzzle of the EU Cybersecurity Law: Critical Reflections Following the Reform of the Network and Information Security Directive and the Proposed Cyber Resilience Act
- Keese, Nina (European Parliament) and Leiser, Mark (Vrije Universiteit Amsterdam). Freedom of Thought in the Digital Age: Online Manipulation and Article 9 ECHR
- Kilkenny, Cormac (Dublin City University). Remediating Rug-pulls: Examining Private Law’s Response to Crypto Asset Fraud
- Krokida, Zoi (University of Stirling). The EU Right of Communication to the Public against Creativity in the Digital World: A Conflict at the Crossroads?
- Lazcano, Israel Cedillo (Universidad de las Américas Puebla (UDLAP)). DevOps and the Regulation of the “Invisible Mind” of the Digital Commercial Society
- Leiser, Mark (Vrije Universiteit Amsterdam); Santos, Cristiana (Utrecht University) and Doshi, Kosha (Symbiosis Law School). Regulating Dark Patterns across the Spectrum of Visibility
- Li, Wenlong (University of Birmingham) and Chen, Jiahong (University of Sheffield). Understanding the Evolution of China’s Personal Information Protection Law: The Theory of Gravity Assist
- Maguire, Rachel (Royal Holloway, University of London). Copyright and Online Creativity: Web3 to the Rescue?
- Mangan, David (Maynooth University). From the Workplace to the Workforce: Monitoring Workers in the EU
- Manwaring, Kayleen (UNSW). Repairing and Sustaining the Third Wave of Computing
- Mapp, Maureen (University of Birmingham). Private Crypto Asset Regulation in Africa – A Kaleidoscope of Legislative and Policy Problems
- Margoni, Thomas (CiTiP); Quintais, Joao (University of Amsterdam) and Schwemer, Sebastian (Centre for Information and Innovation Law (CIIR), University of Copenhagen). Algorithmic Propagation: Do Property Rights in Data Increase Bias in Content Moderation?
- Marquez Daly, Anna Helena (University of Groningen). Innovation & Law: Encouraging Lovers or Bitter Nemesis?
- Mathur, Sahil (The Open University). Digital Inequalities and Risks – Perspectives from FinTech
- McCullagh, Karen (University of East Anglia). Brexit UK Data Protection – Maintaining Alignment with or Diverging from the EU Standard?
- Mendis, Sunimal (Tilburg Institute for Law, Technology, and Society (TILT), Tilburg University, The Netherlands). Fostering Democratic Discourse in the (Digital) Public Sphere: Proposing a Paradigm Shift in EU Online Copyright Enforcement
- Milkaite, Ingrida (Ghent University). A Children’s Rights Perspective on Privacy and Data Protection in Europe
- Neroni Rezende, Isadora (University of Bologna). The Proposed Regulation to Fight Online Child Sexual Abuse: An Appraisal of Privacy, Data Protection and Criminal Procedural Issues
- Nottingham, Emma (University of Winchester) and Stockman, Caroline (University of Winchester). Dark Patterns of Cuteness in Children’s Digital Education
- Orlu, Cyriacus (PhD Candidate, Faculty of Law, Niger Delta University) and Eboibi, Felix (Faculty of Law, Niger Delta University). The Dichotomy of Registration and Operation of Cybercafes under the Nigerian Cybercrime Legal Frameworks
- O’Sullivan, Kevin (Dublin City University). The Court of Justice Ruling in Poland and Our Filtered Futures: A Disruptive or Diminished Role for Internet User Fundamental Rights?
- Oswald, Marion (Northumbria University); Chambers, Luke (Northumbria University) and Paul, Angela (Northumbria University). The Potential of a Framework Using the Concept of ‘Intelligence’ to Govern the Use of Machine Learning in Policing
- Paolucci, Frederica (Bocconi University). Digital Constitutionalism to the Test of the Smart Identity
- Paul, Angela (Northumbria University). Police Drones and the Possible Human Rights Issues: A Case Study from England and Wales
- Poyton, David (Aberystwyth University). The ‘Intangibles’: A Veritable Gordian Knot. Are we Slicing through the Challenges? Or Unpicking them Strand-by-Strand?
- Przhedetsky, Linda (University of Technology, Sydney) and Bednarz, Zofia (University of Sydney). Algorithmic Opacity in Consumer Markets: Comparing Regulatory Challenges in Financial Services and Residential Tenancy Sectors
- Quintais, João Pedro (University of Amsterdam, Institute for Information Law) and Kuczerawy, Aleksandra (Centre for IT & IP Law, KU Leuven). “Must-Carry” Obligations for Online Platforms: Between Content Moderation and Freedom of Expression
- Rachovitsa, Mando (University of Groningen). “It’s Not in the Cloud!”: The Data Centre as a Singular Object in Cybersecurity and Critical Infrastructure Regulation
- Ramirezmontes, Cesar (Leeds University). EU Trade Marks and Community Designs in the Metaverse
- Rebrean, Maria (Leiden University – eLaw – Center for Law and Digital Technologies). Giving my Data Away: A Study of Consent, Rationality, and End-User Responsibilisation
- Romero Moreno, Felipe (Hertfordshire Law School). Deepfake Technology: Making the EU Artificial Intelligence Act and EU Digital Services Act a Human-Rights Compliant Response
- Rosenberg, Roni (Ono Academic College, Law Faculty). Cyber Harassment, Revenge Porn and Freedom of Speech
- Rosli, Wan Rosalili Binti Wan (School of Law, University of Bradford) and Hamin, Zaiton (Faculty of Law, Universiti Teknologi MARA). The Legal Response to Cyberstalking in Malaysia
- Samek, Martin (Charles University, Faculty of law). New EU Regulation and Consumer Protection: Are National Bodies up to the Task?
- Scharf, Nick (UEA Law School). 3A.M. Eternal? What The KLF Can Teach Us about the Past, Present and Future of Copyright
- Shattock, Ethan (Maynooth University). Knowledge of Deception: Intermediary Liability for Disinformation under Ireland’s Electoral Reform Act
- Siliafis, Konstantinos (Canterbury Christ Church University) and Colegate, Ellie (University of Nottingham). Addressing the Potential Pitfalls of the UK’s Online Safety Bill’s Provisions in Relation to Adults
- Sinclair, Alexandra (LSE). ‘Gaming the Algorithm’ as a Defence to Public Law Transparency Obligations
- Soukupová, Jana (Charles University). Digital Assets, Digital Content, Crypto-Assets, Data and Others: Are We on the Road to a Terminological Confusion?
- Sumer, Bilgesu (KU Leuven). Keeping Track of the Regulation of Biometric Data within the EU Cyberlaw: Legal Overlaps and Data Protection Challenges
- Sümeyra Doğan, Fatma (Jagiellonian University). Re-Use or Secondary Use: A Comparison between Data Governance Act and European Health Data Space
- Sutter, Gavin (Queen Mary University of London). Qui Docet, Discit: Some Reflections on Lessons Learned Across Two Decades of Teaching an Online LLM
- Terzis, Petros (UCL) and Veale, Michael (UCL). Foundations for Regulating Computational Infrastructures
- Tur-Sinai, Ofer (Ono Academic College) and Helman, Lital (Ono Academic College). Bracing Scarcity: Can NFTs Save Digital Art?
- Unver, Mehmet (University of Hertfordshire) and Roddeck, Lezel (Bucerius Law School). Artificial Intelligence in the Legal Sector: Ethics on the Spot
- Urquhart, Lachlan (University of Edinburgh) and Boniface, Christopher (University of Edinburgh). Legal Aspects of the Right to Repair for Consumer Internet of Things
- Van Schendel, Sascha (Tilburg University). The Regulation of AI in Criminal Justice: Building a Bridge between Different Legal Frameworks
- Van ‘t Schip, Mattis (Radboud University). The Regulation of Supply Chain Cybersecurity in the EU NIS2 Directive: A Novel Approach to Cybersecurity for the Internet of Things
- Vellinga, Nynke (University of Groningen). Rethinking Compensation in Light of the Development of AI
- Verdoodt, Valerie (Ghent University) and Lievens, Eva (Ghent University). The EU Approach to Safeguard Children’s Rights on Video-Sharing Platforms: Jigsaw or Maze?
- Wang, Xiaoren (University of Dundee); Heald, Paul (University of Illinois) and Ge, Weihao (University of Illinois). Creatively Misinformed: Mining Social Media to Capture Internet Creators and Users’ Misunderstanding of Intellectual Property Registration System
- Williams, Elin (University of Liverpool, PhD Candidate in Law/Edge Hill University, Visiting Lecturer in Law). Money Laundering Through Cryptocurrency Mixers: Exploiting Existing Weaknesses in the Anti-Money Laundering Regime
- Wolters, Pieter (Radboud University). The Influence of the Data Act on the Shifting Balance between Data Protection and the Free Movement of Data
- Xiao, Leon Y (IT University of Copenhagen; QMUL; York; Stanford). Beneath the Label: Poor Compliance with ESRB, PEGI, and IARC Industry Self-Regulation Requiring Loot Box Presence Warning Labels by Video Game Companies
- Yardimci, Gizem, Aphra Kerr and David Mangan (Maynooth University). Protecting Elections in the Digital Age: Examining the Potential Regulatory Impact of the EU’s Draft AI Act on Political Bots
- Zardiashvili, Lexo (Leiden University). The End of Online Behavioral Advertising
- Zucchetti Filho, Pedro (Australian National University). Facial Recognition in Brazil: Current Scenario and Future Developments
1. Alessa, Hibah (University of Leeds) and Basu, Subhajit (University of Leeds). Technology and Procedure in Dispute Resolution: A Procedural Model of Reform for Saudi Arabia’s Commercial Courts or Top-Down Transformation?
Keywords: Technology, Artificial Intelligence, Innovations, Administration of justice, Saudi Courts
Abstract. The judicial system reaffirms the state’s legitimacy and represents its power to distribute burdens and benefits to citizens. Hence, the system is burdened by the very high expectations placed on it by the state and its citizens. However, it has been argued that the courts that comprise the Saudi judicial system continue to lose ground to ADR regimes, and that if the fledgling Saudi system is not reformed, it will further lose the confidence of domestic and foreign parties while parallel international tribunals thrive. Parties avoid court-centred justice for several reasons, such as procedural inefficiencies.
Nevertheless, the amendment of procedural law requires important innovations in court practice, process, and procedure. Along with exploring more routine forms of procedural reform and the digitization of court services, this article gives particular attention to innovations in the use of e-litigation platforms, blockchain technology and, controversially, the use of artificial intelligence models in courts to enhance communication, analysis, and adjudication in the judicial system. AI is used in this study to describe computer systems that make logical deductions normally associated with the human mind and perform tasks that require human intelligence. Despite decades of proposals and legislation related to information technology around the world, many national legal frameworks are being shaped by AI and automated technologies in particular. This further requires identifying the challenges that courts in other jurisdictions face and how they have mitigated them.
Therefore, the intended purpose of this article is to consider the broader direction of travel in court administration, any lessons that can be learned, and whether a technology-centred agenda offers a better roadmap for reform for Saudi courts. As to its scope, the analysis aims to document the full spectrum of digital transformations that have emerged across jurisdictions. In addition, key innovations in procedure and technology in legal processes of a more general nature will be explored to show how such endeavours raise deeper questions about the nature of justice itself and the capacity of innovative mechanisms to widen access to justice and enhance efficiency and fairness in the administration of justice.
This article will conclude that technology has numerous benefits. It can be used to widen, reengineer, or even reimagine access to justice, fostering more efficient processes of dispute settlement and offering alternatives to the paper-based systems and lawyering that have come to slow down traditional court functions. On the other hand, however, concerns remain over technologies that rely disproportionately or inequitably on automated judgments or machine learning, sometimes with the implied or express aim of placing such processes beyond the realm of legitimate (judicial) review and legal contestation. The enterprise of law involves making choices and balancing different interests and is, therefore, by its nature a subjective, discretionary, value-laden enterprise. This article therefore suggests that special attention should be paid to undertaking comprehensive research to understand the feasibility of an advanced technology and AI-driven system to reform and develop the Saudi courts. A regulatory framework based on global best practices should then be adopted.
2. Aridor Hershkovitz, Rachel (Israel Democracy Institute) and Shwartz Altshuler, Tehilla (Israel Democracy Institute). Cybersecurity Regulations – A Comparative Study
Keywords: cybersecurity, regulation, market failures, government intervention, cyberattack, critical infrastructure, hard/centralized command-and-control regulation, soft/decentralized command-and-control regulation, collaborative regulation, self-regulation
Abstract. In December 2022, Israel’s State Comptroller published a disturbing report claiming that there is a systemic problem with Israel’s cyber defense readiness, which seems to derive from a lack of incentives and sanctions to promote the creation of defense mechanisms against cyberattacks, and from digital illiteracy among cyberspace users and policymakers.
The unique features of cyberspace, especially hyperconnectivity and the speed of information transfer, are at the heart of both the great benefits it brings to society and the huge dangers it poses. The tremendous damage liable to be caused by cyberattacks, combined with the absence of adequate incentives for investment in cyber protection, has created a market failure that justifies government intervention in the regulation of cybersecurity.
Government intervention in the regulation of cyber protection faces several challenges, however. Some of these are technological challenges, while others stem from the complexity of the cyber protection world and its cross-sectoral nature.
Another major challenge is that governments play multiple roles regarding cyberspace, wearing different “hats” that sometimes conflict with each other: they own critical infrastructure; are responsible for national security; act as a regulator for private-sector entities that possess cyberinfrastructure and are responsible for protecting it; play an active role in public and private cooperative efforts for cyber protection; act on the international level with and against other countries in an effort to protect cyberspace, whose geographical boundaries are blurred; produce and disseminate information regarding cyber protection; and finally, they can serve as a cyber attacker that poses threats to other states or organizations.
Western countries, including Israel, have been engaged for several years in attempts to regulate cyber protection. What these various attempts have in common is the adoption of a conceptual approach underlying effective regulation of cyber protection: that responsibility is shared by all actors, and that the regulation of cyberspace should not apply only to critical infrastructure or focus solely on the public sector. At the same time, the scope of this responsibility, the type of regulation that is appropriate, and the regulatory tools chosen should be determined based on the anticipated level of risk to the public interest from a successful cyberattack against each actor or sector. This approach is similar to the principle of “common but differentiated responsibilities” that has become standard in international law in the context of environmental protection and mitigation of climate-change harms.
Our study surveys cyber protection policy in several countries: the United States, Australia, UK, the European Union and two of its member states (Denmark and France), and Israel. The different countries employ a variety of regulatory tools to protect cyberspace: hard/centralized command-and-control regulation; soft/decentralized command-and-control regulation; collaborative regulation; and self-regulation. The degree of responsibility of each actor in cyberspace, and consequently the regulatory tool selected to regulate cyber protection, are determined according to an assessment of the risk to important national interests posed by a cyberattack on a particular organization or on organizations in a particular sector. Therefore, the definition of these important national interests is the key to understanding the scope of state intervention in the market in order to protect cyberspace.
3. Ashok, Pratiksha (UC Louvain). A Tryst with Digital Destiny – Comparative Analysis on the Regulation of Large Platforms between the European Digital Markets Act and the Indian Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules
Keywords: Digital Markets Act, Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, Large Platforms, Significant social media intermediaries, European Union, India
Abstract. At the stroke of the century, the world awoke to a digital transformation that changed the course of lives. As countries draft legislation and regulations fit for tomorrow, the legal environment is saturated with digital policies. However, in this sea of legislation, there is a need for a lighthouse, a guiding source to provide clarity on concepts and regulatory impacts while keeping in mind sovereign necessities and national legislation.
As the European Union (EU) implements its Digital Markets Act, 2022 (DMA), with the intent of protecting consumer welfare and restoring a level playing field, the world is introduced to a new era of regulation of the digital economy. The DMA regulates the operations of gatekeepers: platforms that significantly impact the internal market, serve as a gateway for businesses and end-users, and enjoy a durable position in the market. Though the list of gatekeepers has not been released, it is clear from the explanations that they refer to the big tech companies Amazon, Facebook, Google, Apple, and Microsoft.
In a similar timeline, India enacted path-breaking legislation in the form of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 (IT Rules). These Rules regulate all intermediaries, including social media, on-demand and significant social media intermediaries. The Rules did not state what significant social media intermediaries were, giving the Central Government the power to notify the same. On the same day, the Central Government notified that a social media intermediary would be considered a significant social media intermediary if it has fifty lakh (5 million) users in India.
Two crucial differences are observed. Firstly, India’s legislation covers all social media intermediaries, including significant social media intermediaries, whereas the DMA applies only to gatekeeper platforms that perform core platform services, which include social media services. Secondly, the DMA defines gatekeepers based on certain criteria, such as a significant impact on the internal market, a durable position, the number of users, entry barriers, and structural market characteristics. This definition contrasts with the IT Rules, where the subsequent notification prescribes only a threshold of users. However, there is a point of similarity in the objective of regulating large platforms. As jurisdictions around the globe pass legislation on the regulation of platforms, this paper attempts to analyse one aspect of the regulation of the digital economy: large platforms.
This research compares the DMA and the IT Rules in depth, including the scope of each instrument, and investigates the teleological reasoning behind the phenomenon of large platforms and their regulation in India and the EU. It does not attempt to answer whether one system is better than the other, but to stimulate academic discussion of different systems and to view different regulatory systems under a critical magnifying glass. A comparative methodology is adopted to draw out the differences in regulation from the perspective of large platforms in the EU and in India.
4. Barker, Kim (Open University Law School/ObserVAW). Online Violence Against (Women) Gamers: A Contemporary Reflection on Regulatory Failures?
Keywords: online violence against women (OVAW), online games, online harms, harmful communications, social media
Abstract. Online violence against women (OVAW) and online abuse have been acknowledged as obstacles to gender equality in physical and digital spaces. These forms of discrimination and harassment threaten women’s ability to participate fully and freely in digital life, across social media platforms, websites, messaging apps, and – increasingly visibly – online games. These spaces are integral to interaction, and yet the scale of social media abuse in the form of OVAW has raised questions about the appropriate responses, both regulatory and legal.
In growing recognition of the problematic phenomenon that (now) encompasses OVAW, an increasing number of sectors have identified potential responses. From addressing OVAW through so-called ‘harms-based models’, to legislation to capture ‘online hate’ or ‘harmful communications’, the dominant narratives have focussed on online safety rather than on tackling pernicious forms of behaviour through nuanced regulatory responses beyond simply legislative reform.
As discussions surrounding the metaverse and fediverse begin to dominate tech narratives, questions remain about how to regulate today’s interactive online platforms, before even considering technological developments in virtual and augmented reality contexts. This paper offers a contemporary assessment of the responses to OVAW, placing this harmful phenomenon in the context of online games and assessing the scope and scale of the problem, before analysing the responses of leading online games to OVAW. The paper concludes by questioning where games – and women – go from here.
5. Barrio, Fernando (Queen Mary University of London). Climate Change Implications of Unregulated Technological Energy-Efficiency
Keywords: sustainability, energy consumption, technology regulation, vampire energy, climate change
Abstract. The world already feels the reality of climate change, and the UN’s Intergovernmental Panel on Climate Change has clearly stated that the situation will worsen if decisive action is not taken immediately. Consequently, policymakers at all levels have made climate change a central issue to tackle through plans and actions, with the use of technology one of the options both to mitigate the warming of the planet and to adapt to the realities of a warmer world. However, more attention needs to be paid to the emissions produced by the intensive and extensive use of different technologies, including when technological devices are not in use.
Vampire energy, also known as standby power or phantom load, refers to the energy consumed by electronic devices and appliances when they are turned off or in standby mode. With the multiplication of devices at the personal, household and institutional levels, this type of energy consumption has become a significant contributor to overall energy use and carbon emissions on a global scale. To that it must be added that sustainability is not necessarily a factor in the design or programming of those devices or the systems that run on them, not to mention entire technological developments that inherently consume vast amounts of energy, such as cryptocurrency mining.
In relation to vampire energy, some regions and countries have laws and regulations in place to limit standby power consumption, such as, in the EU, Regulation (EC) No 1275/2008 and Regulation (EU) No 801/2013 under the Ecodesign framework, or, in the US, the California Appliance and Equipment Energy Efficiency Standards. It is important to note, however, that while these instruments cap the amount of energy that devices can use while in standby, the multiplication of devices makes those limits, low as they are individually, insufficient given the impact that phantom loads are currently having on the planet.
From the software point of view, even energy-performance software, including the tools that produce Energy Performance Certificates, does not seem to be tested for energy efficiency, and the regulatory framework is silent on the matter. The (lack of) legal requirements relating to the sustainability of computer software extends to apps and systems used daily by hundreds of millions of people around the world. To give a simple example, many navigation systems used by drivers present as the preferred route the one that is shortest in time, even if it shaves only one minute off a two-hour trip while tripling the distance, multiplying petrol use and carbon emissions to save one or a few minutes; sustainability is not programmed in as a consideration, only time-efficiency is.
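To make the route-choice point concrete, the following is a minimal sketch (our illustration, not the paper’s; the routes, weights and emissions factor are invented) of a route-scoring function in which emissions could be weighted alongside travel time – a weight that, as the abstract notes, current navigation systems effectively leave at zero:

```python
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    minutes: float  # travel time
    km: float       # distance driven

# Rough illustrative emissions proxy: grams of CO2 per km for an average petrol car.
CO2_G_PER_KM = 170

def route_cost(route: Route, time_weight: float = 1.0, co2_weight: float = 0.0) -> float:
    """Score a route; a time-only navigator effectively uses co2_weight = 0."""
    co2_g = route.km * CO2_G_PER_KM
    return time_weight * route.minutes + co2_weight * co2_g

routes = [Route("fast", minutes=119, km=180), Route("short", minutes=120, km=60)]

# Time-only ranking prefers the 180 km route to save a single minute...
print(min(routes, key=route_cost).name)  # -> fast

# ...while even a small emissions weight flips the preference to the 60 km route.
print(min(routes, key=lambda r: route_cost(r, co2_weight=0.005)).name)  # -> short
```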
The paper first explains the energy uses of different technologies, such as devices in standby, software and digital environments; then analyses the current regulatory framework affecting such use, including recent court decisions imposing liability on non-tech companies for their emissions; and ends with a proposal for comprehensive regulation of energy use by new technologies aimed at reducing their climate-change impact.
6. Barrio, Fernando (Queen Mary University of London). Legal, Fair and Valid Assessment in Times of AI-Generated Essays
Keywords: AI, Assessment methods, Copyright, Data protection, Higher education
Abstract. Back in 1859, Charles Dickens enunciated the famous words “[i]t was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness […] it was the season of Light, it was the season of Darkness…”, and it could be argued that he was writing about today’s situation in relation to the use of certain technologies to support human endeavours, with assessment in higher education not being an exception.
The advent of readily available, and mostly free, artificial intelligence tools based on transformer language models for natural language processing (NLP), which allow students to produce university essays in a matter of seconds and with no further input, raises a number of legal and pedagogical issues that need to be considered. The issues are multiple and will likely keep expanding, but this paper centres on copyright, data protection, and the honesty and integrity of the assessment process, with a focus on UK law and global pedagogical practices.
Since the threshold for originality is relatively low in the UK, in most cases the AI tool would create original works which, in principle, could attract copyright protection as computer-generated works under the UK Copyright, Designs and Patents Act 1988 (CDPA). According to that piece of legislation, the author would be “the person by whom the arrangements necessary for the creation of the work are undertaken”, which is not as clear as it seems. The instructions for the essay are given by the teacher; the student simply enters them but gives the “order”; and the content is generated by the AI tool. So the question is who owns the copyright in the resulting essay: the teacher, the student, the AI tool, as in the Chinese case of 2020, the creator of the algorithm, the owner of the tool, or who? If the student is the sole copyright holder, they may be able to submit the essay as their own work, but if it is found that someone else is the copyright holder, the sole act of submitting it can be deemed copyright infringement.
Algorithmic systems can require the input of personal data, which can be stored by the AI tool and/or its owner. The current systems make no reference to any of the data protection principles, nor do they seek permission for the processing of personal data. As the AI tools will need to be monetised at some point, would that include the use of the data for tailored advertising, as on current social media platforms? If so, the situation raises concerns about the protection of the data and the legality of its processing.
The use of the aforementioned AI also raises pedagogical concerns, with the honesty and integrity of the assessment process central to them.
The paper analyses these legal and pedagogical issues, as well as potential ways forward both in law and assessment practice.
7. Blakely, Megan Rae (Lancaster University). Cyberlaw of Massive Multiplayer Online Games: Copyright and Deauthorization of Dungeons & Dragons
Keywords: mmorpg, games, copyright
Abstract. MMOs have sharply risen in popularity and sophistication over the past few decades. Although their populations are not as numerous as those of social media platforms like Facebook, MMOs are more complex platforms with heavily engaged, symbiotic user communities, and they rely on users’ creative – and sometimes copyrightable – contributions, governed largely by Terms and Conditions (T&Cs). This has required early navigation of the legal issues surrounding copyright and user rights in virtual spaces and, less obviously, of the cultural impact of the same.
Legally, MMO gaming contractual enforceability is undertested. The T&Cs often require arbitration, which leaves little legal evidence or precedent as to how disputes are resolved. Many game companies are based in the United States, which has less strict consumer protection in relation to contractual terms than the EU. Cases brought regarding the nature of how these intangible assets are provided to the player have been compelled into arbitration in adherence with the T&Cs, leaving the other pertinent issues to confidential proceedings with no legal precedent. MMOs rely heavily on their relationship with the player community, and the players rely on the representations made through community leaders and through the legal instruments.
One of the most established multiplayer games, Dungeons and Dragons (D&D), has recently modified its T&Cs to revoke many of the rights users retained in their creative contributions to the virtual world. Previously, all users were required to comply with and attach an Open Game License (OGL) to any community material, requiring the content to be open to any user, amongst other terms. D&D has now ‘deauthorized’ its OGL 1.0, despite it previously being issued as ‘perpetual’.
This action reflects concerns that game developers rely heavily on community contributions for profit whilst retaining all of the rights to the possibly copyrightable material. The outward-facing representations often do not align with the text of the T&Cs, but the generally accepted relationship between developer and users is one of tolerated (and sometimes even encouraged) infringement. However, the recent D&D actions indicate that developers may indeed intend to enforce their rights after all, signalling a shift in the way user creative contributions may evolve.
8. Brown, Abbe (University of Aberdeen). Can You Really Get Your Act Together?
Keywords: governance, control, data
Abstract. Technology and data can bring opportunities beyond the “cyber”, notably increased societal benefits in relation to environmental sustainability and energy and health.
This breadth brings the prospect of clashes between regimes (public and private) set up to address the wider goals and those regimes which focus on technology and data, as a result of their different priorities and timelines. Examples are a focus on rewarding innovation and controlling information in the short and longer term (such as through intellectual property, freedom of information, trade secrets and data protection), versus a more immediate collaborative and sharing approach to ensure that the societal goal is met. The potential for such a clash is present whenever new regimes are created, yet it is rarely given regard. Is a combined approach nevertheless being seen through the European strategy for data?
This research forms part of a wider project analysing stances being taken directly to data and technology by actors (legislators, regulators, funders, negotiators, activists) in different forms of law, governance and regulation (relating to marine biodiversity, environmental data, health research and energy) which have their own focus on wider societal goals. The wider project argues that there is a need for more holistic and less fragmented consideration of varied areas of law and values (such as sharing or control), as a means of achieving wider societal goals at a more structural and substantive level.
This paper will explore ongoing developments at EU level, to evaluate the extent to which a more holistic approach is being taken by the European strategy for data. With goals of data sovereignty and global competitiveness, the strategy includes the Data Governance Regulation, the Data Act, and the Directive on open data and the re-use of public sector information. These new approaches to non-personal data also sit alongside plans for the intersection and sharing of information through European data spaces – which include health, the green deal and energy.
The paper will compare this landscape, and the pathway to it, with emerging findings from elsewhere in the project. It will evaluate the risks and benefits of the European data strategy for wider proposals regarding the effective delivery of aligned and parallel regimes, and for the possibility of implementing a hierarchy or overarching common goals.
9. Cavaliere, Paolo (University of Edinburgh Law School) and Li, Wenlong (University of Birmingham). Examining the Legitimacy and Lawfulness of the Use of Facial Recognition Technology in Public Peaceful Assemblies: Towards a Reconceptualisation of the Right to Freedom of Assembly in the Digital Era
Keywords: Freedom of Assembly, Facial Recognition Technologies, Artificial Intelligence Act, European Court of Human Rights, Surveillance, Privacy
Abstract. The growing use of facial recognition technologies (FRTs) in publicly accessible spaces, and particularly in the context of assemblies, has attracted severe criticism and resistance worldwide. This particularly worrying use of FRTs sits at the intersection of deeper changes concerning both the increasing capabilities of surveillance technologies and security concerns at the societal level, leading in turn to the fast uptake of preventive measures against perceived threats to national security (Fura and Klamberg 2012) and marking a distinct shift from reactive to proactive policies in this field (De Hert 2005), especially in the wake of the 9/11 attacks.
In response to this growing and now long-lasting trend, academic voices have raised a diverse range of concerns about the impact of FRTs on a number of human rights, both online and offline. Most commentators have noted how these technologies impinge on the rights to privacy and data protection. As such, FRTs are largely conceptualised as a breach of privacy first and foremost, with limited focus, in comparison, on the specific link between these technologies and other human rights such as the right to freedom of assembly. Similarly, in the case-law of the European Court of Human Rights, the impact of FRTs on freedom of assembly has so far struggled to emerge as a discrete issue, oftentimes considered in conjunction with other rights like freedom of religion and expression. On several occasions, the Court has used a particular choice of words, declaring a violation of Article 11 ‘interpreted in the light of’ Article 9 or Article 10.
In stark contrast to the limited academic conversation, civil society voices have been quicker to raise the alarm, noting how FRTs can exert an even stronger ‘chilling effect’ than other surveillance tools (GCN, 2020) and disproportionately target minorities (Amnesty International, 2021).
The lack of a thorough conceptualisation of the specificities of the right to freedom of assembly and its interplay with FRTs is also apparent from the current legislative proposals put forward by European institutions. While the Council of Europe’s legal framework does not ban facial recognition, the EU has put forward a proposal for an EU artificial intelligence (AI) act, unveiled in April 2021, that would significantly limit the use of biometric identification systems, including facial recognition. If the Act passes in its current form, the use of real-time facial recognition systems in publicly accessible spaces for the purpose of law enforcement would not be prohibited as such: Member States could set up judicial or administrative authorisations as deemed appropriate for identifying terrorists, finding missing children, and fighting serious crimes. But such use remains contested in situations of public peaceful assemblies, despite these seemingly legitimate reasons for use.
As EU authorities take steps to regulate FRTs and the European Court of Human Rights admits cases of FRT use in public assemblies, this paper aims to contribute to a firm understanding of facial recognition as a direct restriction on freedom of assembly in its own right. We propose a framework of analysis which, by reflecting on the innate tensions between the right to freedom of assembly and security policies, seeks to refocus the debate on the regulation of FRTs on their impact on freedom of assembly and the urgent need to safeguard it adequately.
10. Celeste, Eduardo (Dublin City University). The Digital Constitutionalism Teaching Partnership: Connecting Virtual Learning Spaces with an Interdisciplinary Toolkit
Keywords: legal education, teaching partnership, digital rights, interdisciplinary teaching toolkit, virtual learning spaces
Abstract. The paper analyses the benefits and challenges of setting up an international and interdisciplinary teaching partnership on digital constitutionalism. Since 2019, the Digital Constitutionalism Teaching Partnership has brought together seven European universities (Dublin City University, Helsinki, Salerno, Bremen, Padova, Goldsmiths London and Maastricht), offering synchronous and asynchronous digitally-connected learning spaces to more than 200 students every year. The partnership has an interdisciplinary character, aiming to advance students’ knowledge and practical skills in the field of digital rights from multiple disciplinary perspectives (law, political science, sociology, communication studies, international relations).
The first part of the paper reconstructs the context in which the teaching partnership emerged, clarifying the positioning of this pedagogical initiative in the existing literature. The partnership did not originally have any structural support from the partner universities. It stemmed from occasional collaboration among researchers working in the field of digital rights and constitutionalism who teach undergraduate and postgraduate modules covering IT law or digital policies. The Covid-19 pandemic facilitated the organisation of synchronous online sessions, which were subsequently maintained in the 2021-2022 hybrid edition of the teaching partnership and integrated with in-person activities, including travel abroad. The denomination ‘teaching partnership’ does not receive a univocal definition in the literature, and no references to teaching partnerships in law were found. The paper will therefore propose a definition of the main elements characterising the digital constitutionalism teaching partnership.
The second part of the paper investigates the benefits of the teaching partnership for both students and teachers. From a student perspective, the teaching partnership contributes to enhancing the internationalisation of classes, interculturality, interactivity, student motivation and the acquisition of digital skills through a variety of assessment methods. From the teachers’ perspective, the initiative allows them to develop a common teaching toolkit, benefitting from practices used across various disciplines, to crowdsource forms of assessment, and to promote student involvement in public policy initiatives. In various editions, indeed, the teaching partnership relied on research-led activities to involve students in the drafting of policy documents that were later presented at public events with relevant law- and policy-makers as well as other stakeholders.
The third part of the paper examines the multiple challenges of this initiative. Logistical issues to solve included timetabling lectures across multiple time zones and the use of online platforms, in particular given the stratification of edtech platforms following the Covid-19 forced digitalisation of teaching. Data protection emerged as a complexity to address, due to the exchange of student data for assessment purposes. The absence of structural funding for these types of initiatives at the partner universities originally led the lecturers involved in the partnership to cover student travel expenses from their own funds. The recent introduction of funding under Erasmus+ for Blended Intensive Programmes is explored as a potential solution to this issue. In particular, the paper examines the differences between the activities so far performed in the context of the teaching partnership and the definition of Blended Intensive Programmes according to the 2022-2023 Erasmus+ call.
11. Chomczyk Penedo, Andres (Vrije Universiteit Brussel). The Regulation of Data Spaces under the EU Data Strategy: Towards the ‘Act-ification’ of the 5th European Freedom for Data?
Keywords: data spaces, data protection, free flow of data, digital single market
Abstract. Data, both personal and non-personal, is necessary for the development of a data-driven digital economy; while generating datasets can be costly, making existing ones available by sharing them can be a sensible alternative to pursue. But how exactly can this take place? The current landscape around databases is one of fragmentation and lack of interconnection. In this sense, data spaces have been proposed in the EU Data Strategy as the necessary infrastructure to enable an EU flourishing digital economy. However, it is surprising that their regulation is, for the time being, relatively scarce and spread out, with notable sectoral exceptions such as the European Health Data Space.
Not all information is equal, as it is possible to distinguish between personal and non-personal data. Different rules have emerged in the last decade, and even more so in recent years, to tackle data-related legal challenges: from the General Data Protection Regulation to the Free Flow of Non-Personal Data Regulation, but also including the recent Data Governance Act and the Data Act proposal. However, the boundaries between these two legal categories are becoming blurrier, particularly as non-personal data can be combined using AI-powered tools to re-identify the individuals to whom such data relates.
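The re-identification point can be illustrated with a toy linkage attack (our sketch, not the authors’; every record, name and field below is invented): two datasets that are unremarkable in isolation single out an individual once joined on shared quasi-identifiers.

```python
# "Anonymised" sensor data: no names, only quasi-identifiers plus a sensitive attribute.
mobility = [
    {"postcode": "1012", "birth_year": 1985, "route": "home -> clinic"},
    {"postcode": "3511", "birth_year": 1990, "route": "home -> office"},
]

# A public register: names and the same quasi-identifiers, nothing sensitive.
voters = [
    {"name": "A. Jansen", "postcode": "1012", "birth_year": 1985},
    {"name": "B. de Vries", "postcode": "3511", "birth_year": 1990},
]

# Joining on (postcode, birth_year) re-identifies the "non-personal" records.
for m in mobility:
    matches = [v for v in voters
               if (v["postcode"], v["birth_year"]) == (m["postcode"], m["birth_year"])]
    if len(matches) == 1:  # a unique quasi-identifier combination suffices
        print(f'{matches[0]["name"]} travelled {m["route"]}')
```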
While all these rules put in place requirements to process information in a manner that does not compromise fundamental rights, those very same rules acknowledge that data should flow between those who need it. In this sense, certain EU lawmakers have proclaimed the emergence of a 5th European freedom relating to data, information, and knowledge flows, depending on the definition. It would be enshrined alongside the traditional freedoms for persons, goods, services, and capital flows that have constituted the core of the European Union and its economic integration process. As part of the Digital Single Market, data spaces might be necessary for the realization of this novel freedom.
While the catalogue of data-related rules is expanding, particularly through what some scholars have called ‘act-ification’, alongside ‘GDPR mimesis’ and ‘EU law brutality’, data spaces, despite their supposed key role, remain substantially in the regulatory shadows. While there is a proposal for the European Health Data Space on the table, the particularities of the health services sector, such as the abundance of sensitive data or patient-practitioner confidentiality duty, may cast doubts over whether this will be a blueprint used for further regulation.
Therefore, this contribution will explore what data spaces are from a regulatory perspective by tracing their policy origins and their current reception in existing legislation. By doing so, we will try to answer whether the EU lawmaker is effectively consolidating a 5th European freedom – a right to free data flows – and how it reconciles with existing fundamental rights, particularly the right to personal data protection. Ultimately, this contribution will seek to answer whether a general-purpose regulation for data spaces is necessary to ensure a certain degree of cohesion across sectors.
12. Clifford, Damian (Australian National University) and Paterson, Jeannie (University of Melbourne). Banning Inaccuracy
Keywords: Data protection, Consumer protection, Consent, Bans, Paternalism
Abstract. Ubiquitous personal data processing and the effects of personalisation have raised well-documented concerns. Concerns have also been explored relating to the accuracy of consumer services derived from intensive data processing. These kinds of services – from step counting to emotion detection and mood monitoring – are typically presented in the form of ubiquitous devices or products. But they raise real concerns about the efficacy or accuracy of what is being offered, as they typically rely on approximations and predictions that are unverified and opaque.
How does the law treat such concerns? And what is the appropriate regulatory response? At the centre of these debates is the data subject/consumer and their protection. The current regulatory approach assumes our capacity to choose as active market participants. However, several authors have argued that individual consent, established in data protection and privacy law as the means of legitimising the personal data processing inherent to such devices, is ineffective. There is also a growing literature focusing on how data protection and privacy law and consumer protection (and indeed, contract law) can work together to bolster this ideal of the rational individual at the centre of these protections. This response typically relies on producing clearer forms of notice or disclosure and demanding stronger manifestations of consent. But many scholars have called for more paternalistic interventions.
In previous work we have highlighted that this does not have to be a binary choice: we believe that paternalistic interventions in the form of targeted bans would improve the environment in which consent can function properly, without removing the fictional rational data subject/consumer entirely from the framework.
The aim of this paper, then, is to explore the circumstances in which a technology may be banned/blacklisted. To contextualise the discussion, there will be a particular focus on products/services where there are clear and well-documented accuracy concerns, such as emotional artificial intelligence. The paper will highlight how, although relevant, data protection and privacy law is incapable of being the regulatory solution to the problem presented. The dangers need a regulatory approach focused not only on the protection of personal data but also on the downstream effects associated with the use of such data. This is the role of consumer law and policy.
Importantly, we believe that this does not eliminate the role for consent but instead may make it more meaningful when it is given. In other words, our argument is that there is a role for more paternalistic interventions such as bans but that this does not remove the role for the individual consumer/decision maker.
The paper will therefore analyse the role of consumer law in regulating inaccuracy. It will draw a distinction between inaccuracy as (1) a fault and (2) an inherent feature of consumer products and services. The aim is to frame the relevant considerations in identifying when bans may be justified. In doing so, the paper will interrogate the challenges associated with the ex ante regulation of inaccuracy and the policy and theoretical debates inherent to any paternalistic intervention (more specifically, in the form of bans) when pursuing the primary goals of consumer law: promoting consumer autonomy and protecting consumer welfare.
13. Cooper, Zachary (VU Amsterdam). The Utility of Incoherence: How Legislating the Present Confuses the Future
Keywords: The Utility of Incoherence, Regulation of Emergent Technologies, Blockchain, Technology Within the Law, Algorithms in the Judiciary
Abstract. A glut of legislation has been passed in recent years in pursuit of greater control over internet architectures and the behaviour they harbour. This emergent law seeks to massage cybertechnologies into line with its own ideologies. However, the relationship is a mutual one, and legislation is itself moot if it is not beholden to the functionality and ideologies of that which it seeks to regulate. As these architectures have become progressively more entangled and co-dependent, the law has become less malleable to other, disparate emergent technologies. This web of legislation may therefore backfire where it is fundamentally incapable of regulating architectures upon which its application is entirely incoherent. Thus, even legislation which is ostensibly tech-neutral will not be able to meaningfully regulate technologies which were not considered at the time of drafting. We may look, for example, to the fundamental incoherence of applying the General Data Protection Regulation to public permissionless blockchains.
Thus, where much discourse around emergent technologies is often built around whether a technology is going to be able to replace the functionality of a prevailing technology, the greater challenge to the law in fact comes with non-replacement. As disparate emergent technologies of increasing sophistication regulate behaviour through their own in-built privatized regulatory infrastructures, reinterpretation of existing legislation in an attempt to stretch its remit risks fundamentally endangering its pre-existing coherence. The greater the level of sophistication and depth of coherence between prevailing technological and legal infrastructures, the more this danger exacerbates. Thus, as with the private community resolution of the Decentralized Autonomous Organization (DAO) Hack, which actively avoided a judicial intervention that would have clarified the legal character of any number of new concepts and entities, we may find that we are more comfortable allowing these technological infrastructures to privately regulate themselves, rather than undermine the functionality of our current legislative web.
Thus, paradoxically, a greater density of cyberlegislation may in fact lead to a future of even lower regulatory control, as the increasing specificity of regulatory application can be more readily exploited by design, with multiple fringe architectures co-existing with functionalities intentionally abstracted from legal regulatory models. I refer to this speculative future as “the regulatory multiverse”, in which a centralised, controlled framework distracts from a peripheral landscape where fragmentation and legal incoherence abound.
Is such a future improbable, fundamentally limited by our inability to develop technological architectures of requisite innovation or utility at a fast enough rate to create such confusion? Or will the inherent utility of incoherence be exploited by design, as it currently is by AI and blockchain technologies? And how can we avoid such a future if not through drafting ever more bespoke legislation, deepening the web of coherence to be exploited? Critically, if the law is to maintain its regulatory sovereignty over the future, it may need to disentangle itself from the architectures of the present.
14. Da Rosa Lazarotto, Bárbara (Vrije Universiteit Brussel). The Right to Data Portability: An Holistic Analysis of GDPR, DMA and the Data Act
Keywords: The right to data portability, GDPR, Digital Markets Act, Data Act
Abstract. The right to data portability is a right enshrined in the General Data Protection Regulation which aims to empower data subjects by giving them more control: the right to obtain a copy of their personal data and the right to transfer data directly from one controller to another. Most recently, the Digital Markets Act and the Data Act Proposal also touched on the right to data portability, adding new nuances to this right. However, due to many factors – such as the lack of proper regulation, technical capability, and data protection deadlocks – the right to data portability has found little or no application in reality. Due to this ineffectiveness, data subjects are left in a grey zone, having little control over their data, which benefits data controllers that hold on to data that would otherwise be ported to other controllers. In this context, this paper explores the complementarities and conflicts between the right to data portability as enshrined in the General Data Protection Regulation, the Digital Markets Act and the Data Act Proposal, taking into consideration the underlying objectives of these regulations: the protection of data subjects’ personal data, the regulation of digital markets and the development of the European data economy through the free flow of data. Through this paper, we propose not only a comparative analysis of the right to data portability but also a holistic analysis of the tangible application of the right, asking whether these regulations permit its application for the benefit of data subjects or maintain the status quo.
15. De Amstalden, Mariela (University of Birmingham). Future Technologies and the Law: Regulating Cell-Cultivated Foods
Keywords: future technologies, sustainability, global governance and regulation
Abstract. It is no longer a science fiction tale. In December 2020, a table of four sitting in a luxurious Singaporean private members club was served the world’s first dish made with lab-grown meat ever sold in a restaurant. Garnished with bean puree and accompanied with a bao bun and waffles, these chicken nuggets of the future had only been approved for sale by the Singapore Food Agency mere weeks before, after a lengthy (if opaque) inspection process that had lasted over two years. The moment was significant because it marked the operationalisation of the very first regulatory approval anywhere in the world for foods produced using cell-cultivation technology.
Such future technologies are redefining fundamental elements of our life, and these innovations promise to change the way we perceive, behave, socialise and even eat. Lab-grown, cell-cultivated or ‘cultured meat’ is estimated to become widely available for sale directly to global consumers imminently. Provided that the technological scale-up for mass consumption is successful, the multitrillion-dollar global meat market appears to be on the verge of a disruption unlike anything seen in times past. Cultured meat, with its cells grown in bioreactors rather than obtained by slaughtering animals, has been credited with the potential to have far-reaching effects on climate change mitigation, food security and animal welfare.
This presentation explores the role of global governance mechanisms, in particular experimentalist governance, in responding to the array of issues that future, transformative (bio)technologies pose for the law: from responses to risks in light of scientific uncertainty, labelling and consumer protection, restrictions on international trade and intellectual property, to raising ethical and philosophical questions. While there appears to be a lack of an integrated understanding of the nature, causes and implications of regulatory shifts (if any) addressing future technologies, this presentation asks whether and to what extent effective, agile and responsive legal frameworks can and should be designed to promote innovation that aims at tackling pressing global challenges.
Based on the premise that scarce responsiveness in regulatory frameworks has the potential to significantly stifle innovation, this presentation will also deliberate about the potential to construe cell-cultivation technology as a ‘technology of abundance’. It ultimately reflects on the continuing emergence of novel forms of global governance – understood as a highly dense and complex cooperative system of entities that are public and private, international and regional – in spite of increasing tendencies towards geopolitical fragmentation and de-globalisation.
16. De Conca, Silvia (VU Amsterdam). The Present Looks Nothing like The Jetsons: A Legal Analysis of Deceptive Design Techniques in Smart Speakers
Keywords: deceptive design, dark patterns, data protection, consumer protection, Smart speakers
Abstract. This paper maps the deceptive design techniques deployed by Alexa and Google Assistant to manipulate users into sharing more data and buying products, discussing how the GDPR, UCPD, DSA, and AI Act apply to them. The goal is to identify potential overlaps, conflicts, or gaps in the law, proposing solutions to empower users and foster a healthy digital market.
Amazon Alexa and Google Assistant are virtual assistants (VAs): software that allows users to operate smart devices via voice commands. VAs are embedded into smartphones or purpose-built speakers, and are marketed to consumers as the personal assistants that will simplify the lives of the whole family. The natural language interaction and the capabilities of VAs are powered by advanced machine learning and by the collection of large amounts of personal data of users. In order to build a long-term relationship with the users and make sure they share data on an almost constant basis, VAs have been designed to prompt and stimulate individuals to interact with them, or to act on their prompts by purchasing goods or visiting web pages.
This persuasion is obtained using various deceptive design techniques (also known as dark patterns). Many deceptive design techniques have already been identified in relation to websites, especially e-commerce and social networks. However, because of the vocal interaction, some of the deceptive design techniques used by VAs present innovative features and a peculiar functionality.
By mapping the techniques used by Amazon and Google, the paper identifies the most problematic and undesirable ones, ordering them into categories based on the most popular deceptive design typologies: Vocal Prompts (given during a conversation with the users); Visual Prompts (given while the VA is dormant on those devices equipped with a screen); Strategic Replies containing ‘personalised’ offers; and the Peer-Like Relationship established between the human and the machine, aiming at profiling the user for preferences and vulnerabilities. Based on this distinction, the paper analyses a selection of secondary EU law provisions, focusing in particular on the GDPR, UCPD, DSA, and the proposed AI Act.
The contribution of this paper is two-fold: on the one hand it identifies several uncertainties in the application of the abovementioned legislation to VA deceptive design, offering the chance to reflect on the systemic gaps existing in the regulation of deceptive design at European Union level.
On the other hand, by focusing specifically on virtual assistants, this paper shows how these very popular, yet still new, devices are changing the ways in which individuals experience the internet. The paper shows that the vocal interface used by VAs requires some adjustments in those provisions and rules designed with screens, monitors, or even paper in mind. This is particularly important to empower users and protect the (digital) rights of individuals, especially in the light of the diffusion of the Internet of Things and the smart home, and the subsequent blurring of the boundaries between the online and offline spheres.
17. Degalahal, Shweta Reddy (Tilburg University). Reconsidering Data Protection Framework for Use of Publicly Available Personal Data [8840]
Keywords: Privacy, Data protection, Digital public sphere, Clearview AI, Publicly available personal data
Abstract. In early 2020, news reports of Clearview AI developing a facial recognition tool that was trained using publicly available images of individuals from social networking sites started surfacing. Soon after, Clearview AI’s practices were challenged across the European Union and United States. The common thread across most of the orders for fines has been the reiteration of the need to provide privacy notice to individuals and the importance of transparency of processing operations of entities. Despite the vast theoretical and empirical literature on consent fatigue and the limitations of the transparency ideal, the obligation to provide a privacy notice and seek informed consent seems to be the primary mode of protection offered to publicly available personal data.
Prior to 2020, debates on privacy in public started gaining traction after the introduction of surveillance cameras and facial recognition on public streets. These debates focused on privacy-invasive measures using digital technologies that lead to constant surveillance in physical public spaces. However, such constant surveillance extends to digital spaces as well. The interconnected and interoperable nature of digital platforms, combined with the ability to create vastly detailed sensitive profiles of individuals through data aggregation techniques, makes defining what is public in the digital space complicated and worthy of further research. The fact that Clearview AI’s web scraping activities were challenged by data protection authorities only after news reports of their activities raises the question whether the current safeguards for publicly available personal data are adequate. The paper will examine this question in the EU and the US. These jurisdictions have been identified as in scope because they represent different approaches to privacy protection, i.e. privacy as control and privacy as social norms respectively. The scope has been narrowed down to Fourth Amendment jurisprudence from the US and the GDPR in the EU, as these largely reflect the attitudes of the regulator and legislator towards protections offered to publicly available personal data. The adequacy of these protections will be evaluated against the Contextual Integrity framework proposed by Helen Nissenbaum, to examine whether norms surrounding contextual disclosure have been adequately translated into the legal frameworks. Based on the US’s approach towards data protection and the contextual integrity framework, additional measures that could enhance said protection in the EU will be proposed.
18. Diker Vanberg, Aysem (Goldsmiths, University of London). Application of EU Competition Law to Artificial Intelligence and Chatbots: Is the Current Competition Regime Fit for Purpose?
Keywords: AI, chatbots, EU Competition Law, DMA, GDPR
Abstract. The development of machine learning, complex algorithms and advancements in big data processing have led to innovative applications of Artificial Intelligence (AI). For instance, ChatGPT, a chatbot released in November 2022, has captivated online users with its ability to answer a variety of complex questions in a logical and articulate way, albeit not always accurately. As technology advances and the cost of storing and analysing data gets lower, more companies are investing in machine learning to assist in pricing decisions, planning, trade, and logistics. With the advent of the Internet of Things (IoT), data about our daily activities, such as our consumption and transport habits, are increasingly collected and used/exploited by companies. These developments raise a plethora of challenging legal and non-legal questions with regard to the relationship between man and machine, control and lack of control over machines, and the accountability of these machines for their activities.
The use of AI is likely to give rise to a wide range of competition law issues. First, if a few market players, such as Google and Facebook, dominate the development and deployment of AI, these companies may leverage their existing market power to drive out competitors, leading to increased market concentration. Second, companies with a dominant market position may use AI to engage in predatory pricing to drive out competitors. Third, AI may be used for price fixing, which would reduce competition and harm consumers, as evidenced in Case 50223, in which the Competition and Markets Authority issued a penalty to online sellers of posters and frames who had used automated re-pricing software to implement a price-fixing agreement not to undercut each other’s prices on Amazon UK. Fourth, the development of intelligent chatbots and AI may require significant investment, which could create barriers for new entrants. Finally, companies may use AI to bundle or tie products and services to other products and services to create barriers to entry for new competitors.
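To make the price-fixing mechanism concrete, the following is a minimal, purely hypothetical sketch (not drawn from the paper or the CMA case file; the function name, margin and prices are illustrative assumptions) of how automated re-pricing software can encode a “do not undercut” agreement:

# Hypothetical sketch of a collusive automated re-pricing rule.
# A competitive repricer would typically undercut the rival's price;
# the variant below never prices beneath it, algorithmically
# implementing a "do not undercut" agreement.

def reprice(own_cost: float, rival_price: float, margin: float = 0.10) -> float:
    """Return the next listing price for a single product."""
    floor = round(own_cost * (1 + margin), 2)  # lowest price the seller accepts
    return max(floor, rival_price)             # collusive step: never undercut

if __name__ == "__main__":
    # Rival lists at 20.00: a competitive bot might post 19.99,
    # but this rule posts 20.00, keeping the sellers' prices aligned.
    print(reprice(own_cost=12.00, rival_price=20.00))  # -> 20.0

The point of the sketch is that the restriction of competition lives entirely in one line of pricing logic, which is what makes such conduct hard to detect from the outside.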
In this context, this paper concentrates on competition law issues that are likely to arise from the development of AI, with a particular focus on intelligent chatbots, and analyses whether EU competition law is fit to deal with the challenges posed by AI. The paper argues that EU competition law, combined with other legal instruments such as the DMA and the GDPR, is fit for purpose. Nevertheless, given the emerging nature of AI, the European Commission should lead the way in developing further guidance in this field in cooperation with other stakeholders such as data protection and consumer protection agencies and technology companies.
19. Dinev, Plamen (Lecturer, Goldsmiths, University of London). Consumer 3D Printing and Intellectual Property Law: Assessing the Impact of Decentralised Manufacturing
Keywords: Intellectual property law, 3D printing, Empirical, Socio-legal, Disruptive technology, Copyright, Patents, Trade marks
Abstract. As a technology which allows users to ‘convert’ informational content into tangible objects on a decentralised basis, 3D printing may call into question well-established intellectual property (IP) norms and policies. While the technology is not new in the strict sense, its consumer side is certainly novel and fascinating. Desktop 3D printing has the potential to democratise production, equipping users with manufacturing tools and allowing them to make various creative decisions. But it is also this particular aspect of the technology which is especially controversial from the perspective of IP law: the key issue here is not how things are done, but who does things. Unlike the file sharing issues experienced in the past, 3D printing goes beyond copyright and allows users to interfere with all major IP rights. As the lack of legal certainty in this area has already raised concerns among stakeholders, this paper draws on a combination of legal and empirical methods with the aim of contributing to evidence-based policymaking.
There are a range of important legal and policy questions raised in this context: what measures can be adopted to mitigate the risk of decentralised infringement, especially as 3D printing allows traditional methods of control to be circumvented? Does the technology further challenge the IP framework’s ability to maintain artificial scarcity in the digital age (in an environment where copying is the norm, not the exception) and how should the law respond? It is also unclear whether IP law provides adequate protection for 3D printing design files, considering that they may contain a diverse range of products protectable by different rights and some areas, such as patent law, have not yet experienced the full force of digitisation. What are the norms, practices and attitudes towards IP and licensing within the community? Is IP actually a concern?
To address these complex doctrinal and normative questions, the paper first looks at the wider socio-economic implications of the technology and its ‘disruptive’ potential, before considering the challenges it poses to IP theory and practice. It examines the novel legal issues that 3D printing raises in the context of UK and EU IP law, drawing on relevant provisions, case law, and taking into account key technological aspects which are commonly overlooked in the legal literature. The paper then presents the results of the author’s empirical research, offering one of the most comprehensive case studies on the topic to date. Through two streams of data collection involving 171 research subjects from the UK, EU and US (including industry representatives from some of the world’s leading 3D printing companies, engineers, lawyers, and end users), it aims to assess the urgency for reform, capture the experiences and views within the community, and gauge the extent to which IP is a concern. The conclusion is prescriptive in nature, offering specific solutions and recommendations.
Disclaimer: The empirical component of this study was completed as part of my PhD which was funded by the City Law School and the Modern Law Review
20. Esposito, Maria Samantha (Politecnico di Torino). Regulatory Perspectives for Health Data Processing: Opportunities and Challenges
Keywords: health data, data protection, fundamental rights
Abstract. The exploitation of the vast amount of health data available in Europe could represent a huge opportunity for healthcare delivery and innovation. The COVID-19 pandemic highlighted the value of effective access to and sharing of health data, underlining the importance of stronger coordination among European countries to protect people’s health better. To address this need, the EU legislator in 2020 laid the basis for a solid European Health Union, in order to improve EU-level protection, prevention, preparedness and response against health emergencies.
The European Health Data Space (EHDS) is the first proposal to address this need, supporting the digitalisation of health data and promoting their availability, access and sharing at the European level, in both the public interest and the interest of patients. At the same time, this Regulation has important interactions with other existing and forthcoming European data regulation initiatives (e.g. the Data Act, GDPR, Data Governance Act, Digital Markets Act and the Artificial Intelligence Act), as well as with various national health data-related policies.
Against this background, despite the EHDS recognising the importance of ensuring a clear framework as well as coherence and consistency between all data policies and regulations, several provisions in the current proposal are unclear or seem inconsistent with other legislative measures. This is the case, for example, with the definition of ‘data holder’ in the EHDS Proposal, whose interplay with the definition of ‘data holder’ provided in the Data Act and in the DGA is unclear. Similarly, issues arise from the definition of ‘data user’ in the Proposal and its relationship with the definition of ‘data recipient’ in the same Proposal, as well as with the definition of ‘recipient’ in the GDPR and the notion of ‘data user’ in the DGA.
As a result, this multi-layered collection of provisions in the field of health data leads to legal uncertainty and negative outcomes, both for patients and for regulators and businesses. The former will be inclined not to share their data and the latter will be forced to bear significant enforcement costs stemming from various policies.
This paper will discuss the critical issues emerging from the current European Health Data Space proposal, stressing the need for greater clarity in the definitions and rules laid down in the Regulation and in the interplay between its provisions and other data-related initiatives. Finally, the paper provides some suggestions to address this complex regulatory scenario in the field of health data processing, to ensure the effective use of data and, at the same time, protect human rights and freedoms.
21. Faturoti, Bukola (University of Hertfordshire) and Osikalu, Ayomide (Ayomide Osikalu & Co, Lagos, Nigeria). When Bitterness Mixes with Romance: The Weaponisation of Pornography in Africa
Keywords: revenge pornography, cybercrime, Africa, blackmail, sextortion
Abstract. The steady inroad of digital technology in Africa has orchestrated a cultural shift in the creation and consumption of creative content, thanks to the proliferation of mobile phones and the penetration of internet services. The average African netizen has developed a penchant for recording social, political, and cultural activities. This attitude has now extended to creating homemade or amateur pornography videos. The existence of such a video is typically known only to the participants until the relationship goes sour, or until it is inadvertently leaked. In the last decade, Africa has witnessed a rise in incidents of revenge pornography. Although it is common among celebrities and public figures, the victims are not limited to any societal status. Revenge pornography is an emerging variant of cybercrime in Africa. Despite the outcries that usually accompany its release, it is less theorised, and the law surrounding it is underdeveloped compared to other cybercrimes, such as financial crime. The research investigates the state of revenge pornography under the law of selected African countries. It explains why there is a paucity of case law despite the growing incidence. It argues that the genderisation of the crime may also leave some members of society without protection.
22. Flaherty, Ruth (University of Suffolk). ChatGPT: Can a Chatbot be Creative?
Keywords: artificial intelligence, creativity, copyright, machine learning, text and data mining, copyright infringement
Abstract. The way copyright applies to derivative reuses of creative works by humans, such as fanfiction, is well known, having been updated recently by the CDSM Directive and case law such as Shazam Productions. However, what is less well known is how this applies to similar works generated by artificial intelligence. Can a machine learn how to be ‘creative’ enough to attract copyright protection? Furthermore, ‘style’ is not a protectable characteristic in copyright due to the idea/expression dichotomy, so does this mean derivative works written by bots should be permissible? This presentation will use a sample of AI-generated fanfiction from ChatGPT as a case study to analyse the ways current copyright law and AI laws apply to machine learning outputs. Text and data mining legislation and laws relating to how the AI ‘learns’ from its material will be analysed to explore whether there is any harm created by this form of reuse, and if so, who suffers the harm and who is responsible for it. This will add to the literature surrounding the use of artificial intelligence.
23. Fras, Kat (Vrije Universiteit). Article 22 of the GDPR: In Force Yet Redundant? The Relevance of Article 22 in the Context of Tax Administrations and the Automated Decision Making
Keywords: GDPR, tax, automated decisions
Abstract. Since the adoption of the GDPR, the process of automated decision-making has been framed within the legal boundaries of Article 22 of the GDPR. By 2023, the automation of tasks within tax authorities across the EU has become the new normal. More precisely, a plethora of research demonstrates that up to 90% of decisions at tax administrations are made in an automated manner. Each such decision has a certain influence on the position of the taxpayer, with varying legal effects.
The GDPR as a legal act is applicable to the workings of public administrations, including tax authorities. So far, there have been many cases at the national as well as EU level where taxpayers invoke the GDPR in the context of legal proceedings regarding the lawful processing of personal data, data transfers, and other matters under Art. 5 and Art. 15 of the GDPR. These articles offer a certain degree of de iure protection to taxpayers.
In contrast to these articles stands Art. 22, which offers legal safeguards for taxpayers in regard to automated decisions. In practice, however, many of the Member States have applied derogations to this article, including the Netherlands and Poland. According to the Dutch derogation of Art. 22: “Article 22(1) shall not apply if the automated individual decision-making, (…) necessary to comply with a legal obligation imposed on the controller is necessary for the performance of a task carried out in the public interest. 2 (…) the controller appropriate measures for protection of the rights and freedoms and legitimate interests of the data subject.” According to the Polish derogation: “The processing of data may take place in an automated manner, which may involve automated decision-making (…) This applies to the following cases: assessing the risk of violation of the law, where this assessment is made on the basis of the data declared in the submitted documents, based on established criteria, assessing the risk of violation of the law, where this assessment is made on the basis of data obtained from publicly available registers and social networking sites, based on established criteria. In the above cases, is automatic classification to the risk group, where qualification to the group of unacceptable risk may result in a change of relationship and taking additional actions provided for by law.”
In both derogations, I identify certain issues that I intend to investigate in this paper. The main question of this paper is whether Art. 22 of the GDPR offers any substantial (de facto) legal protection to taxpayers in light of the derogations applied by Member States.
24. Fteiha, Bashar (University of Groningen). The Regulation of Cybersecurity of Autonomous Vehicles from a Law and Economics Perspective
Keywords: Cybersecurity of Autonomous Vehicles, Cyberattacks, Law and Economics, Incentives Regulation
Abstract. While Autonomous Vehicles (AVs) promise to generate considerable benefits for society, they are still fraught with many cybersecurity risks which make their introduction onto European roads far from guaranteed. More specifically, the fact that AVs operate using highly sophisticated computerised systems and software, and rely on connectivity and network communications, makes them highly vulnerable to cyberattacks. In particular, it appears disconcerting that deeply rooted defects in AV software or network systems could be exploited by hackers with malicious intentions. Additionally, the gravity of the current situation is further exacerbated by the absence of a tailored legal framework regulating the cybersecurity of AVs. Central to this view is the fact that the successful uptake of AVs relies significantly on the introduction of a legal framework addressing the cybersecurity risks that come with them. Therefore, the key question that arises in this context is how this legal framework should be designed and formulated at the European level. In answering this question, a Law and Economics approach, more specifically the theory of optimal enforcement, could serve as a useful tool in developing a regulatory framework for the cybersecurity of AVs. Law and Economics views legal rules as a system of incentives guiding future actions to achieve a given purpose. This is particularly relevant in the context of the cybersecurity of AVs, because AVs are vulnerable to cyberattacks owing to the absence of security-enhancing incentives that would motivate the responsible actors to sufficiently protect their vehicles against cyberattacks. The problem of AV cybersecurity is therefore not only a question of technology, but one where the human factor is equally important, because security-enhancing incentives are lacking in the automotive industry. Notably, the main stakeholders involved in the design and development of AVs, including manufacturers, component makers and suppliers, have different roles and incentives with respect to the cybersecurity of AVs. Hence, this contribution employs the theory of optimal enforcement to examine which legal instruments (private enforcement through liability, public enforcement through safety regulation, or perhaps a smart mix) will be more suitable for providing the necessary security-enhancing incentives that gear the actions of the main stakeholders towards ensuring that AVs remain secure and resilient in the face of cyberattacks. Accordingly, the primary focus of this contribution is to examine how an incentives-based regulatory framework can be structured with the aim of guiding the actions of the key stakeholders towards enhancing the security of such vehicles.
25. Gordon, Faith (Australian National University). Rights of Children in the Criminal Justice System in the Digital Age: Insights for Legal and Judicial Education and Training
Keywords: Children, Digital, Education
Abstract. The impact of the digital age on the justice system, in particular of social media platforms and new technologies, is a significant concern, yet it is under-researched in the context of children, the principle of ‘open justice’ and the work of Children’s Courts in criminal matters, as well as in university and practice training for lawyers. In addressing these contemporary concerns, this scoping study will explore the existing tensions between children’s rights, the open court environment, the principle of ‘open justice’, and digital technology, from the perspectives of professionals and key stakeholders. This presentation is linked to a project funded by the AIJA; it identifies how the work of the Children’s Courts is portrayed and what the Courts, judicial officers and key stakeholders identify as the opportunities and challenges that emerging digital technologies and social media platforms present for the work of the Children’s Courts and the principle of ‘open justice’. The core themes of exploration are therefore: representations of children in contact with the criminal justice system; representations of the work of the Children’s Court related to criminal proceedings; and the opportunities and challenges of the digital age for the Children’s Court. The paper will present some key insights and implications for legal and judicial education and training in the digital age.
26. Griffin, James (University of Exeter). The Challenge of Quantum Computing and Copyright Law: Not What You Would Expect
Keywords: Quantum, Copyright, Balance, Reform
Abstract. Quantum computing poses a challenge to existing copyright law, but not in the ways to which we are accustomed. Successive iterations of technology since the inception of copyright law have seen an increase in the ability to make easy copies. Quantum computing, however, potentially turns that on its head, making it more difficult to make copies.
Quantum computing was initially founded as a means by which to answer complex questions of quantum mechanics. Like quantum mechanics, its very basis is one of uncertainty. A quantum computer can, for certain classes of problem, work exponentially faster than a digital computer. This is because definite binary states (such as 01010101) are replaced with probability amplitudes, allowing for faster number crunching and faster execution of code.
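For reference, the standard textbook formulation of this point (an editorial gloss, not part of the abstract itself): a qubit replaces the classical bit’s definite 0 or 1 with a superposition,
\[
\lvert\psi\rangle \;=\; \alpha\,\lvert 0\rangle \;+\; \beta\,\lvert 1\rangle,
\qquad
\lvert\alpha\rvert^{2} + \lvert\beta\rvert^{2} \;=\; 1,
\]
where measurement yields 0 with probability \(\lvert\alpha\rvert^{2}\) and 1 with probability \(\lvert\beta\rvert^{2}\). The uniqueness claim in the next paragraph echoes the no-cloning theorem: no unitary \(U\) satisfies \(U(\lvert\psi\rangle \otimes \lvert 0\rangle) = \lvert\psi\rangle \otimes \lvert\psi\rangle\) for every state \(\lvert\psi\rangle\), so arbitrary quantum states cannot be perfectly duplicated.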
Quantum data is unique, due to the infinite possibilities inherent in quantum probabilities. This uniqueness means that, whilst it is possible to make copies that appear exact, it will be possible to establish within the code itself whether something is a copy. Indeed, this author submits that there are two likely outcomes from quantum computing:
1) An enhancement of existing proprietary copyright boundaries
2) An increase in the tracking and tracing of content
For (1), this occurs because every ‘copy’ will contain unique differences, meaning it will be far easier to detect unauthorised copies. For (2), an increase in tracking and tracing will occur due to the technology itself, but also because of the specific legal protection that currently exists under s.296 CDPA 1988 (and other similar provisions in other jurisdictions stemming from Art 12 WIPO Copyright Treaty 1996).
In summary, quantum computing will enhance rather than decrease the enforcement of copyright law. This is in contrast to digital technology, which required legal amendments. Given that digital technology is looking increasingly obsolete due to difficulties in small-scale manufacture, digital technology could even be described as a ‘false turn’ in our understanding of the propensity of technology to encourage easy reproduction of copyright works. This is even more so considering other upcoming technologies, such as biological compute chips (using synthetic neurons) and photonic computing. These are also likely to share the same characteristics as quantum computing when it comes to copyright law.
The argument of the paper is that regulators should be extremely wary of any attempts to extend or enhance current copyright regulation over newer non-digital technologies. If anything, attention should be paid to the question of whether existing laws might combine with these newer technologies in ways that might undermine the copyright balance, by considerably strengthening the positions of existing right holders.
27. Guan, Taorui (The University of Hong Kong). Intellectual Property Legislation Holism in China
Keywords: intellectual property legislation, China, innovation policies, pluralism, holism
Abstract. China’s intellectual property system has long been of interest to Western scholars. However, little research has analyzed it systematically. Existing studies mainly concentrate on whether the system provides property rights to intellectual products and whether it protects these rights effectively. While few would deny the importance of intellectual property protection, limiting their attention to this aspect has kept scholars from developing a broader understanding of China’s intellectual property system. Since the Chinese government adopted the National Intellectual Property Strategy Outline in 2008, it has taken a holistic approach to building the system, meaning that its legislation has come to focus not only on the provision of property-right protection to intellectual products, but also on the creation of intellectual products, the implementation of intellectual products and their property rights, the management of both, and the supply of intellectual property-related services.
To provide a more comprehensive view of China’s intellectual property system, this Article reviews the legislative history of China’s intellectual property law and presents the reasons for the Chinese government’s adoption of this holistic approach to legislation. It also analyzes this approach through a systematic survey of intellectual property laws that the Chinese government has enacted, at both the central and local levels. It demonstrates that the holistic approach to intellectual property legislation is a manifestation of the Chinese government’s deliberate adoption of pluralistic innovation policies with the goal of systematically enhancing its innovation capacity. While the effectiveness of this approach remains to be seen, it highlights issues that are also relevant to policymakers in developing countries. The Article also describes the role that the Chinese government plays in innovation through its intellectual property system. While this state-driven model of innovation is able to concentrate resources in critical technological areas, it is subject to challenges related to decision-making and rent-seeking.
28. Guillén, Andrea (Institute of Law and Technology, Faculty of Law, Autonomous University of Barcelona). Automated Decision-Making under the GDPR: Towards the Collective Dimension of Data Protection
Keywords: Data protection, Automated decision-making systems, Profiling, Algorithmically-determined groups, Collective harms
Abstract. Automated decision-making systems are used both in the public and private sector to make decisions about individuals in multiple areas with legal, economic, and societal impact, including education, social benefits, criminal justice, employment and finance. The growing deployment of such systems has been facilitated by the increasing ability to collect and process vast amounts of personal data. Hence, the General Data Protection Regulation (GDPR) has been considered a useful tool to deal with algorithmic harms arising from automated decision-making systems. The GDPR is particularly relevant to the regulation of these systems for Article 22 specifically addresses “Automated individual decision-making, including profiling”.
The literal text of this provision illustrates how data protection laws are commonly based on an individualistic paradigm. Yet the way automated decision-making systems operate challenges the foundations of individual data protection rights and favours a collective approach. Automated decision-making systems use profiling techniques to analyse clusters of people with allegedly shared properties, rather than individual behaviour, and to make decisions on that basis. The nature of such algorithmically-determined groups renders individual rights largely ineffective.
Members of these ad hoc groups do not know (i) that they are part of the group; (ii) who else is in the same group and, consequently, cannot interact with other members; (iii) what other groups exist and how they are treated in comparison to those other groups; and, (iv) the consequences that belonging to that group has on their chances in life.
Such lack of awareness calls for an additional layer of protection at a collective level to overcome the limits of individual rights set out in data protection laws. In particular, it raises the question: does the GDPR address the collective dimension of data protection?
This article contends that, although data subjects’ rights related to automated decision-making are insufficient to provide collective protection, other GDPR provisions – representative bodies, data protection impact assessments and certification schemes – could prove useful. While these provisions appear to be the most promising for safeguarding collective interests, they are flawed. Hence, proposals to address their shortcomings are also provided.
The role of representative bodies in safeguarding collective interests could be strengthened if a prior mandate from data subjects were not required. Civil society bodies would significantly benefit from access rights, which could be introduced through the non-exhaustive list of rights under Article 22(3).
DPIAs could serve as a robust mechanism to reinforce the collective dimension of data protection. Meaningful stakeholder consultation and disclosure of genuine, relevant information ought to be strongly encouraged. Lastly, certification schemes provide certification bodies with holistic access to the system, and these bodies could thus become effective watchdogs in specific fields.
The GDPR does provide the building blocks of the collective dimension of data protection. However, its drawbacks need to be addressed to significantly enhance protection at the collective level. Future research should focus on how this collective dimension could benefit from other fields, such as non-discrimination and consumer law, and how these could aid in filling the gaps of the GDPR at the collective level.
29. Gulczynska, Zuzanna (Ghent University). Processing of Personal Data by International Organizations and the Governance of Privacy in the Digital Age
Keywords: processing of data by international organizations, data protection, right to privacy, international governance of privacy, extraterritoriality of EU law, GDPR
Abstract. The recent reform of the EU’s data protection framework has attracted global attention due to the broad scope of application of the General Data Protection Regulation (GDPR), as well as its strict data transfer rules. These two features have resulted in the de facto imposition of European rules on entities outside the EU’s jurisdiction, exporting its standards globally, which, in turn, has triggered many questions about both the legitimacy of the “Brussels effect” and, more generally, effective governance of privacy in a borderless digital context.
A related topic that has received far less attention is that the GDPR has, for the first time, expanded the scope of data transfer provisions to include transfers to international organizations (IOs). While this can be justified by concerns about the comprehensiveness of the protection provided to data leaving the EU, it brings the “extraterritoriality of EU law” to another level, creating potential conflicts not only with other national laws, but also with international law. Concerns about the new rules have been expressed by the United Nations (UN), as IOs – including those from the UN family – have been regularly asked to comply with EU data protection standards since the GDPR came into force, and on multiple occasions have been faced with refusals to share data from EU actors.
This phenomenon raises new issues in the global discussion of informational privacy and its place in international law governing IOs, inter-States relations and regulatory approaches to the digital environment.
The issue of data processing by IOs in the broader (legal) context in which they operate has received limited scholarly attention, and when the topic is addressed, it is often addressed piecemeal. Indeed, the existing literature on the subject takes either the viewpoint of IOs and their need to process data to fulfill their mandate, or the perspective of municipal (typically EU) law, discussing the legitimacy (or lack thereof) of EU applicability claims in light of the EU constitutional framework, particularly fundamental rights. What is missing, however, is a combined perspective of both international and national legal orders and their assessment against the broader backdrop of digital governance and informational privacy.
The Article aims to fill this gap. It combines both the perspective of international law and that of municipal law on the processing of data by IOs. What balance should be struck between the functional autonomy of IOs and the (indirect) imposition by States of their national standards in this regard? Can such an approach be justified under international law by the objective of advancing the fundamental right to privacy? Or should this approach be considered unlawful given the international community’s lack of agreement on the content of the right to informational privacy? The Article discusses these issues against the broader background of the search for suitable solutions to regulate the borderless digital sphere.
30. Gupta, Indranath (O.P. Jindal Global University, India) and Naithani, Paarth (O.P. Jindal Global University, India). Recent Trends in Data Protection Legislation in India: Mapping the Divergences with a Possible Way Forward
Keywords: Data Protection, GDPR, Digital Personal Data Protection Bill 2022, India, EU
Abstract. India does not yet have comprehensive data protection legislation that would cater to the various issues relating to the processing of personal data. The release of the Digital Personal Data Protection Bill in 2022 is the newest endeavour in the process of making the first data protection legislation in India; the Bill is open for public consultation. Data protection is not an unheard-of concept in India, having a background of more than a decade. In the last decade or so, several attempts have been made to provide answers to the ever-increasing questions relating to data protection measures assuring privacy to individuals. The first data protection intervention in India happened as early as 2008, through amendments to the Information Technology Act 2000. The amendments introduced Sections 72-A (punishment for disclosure of information in breach of lawful contract) and 43-A (compensation for failure to protect data). After that, there were several attempts, including but not limited to the passage of the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (SPDI Rules). In 2017, the Supreme Court of India recognised the right to privacy as a fundamental right under the Constitution of India in the landmark case of KS Puttaswamy vs Union of India, and recognised the need for comprehensive data protection legislation in India. 2018 was a landmark year with the publication of the report entitled “A Free and Fair Digital Economy: Protecting Privacy, Empowering Indians”, released by the Committee of Experts under the chairmanship of Justice B.N. Srikrishna. Other efforts include the Personal Data Protection Bill, 2019, introduced in Parliament (a revised version of the PDP Bill, 2018) and the Report of the Joint Committee on the Personal Data Protection Bill, 2019 (JPC Report). The JPC also released the Data Protection Bill, 2021.
Therefore, the recent attempt in 2022 must be understood in the context of all those previous attempts. This paper maps the divergences in data protection endeavours over the years in India and proposes a way forward, comparing the emerging framework with the existing data protection framework under the EU GDPR.
31. Harbinja, Edina (Aston University). Regulatory Divergence: The Effects of UK Technology Law Reforms on Data Protection and International Data Transfers
Keywords: digital regulation, data protection adequacy, cross-border data protection, law reform, UK technology law and regulation
Abstract. The paper examines examples of ongoing law reforms in technology law in the UK and the EU, discussing their effects on data protection and commercial data transfers.
This paper examines aspects of this ‘digital law reform package’ and analyses the effects of a specific technology law reform on another area of law reform in the digital sphere. In particular, I question the effects of online safety law reform (the Online Safety Bill) and AI regulation on data protection in the UK and on commercial data transfers between the UK and the EU. The focus will be on the most significant proposals in these reforms that may compromise or otherwise impact the UK data protection regime, its adequacy and its relationship with the EU data protection regime. I challenge these proposals on the basis of their divergence and inconsistency, and conclude that technology (and other) law reform needs to be approached holistically, or we risk unintended and adverse consequences, not only for the enforceability of these proposals but also for the UK’s data protection adequacy and, notably, the data protection and privacy rights of individuals in the UK.
The paper endorses Armstrong’s regulatory alignment/divergence model as a framework. He submits that regulatory alignment/divergence in any given policy area will be a ‘function of the operation and interaction of different modes of governance: hierarchy, competition, co-ordination, networks and community.’
The paper approaches the questions of relevant law reforms comparatively, looking at the proposals in context and action, and assessing their consequences and effects on individuals and society.
32. Harbinja, Edina (Aston University); Edwards, Lilian (Newcastle University) and McVey, Marisa (Queen’s University Belfast). Post-Mortem Privacy and Digital Legacy – A Qualitative Empirical Enquiry
Keywords: post-mortem privacy, digital legacy, digital remains, empirical research, interviews
Abstract. In this qualitative study, we seek to better understand empirically the commercial, technical and legal challenges that arise when considering post-mortem privacy rights in the digital era.
To this end, we interviewed a sample of nineteen legal professionals, civil society actors and regulators, primarily based in the UK. This has enabled us to gather data and in-depth insights into post-mortem privacy challenges and to understand key stakeholders’ perspectives in this area. Emergent themes in our study include: awareness, reform, solutions, platforms/tech/contracts and the limitations of current practice.
Notably, we discover the significant impact of the Covid-19 pandemic on stakeholder interest, practice and the development of new business models in this area. Regarding the legal profession, we find that practitioners are claiming this new area of expertise and attempting to conceptualise an area where there is little law and policy. Moreover, we note the importance of understanding the legal and financial risks that practitioners face due to the uncertainties surrounding the law of digital remains. Our key findings highlight the need for law reform, raising user awareness and improving technological solutions.
We will use taxonomies and findings from the qualitative study to inform a quantitative baseline assessment of whether individuals in the UK understand what happens to their personal data after death, whether they are concerned about developments in this area, and whether current data management tools (such as Facebook’s Legacy Contact or Google’s Inactive Account Manager) are helpful.
33. Hariharan, Jeevan (Queen Mary University of London) and Noorda, Hadassa (University of Amsterdam). Imprisoned at Work: The Impact of Employee Monitoring on Physical Privacy and Individual Liberty
Keywords: Employee monitoring, Privacy, Surveillance, Tracking, Human rights and technology
Abstract. In recent years, rapid advances in technology have meant that employee surveillance occurs in new and sophisticated ways. Employees being monitored at work is hardly a new phenomenon. But there are now a wide range of tools available to track worker activity, boasting features like keystroke logging, website monitoring, and video surveillance. The use of such software has become increasingly prevalent, particularly as a result of the pandemic with the surge of people working from home.
This paper understands employee monitoring in terms of the impact it has on the employees’ individual liberty and physical privacy. In particular, it addresses how monitoring affects the physical and spatial components of privacy and connects monitoring with imprisonment.
From a legal perspective, the permissibility of employee monitoring is complex. Under UK and European law, the issue is most commonly analysed through the lens of data protection legislation or compatibility with the right to private life enshrined in Article 8 of the ECHR. However, when one delves into this jurisprudence and academic commentary, it can be observed that monitoring is almost exclusively understood in terms of the impact it has on the employees’ informational privacy. On this logic, the concern with workplace surveillance is that it implicates the employee’s control over information and data. And if informational privacy risks can be appropriately managed, then monitoring can be justified.
Our paper pushes against this dominant narrative. While an individual’s informational privacy is an important part of what is at stake, we argue that there are two reasons why employee monitoring is concerning which go beyond the use of information and data. First, drawing on the broad theoretical literature on privacy, we point out that informational privacy is typically only understood as one aspect of individual privacy. Privacy is generally conceptualised as embracing distinct physical or spatial components as well, which are highly relevant in the monitoring context.
Second, we make the novel claim that, in extreme situations, monitoring can be so intense that it constitutes a form of imprisonment. Imprisonment is the restraint of a person’s liberty, typically in a prison institution with walls and locks. However, as English courts have recently recognised in the context of the false imprisonment tort, physical barriers are not necessary for imprisonment to occur. We argue that some of the monitoring activities reported recently are so constraining of individual liberty as to clearly meet this threshold.
This revised approach to the wrong of employee monitoring has significant legal implications. For one, it brings into sharp focus some of the ways in which the law has developed, and needs to develop further, in order to protect our physical privacy comprehensively. And importantly, it also means that firms have to recalibrate their assessment and justification of monitoring in light of the recognition that workplace surveillance could constitute a form of imprisonment. In particular, an employer could potentially be liable in false imprisonment for extreme monitoring, which, in turn, reframes the legal risks of conducting tracking activities.
34. Higson-Bliss, Laura (Keele University). ‘Will Someone not Think of the Children?’ The Protectionist State and Regulating the ‘Harms’ of the Online World for Young People
Keywords: Legal education, Internet regulation and governance, Communications law and regulation, Young people and the online world
Abstract. Since 2018, following a green paper exploring the regulation of the online world (HM Government, 2018), the Conservative Government in the United Kingdom has continued the rhetoric that it wishes to make the UK one of the safest places in the world to go online. To do this, following several white papers and draft bills, the UK government has introduced the Online Safety Bill before Parliament. And despite the emphasis originally being on the regulation of online companies, in particular social media platforms such as Facebook and Twitter, much of the recent discussion has centred on the protection of children.
The online world has become the focus of a modern ‘moral panic’, with parents now more worried about their children being online than about smoking or drinking (PSHE Association, 2016). The ‘harms’ to young people from the online world are well documented, with barely a week going by without stories emerging in the press of the dangers of the online world for children (see, for example: Acres, 2023). In turn, the State takes a protectionist approach, as we have seen with the Online Safety Bill: we will regulate or criminalise such behaviours to protect young people. However, what is often missing from these discussions are the voices of the young people we are trying to protect, alongside the positive sides of the online world. Instead, we as adults seem to decide what we believe is harmful to young people and then prohibit such behaviour.
This paper will explore two growing areas of ‘harms’ associated with the online world and young people: (1) sexting and (2) discussions around mental health. It will outline the concerns we as adults and the State have about these behaviours, before turning to examine how young people view the online world. This paper will reject the traditional protectionist stance. It will argue that a protectionist approach to combatting the ‘harms’ of the internet will not tackle the underlying causes of such behaviours – social norms and the lack of adequate legal, technological, and pastoral education. It will suggest that instead of young people viewing such content on ‘regulated’ sites, such as Facebook and Twitter, echo chambers will be created on smaller ‘unregulated’ sites, which in the long run will do more harm. The paper will conclude by emphasising the importance of centralising the voices of young people in developing legal, policy and educational responses to online harms, and will provide the basis for a future grant application.
35. Hoekstra, Johanna (University of Edinburgh). Online Dispute Resolution and Access to Justice for Business & Human Rights Issues
Keywords: Business & Human Rights, Arbitration, Access to Justice, AI, Online Dispute Resolution
Abstract. The Covid pandemic and advancing technology have increased the use of Online Dispute Resolution (ODR) through, for instance, the use of AI to automate parts of the process, which can make dispute resolution faster and more efficient.
This, of course, raises significant questions on different issues, including access to justice. ODR can lower some barriers to access to justice, such as the parties to the dispute not needing to travel. At the same time, it requires the parties to have access to good technology and to be proficient in its use. This is especially important for non-commercial parties.
This paper analyses how ODR affects access to justice in relation to business & human rights arbitration, with a focus on victims of corporate human rights abuses. For such victims it can be difficult to access justice because of the power imbalance between the victims and the corporation. Right holders (whether groups or individuals) often lack means. Furthermore, the law itself often forms a barrier to holding corporations accountable. Arbitration and alternative dispute resolution are promoted as an alternative avenue for right holders to obtain justice. This, however, also raises questions regarding feasibility and access to justice.
The first part explains the role arbitration and dispute resolution play with regard to business & human rights issues. The second part of the paper explores ODR and the use of AI in dispute resolution. The third part examines the issues and opportunities this represents for business & human rights arbitration in relation to access to justice.
36. Hof, Jessica (University of Groningen) and Oden, Petra (Hanze University of Applied Sciences Groningen). Breaches of Data Protection by Design in the Dutch Healthcare Sector: Does Enforcement Improve eHealth?
Keywords: Enforcement, Dutch supervisory authority, Data breaches, Data protection by design, eHealth, Dutch healthcare sector
Abstract. The Dutch healthcare sector processes highly sensitive personal data, including health data. If handled carelessly, this can have a major impact on the fundamental rights and freedoms of natural persons. To provide a consistent level of protection, the General Data Protection Regulation (GDPR) contains obligations for controllers/processors, including data protection by design (prevention), and requires consistent supervision of this by the national supervisory authority (correction).
This paper shows that the Dutch supervisory authority's (AP, Autoriteit Persoonsgegevens) enforcement of data protection by design in the healthcare sector is currently insufficient to improve data protection in eHealth. It does so by examining data breach notifications and privacy complaints received by the AP since 2018. The paper shows that, in spite of the high number of breach notifications and privacy complaints received concerning the healthcare sector, the AP has pursued enforcement in only seven cases. A closer look at these cases also reveals that near-identical infringements did not attract the same corrective measures.
This paper argues that the lack of consistent supervision by the AP has direct consequences for the fundamental rights and freedoms of natural persons: without consistent supervision, there is no incentive to be GDPR-compliant and to make eHealth compliant with data protection by design. It argues further that, given that cases tend to involve the same type of infringements, it would make sense for the AP to focus more on prevention, by providing information and advice promoting data protection by design. This will ensure data protection from the start when developing eHealth and prevent data breach notifications and privacy complaints in the future.
37. Holmes, Allison (University of Kent). Becoming ‘Known’: Digital Data Extraction in the Investigation of Offences and its Impact on Victims
Keywords: Surveillance, Privacy, Data extraction, Victims
Abstract. Digital evidence is a key element in the investigative process and its acquisition can be critical to a successful prosecution. While access to an alleged offender's data falls within the remit of investigative material, there is increasingly a demand to subject victims to intrusive digital examinations. Within the United Kingdom, these demands have been placed on a legislative footing with the passage of the Police, Crime, Sentencing and Courts Act 2022, which provides for the examination of 'electronic devices', a term which lacks substantive definition within the Act. As such, this provision has the potential to encompass not only traditional communicative devices such as mobile phones and computers, but also further instruments such as Internet of Things devices, thereby greatly increasing the ways in which information about victims can become 'known'. This paper interrogates the types of data which can be derived through these provisions and the connections they can reveal, through an examination of the devices and the terms and conditions of their use. Such measures subject victims to enhanced scrutiny, reinforcing power disparities between the victim and the state. Using the case study of the policing of sexual offences in England and Wales, this paper examines the lived experiences of victims of offences who have been subjected to these 'digital strip searches' and the impact on the ability of these individuals to access justice. It is argued that the requirement for victims to consent to intrusions into their privacy, making access to justice contingent on individuals' willingness to subject their lives to intrusive surveillance practices, represents a fundamental barrier to participation in the justice system.
38. Jondet, Nicolas (Edinburgh Law School). The Proposed Broadening of the UK’s Copyright Exception for Text and Data Mining: A Predictable, Promising and Pacesetting Endeavour
Keywords: Copyright, Copyright exceptions, Text and data mining, Text and data analysis, Artificial Intelligence, Big Data, Brexit, CDPA, Directive on Copyright in the Digital Single Market 2019/790
Abstract. The UK is looking into reforming its copyright law to expand the exception for text and data mining (TDM). The UK TDM exception, introduced in 2014, was the first of its kind in Europe and implemented policies to promote research and innovation, particularly in the fields of life science and Artificial Intelligence. The new frontier of research is defined by the analysis of vast quantities of copyright-protected works such as academic papers, books, music or TV broadcasts. Prior to being analysed, the protected works need to be copied onto the users' computers, which, in the absence of agreement from the copyright owners, infringes copyright law. In the past decade or so, many countries have introduced new exceptions in their copyright law to allow for TDM even without the agreement of rightholders.
The UK was a trailblazer in Europe by adopting its TDM exception, arguably breaching the EU copyright rules of the time in doing so. This innovation shaped the debate on copyright reform in other European countries and at EU level. Eventually, the EU adopted its own regime for TDM exceptions in 2019. However, this EU regime, though a step in the right direction, was felt by many to be still too complex and restrictive, particularly when compared to the position in the US.
This paper will argue that it was predictable that the UK, now that it has exited the EU, would revisit its TDM exception, especially as the policy objectives of promoting research and innovation, highlighted more than a decade ago, are now at the centre of the government's economic strategy. We will also argue that the proposed changes are promising, as they will generalise and simplify the use of the exception whilst still providing sufficient guarantees to protect the interests of rightholders. Lastly, we will argue that the UK's position is likely, once again, to force a rethink of EU copyright rules on TDM exceptions and to be a marker for any discussion of changes to international copyright norms.
39. Joshi, Divij (University College London). Governing ‘Public’ Digital Infrastructures
Keywords: it governance, architecture is politics, embedded norms
Abstract. Socio-legal scholarship on technology has long drawn attention to the embedded politics of artefacts, and to their role as sites for the explicit articulation of norms and values. Governments are also increasingly recognising that particular technological and organisational configurations of information systems offer affordances for embedding norms and values in the production of particular kinds of social order. In particular, platform-based architectures – which are hierarchical, scalable and configurable or programmable – offer the opportunity to enact ‘governance-by-design’: to explicitly embed values and norms towards the fulfilment of particular regulatory agendas or to realise other values.
Governments around the world are increasingly attempting to mobilise information infrastructures as mechanisms for governance-by-design, which can enact particular normative values or structures of governance. Among these, the Government of India is attempting to develop and deploy ‘public digital infrastructure’ – consisting of communication protocols, data science infrastructure and platform-based information systems – at a wide scale. Already, the ‘stack’ has come to incorporate India’s controversial biometric digital identity system(s) known as ‘Aadhaar’, the open banking APIs and payment systems known as UPI, a personal data governance scheme called the ‘Data Empowerment and Protection Architecture’, and most recently, the National Digital Health Mission.
This paper will study the implications of ‘public digital infrastructures’ for the law and governance of information systems. Do public digital infrastructures offer a more democratic alternative for information governance, and under what conditions? How is this distinct from current processes for creating and governing infrastructure with reference to distinct normative frameworks (such as security, human rights, sovereign law, ‘freedom’)? What regulatory and governance paradigms does the ‘public’ nature of these infrastructures invoke (e.g. public law duties, local democratic forums)? What are the possibilities and limitations of current approaches to information governance in ensuring that important values like privacy, dignity and equality are protected in their creation?
40. Kalsi, Monique (University of Groningen). Understanding the Scope of Data Controllers’ Responsibility to Implement Data Protection by Design and by Default Obligations
Keywords: Data Protection by Design and by Default, Privacy by Design, Responsibility of Data Controllers, Digital Value Chains
Abstract. Since its introduction in 1995, Privacy by Design (PbD) has been widely recognized as an essential component of fundamental privacy protection. However, PbD has remained a voluntary compliance initiative without any means to ensure its effective implementation. Article 25 of the General Data Protection Regulation (GDPR) codifies the PbD approach as a legal obligation under which all technologies processing personal data are required to follow Data Protection by Design and by Default (DPbDD). However, the obligations arising under this Article are binding only on data controllers, which considerably limits the scope of the legal obligations. For instance, the design and manufacturing stage of technologies may not coincide with the stage at which the data controller enters the digital value chain. This implies that the burden of implementing DPbDD falls essentially on the users of technology, and not on its designers. This raises the question of the extent to which we can speak of protection by design if stages like product development and innovation are excluded.
In this work, we assess the key motivations behind the legislative choices with regard to the personal scope of Article 25. Using a holistic interpretation of Article 25 in light of other provisions of the GDPR, we discuss whether the DPbDD approach is more restrictive than the original PbD approach. We further argue that other provisions of the GDPR allow for the possibility, albeit indirect, to influence the design phase of technologies. However, we find that it remains unclear whether this possibility ensures a co-division of responsibility between controllers and other actors involved in the digital value chain. We propose to resolve this lack of clarity by looking at the field of corporate supply chain due diligence, particularly the due diligence obligations and responsibility of parent companies for the actions of their subsidiaries and business relationships.
41. Kamara, Irene (Tilburg Institute for Law, Technology, and Society). The Jigsaw Puzzle of the EU Cybersecurity Law: Critical Reflections Following the Reform of the Network and Information Security Directive and the Proposed Cyber Resilience Act
Keywords: cyber resilience act, cyber security act, cybercrime, network and information security directive
Abstract. In December 2022, the reformed Network and Information Security Directive was published (Directive (EU) 2022/2555), replacing the 2016 NISD1 (Directive (EU) 2016/1148). The reform of the Network and Information Security Directive does not appear in a legal vacuum and is far from a standalone EU regulatory effort in the field of cybersecurity. Back in 2020, the EU’s Cybersecurity Strategy for the Digital Decade had already stressed the absence of EU collective situational awareness of cyber threats, despite the dependence of many critical sectors, such as transport, energy, health, and telecommunications, on network and information security. The annual Europol Internet Organised Crime Assessments increasingly report the rise of cybercrime-as-a-service, improvements in the modus operandi and sophistication of malware operators, and the overall increase in cybercrime opportunities.
It is true, however, that over the past years the EU legislator has intensified its legislative activity, in what has been characterised as an “actification” of the regulation of new technologies (Papakonstantinou, De Hert 2022). Following the 2013 Cybercrime Directive (Directive 2013/40/EU) and NISD1, which was the first EU-wide horizontal cybersecurity law [Markopoulou et al. 2019], the Cybersecurity Act was published in 2019 [CSA], a proposal for a Regulation on a high level of cybersecurity in EU institutions and agencies was published in March 2022 [COM(2022) 122 final], and a new legislative proposal on the European Cyber Resilience Act was published in September 2022 [COM(2022) 454 final; CRA].
Against this background, this article puts together the jigsaw pieces of what emerges to be the EU cybersecurity legal framework, taking as a reference point the NISD2 and its interaction with the EU legislative initiatives in the field of cybersecurity law. The contribution has a dual aim. The first is to analyse and illuminate the key aspects and changes of the NIS Directive reform. The second is to embed the NIS reform in the broader EU cybersecurity legal and policy framework and assess the comprehensiveness of the EU legislator’s approach. The analysis is based on the doctrinal legal method, with the aim of offering an understanding of the rationales, design, and main provisions of the Network and Information Security Directive and other instruments of EU cybersecurity law, and of identifying challenging aspects of EU cybersecurity law in achieving its goals (Mæhle, 2017). The doctrinal analysis is complemented by a literature review providing the context, interpretations, and rationales relevant to the goals of this article. As regards the recently published NISD2 and the CRA Proposal, the article relies on an ex-ante evaluation approach, focusing on the expected effects of the reformed and proposed laws respectively (Verschuuren and van Gestel, 2009).
42. Keese, Nina (European Parliament) and Leiser, Mark (Vrije Universiteit Amsterdam). Freedom of Thought in the Digital Age: Online Manipulation and Article 9 ECHR
Keywords: Manipulation, Freedom of thought, Digital technologies, Online manipulation, Human rights, Article 9 ECHR
Abstract. From advertising that targets users based on analysis of their data to exploit identifiable vulnerabilities, to disinformation and manipulative design techniques embedded in user interfaces, attempts to manipulate users for the benefit of commercial or political actors are rife in cyberspace. Responses under human rights law focus on the impact of manipulation on different rights, including privacy and data protection, but fail to provide a holistic framework. Regulators emphasise the need to comply with business obligations in the General Data Protection Regulation (GDPR) or the EU’s consumer law acquis. Because online manipulation can interfere with our mental autonomy, manipulative techniques like computational propaganda or micro-targeted advertising can compromise the right to freedom of thought under Article 9 ECHR. Accordingly, this article sets out how courts can apply the right to freedom of thought to constrain online techniques designed to manipulate users, adding a new tool to the arsenal which regulators can use in the fight against online manipulation.
43. Kilkenny, Cormac (Dublin City University). Remediating Rug-pulls: Examining Private Law’s Response to Crypto Asset Fraud
Keywords: Crypto assets, Rug-pull, Contract law, Tort law
Abstract. Crypto asset investors have suffered significant financial harm due to rug-pulls, with losses of $2.8 billion attributed to the practice in 2021 (Chainalysis 2022). A rug-pull involves the creation of a crypto asset project which is then promoted by its creators using misleading claims, making promises to investors which are ultimately not fulfilled. These projects include the exchange of cryptocurrencies and non-fungible tokens (NFTs). As crypto assets become more popular, so too do fraudulent activities of this nature that capitalise on investors’ willingness to purchase novel crypto assets. As a result of a rug-pull, investors risk losing all invested funds.
These crypto projects are often accompanied by a whitepaper which outlines the purpose and unique selling point of the cryptocurrency or NFT. The promises contained in this document may be used to mislead the investor. In this sense, the administrators of the crypto asset project use deceptive marketing practices, promising investors an array of benefits they do not intend to provide. Promoters may be recruited to further attract investors by informing them of the exciting opportunities the project provides. When the price of the asset reaches a sufficient value, and enough investors have bought into the project, the creators abandon the project without delivering on any of the promises made and retain investors’ funds.
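To make these mechanics concrete, the toy Python sketch below models the rug-pull pattern just described: investors pay into a pooled ‘project’, and a creator-only backdoor drains the pool, leaving token balances intact but worthless. All names (RugPullToken, drain_liquidity) are hypothetical illustrations invented for this sketch, not code from any real project; actual rug-pulls are executed in on-chain smart-contract code.

```python
# Toy model of the rug-pull mechanics described above. Hypothetical names;
# real rug-pulls live in on-chain smart-contract code, not Python.

class RugPullToken:
    """A 'project' that pools investor funds and lets its creator drain them."""

    def __init__(self, creator: str):
        self.creator = creator
        self.liquidity_pool = 0.0              # funds paid in by investors
        self.balances: dict[str, float] = {}   # token balances per investor

    def invest(self, investor: str, amount: float) -> None:
        # Investors swap funds for project tokens at a 1:1 toy rate.
        self.liquidity_pool += amount
        self.balances[investor] = self.balances.get(investor, 0.0) + amount

    def withdraw(self, investor: str) -> float:
        # An honest exit only works while liquidity remains in the pool.
        owed = self.balances.get(investor, 0.0)
        paid = min(owed, self.liquidity_pool)
        self.liquidity_pool -= paid
        self.balances[investor] = owed - paid
        return paid

    def drain_liquidity(self, caller: str) -> float:
        # The 'rug-pull': a creator-only backdoor empties the pool, leaving
        # investors' token balances intact but worthless.
        if caller != self.creator:
            raise PermissionError("only the creator can drain the pool")
        drained, self.liquidity_pool = self.liquidity_pool, 0.0
        return drained


token = RugPullToken(creator="promoter")
token.invest("alice", 500.0)
token.invest("bob", 1_500.0)
print(token.drain_liquidity("promoter"))   # 2000.0 – creators abscond
print(token.withdraw("alice"))             # 0.0 – nothing left to redeem
```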
The purpose of this paper is to determine whether crypto asset investors can be remediated for losses caused by rug-pulls, by applying the rules and principles of the common law. In particular, it examines the ability of contract and tort law to govern transactions affected by a rug-pull. Firstly, in relation to the contract law dimension, it determines whether the whitepapers used to entice investors can amount to binding agreements that may form the basis of an investor’s claim. It then examines the use of smart contracts that facilitate the exchange of money for crypto assets between investors and project creators, to determine whether smart contracts can be breached as a result of rug-pulls. Following the contract law analysis, the paper examines whether tort law, particularly the tort of deceit, may be used by investors seeking remediation for losses suffered. In doing so, it examines instances of rug-pulls where the project creators attract interest by making promises that are ultimately not fulfilled.
44. Krokida, Zoi (University of Stirling). The EU Right of Communication to the Public against Creativity in the Digital World: A Conflict at the Crossroads?
Keywords: EU right of communication to the public, online creativity, primary liability rules for online content sharing service providers, fair balance
Abstract. This article discusses the EU right of communication to the public and its application in the digital world. More specifically, it argues that creativity in the EU Digital Single Market may be restricted because of the overstretched interpretation of the right of communication to the public in the context of linking activities and because of the imposition of primary liability rules on online content sharing service providers. To do so, firstly, I critically examine the requirements set forth by the CJEU to justify the unauthorized communication to the public in respect of linking. Secondly, I address the ascription of primary liability to online platforms through the act of communication to the public, as set forth in Article 17 of the Digital Copyright Directive. Then, the discussion moves to the implications for creativity, namely for online platforms as spaces that promote creativity, for internet users who access artistic works, and for the creators who are economically engaged with the online platforms.
As a way forward, a return to the doctrine of fair balance is suggested. The use of that doctrine has been outlined in a substantial body of case law within the copyright context over the years, and its aim is to find an equilibrium amongst the conflicting interests of the parties involved in a dispute or in the framing of a legislative tool. As the CJEU noted in Scarlet v Sabam, “national authorities and courts must strike a fair balance between the protection of copyright and the protection of the fundamental rights of individuals who are affected by such measures”. In that way, legal certainty and the rule of law would be safeguarded while the equilibrium of the fundamental rights of the parties involved would be ensured. Otherwise, there is a risk that the right of communication to the public will place creativity online in peril.
45. Lazcano, Israel Cedillo (Universidad de las Américas Puebla (UDLAP)). DevOps and the Regulation of the “Invisible Mind” of the Digital Commercial Society
Keywords: DevOps, DevSecOps, Regulation, Artificial Intelligence, Machine Learning, Operational Resilience
Abstract. In the context of the emergence and diffusion of the idea of the Commercial Society, Adam Smith argued that, under the Law, every individual is a potential merchant and that, consequently, the interaction among the different activities and interests developed within an economy by each individual and company would constitute the famous “invisible hand”. In the context of the Fourth Industrial Revolution, we are witnessing the emergence of what I call a Digital Commercial Society, in which every individual with access to an electronic device can become a merchant; however, the interactions that configured our “invisible hand” are being complemented by the introduction of algorithms that could be labelled the “invisible minds” of our economies. To illustrate this point, I employ the case of the introduction of DevOps in companies such as commercial banks. As a starting point, DevOps (Development and Operations) can be defined as a governance model by which the people, processes and technologies that used to perform their tasks as isolated units within a company are fused together to offer – through a single pipeline – constant improvements in the internal technological infrastructure and in the solutions offered to customers. Normally, this pipeline is segmented into four main phases: 1) planning, 2) development, 3) delivery, and 4) operations. Certainly, this new paradigm offers substantial advantages and benefits for both companies and customers.

Yet, in this paper, I will argue that the automation of DevOps through the introduction of Artificial Intelligence could present challenges for regulators, companies and customers, particularly if we are deploying new software in regulated environments rapidly, based on the trust that we have in a set of algorithms developed by the company within the internal pipeline. The problem arises when a company has to face regulatory scrutiny or adverse scenarios that cannot be forecast accurately by an algorithm built on open-source software (a financial crisis would be a good example); consequently, we have to foster the regulated development of DevSecOps (DevOps that introduce Security measures into their processes) around the principles of operational resilience found in standards and principles such as the CMMC 2.0 and the Principles for Operational Resilience developed by the Bank for International Settlements, which will allow companies to preserve their systemic resilience. As we can infer, DevOps are becoming the invisible mind of our economies, and we have to be sure that this mind is adjusted to the interests of our Digital Commercial Society under the Rule of Law developed by those human interactions accurately described by authors like Wesley Hohfeld.
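As a rough illustration of the pipeline segmentation just described, the Python sketch below models the four DevOps phases with a DevSecOps-style gate at the delivery phase, where release is blocked unless every control passes. The phase names follow the abstract; the individual checks are hypothetical stand-ins for the kinds of controls a regulated firm might require, not any real standard’s requirements.

```python
# Minimal sketch of the four-phase DevOps pipeline described above, extended
# with a DevSecOps-style security gate. The checks are hypothetical stand-ins.

from typing import Callable

SecurityCheck = Callable[[], bool]

def run_pipeline(security_checks: list[SecurityCheck]) -> bool:
    """Run the pipeline; block delivery unless every security check passes."""
    for phase in ("planning", "development", "delivery", "operations"):
        if phase == "delivery" and not all(check() for check in security_checks):
            print("release blocked: security gate failed at delivery")
            return False
        print(f"phase completed: {phase}")
    return True

# Hypothetical controls a regulated firm (e.g. a commercial bank) might demand.
checks: list[SecurityCheck] = [
    lambda: True,   # static analysis of the new release passed
    lambda: True,   # open-source dependencies audited
    lambda: False,  # adverse-scenario (stress) forecast missing -> block release
]

run_pipeline(checks)
```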
46. Leiser, Mark (Vrije Universiteit Amsterdam); Santos, Cristiana (Utrecht University) and Doshi, Kosha (Symbiosis Law School). Regulating Dark Patterns across the Spectrum of Visibility
Keywords: dark patterns, manipulative design, digital design acquis, regulation, law, user interface, data protection, consumer protection, HCI
Abstract. ‘Dark patterns’ is a term commonly used to describe manipulative techniques implemented in the user interfaces of websites and apps that lead users to make choices or decisions they would not otherwise have taken. Legal academic (Leiser, de Conca, Santos & others) and policy (OECD, EU Commission) work has focussed on establishing classifications, definitions, constitutive elements and typologies of dark patterns across different fields. Regulators (UK CMA, FTC, NL ACM and several data protection authorities) have responded to this issue with a number of enforcement decisions related to data protection and privacy violations, as well as rulings protecting consumers. But how dark patterns are made visible in enforcement is still underreported, especially when regulators are the ultimate decision-makers.
Our analysis of case law found that dark patterns can be classified into two categories: user interface techniques and non-user interface techniques. The most common user interface-based techniques that have been sanctioned by regulators include preselected choices, complicated refusal mechanisms, the prominence of certain choices, and bundling practices. Non-user interface techniques include informational practices, such as unclear language, the absence or lack of accessibility of information, and also system architecture practices, which refer to manipulative techniques at the code level that are invisible to users but can be identified through technical means. From a regulatory perspective, the majority of enforcement decisions across the EU primarily deal with visible dark patterns such as preselection, forced continuity, confirm shaming, roach motel and hidden information practices. The administrative or judicial authorities have dealt with these visible dark patterns through the lens of the GDPR, the ePrivacy Directive, National Consumer Codes, and the Unfair Commercial Practices Directive.
Our paper conducts an analysis of regulatory oversight in regard to dark patterns in digital tech. The European Union’s recent approaches to regulating dark patterns (the GDPR, the reform of the e-Privacy Directive, the Digital Services Act, the Digital Markets Act, the Data Act proposal, and the Proposal for an Artificial Intelligence Act, alongside the New Deal for Consumers and suggestions of a new Digital Fairness Act) are critiqued against the EU’s digital design acquis. Part 1 of our paper analyses these decisions, concluding that the focus of regulation has primarily been on visible dark patterns in the user interface (including user experience and textual statements). Part 2 analyses the European Union’s recent and forthcoming regulatory approaches, concluding that the GDPR is sufficient to address the darker patterns typically found in third-party sharing agreements and privacy policies but has not adequately addressed the darkest patterns found in system architecture. In the final part of our paper, we further critique the suitability of the current regulatory regime, concluding that the digital design acquis in its current form is insufficient to regulate dark patterns across the entire spectrum of visibility.
47. Li, Wenlong (University of Birmingham) and Chen, Jiahong (University of Sheffield). Understanding the Evolution of China’s Personal Information Protection Law: The Theory of Gravity Assist
Keywords: Personal Information Protection Law, GDPR, China, Brussels Effect, Gravity Assist, Digital Sovereignty
Abstract. The discussion of China’s personal information protection law in English-speaking communities is often dissociated from developments in Europe and other western countries. As China has a radically distinct political regime, it is widely held that its domestic legislative developments are simply different. On the other hand, some commentators have compared the developments in China to those in Europe, notably the EU’s influential GDPR, pointing out the similarities between the two laws. One tempting way of understanding the Chinese developments in this area is to apply the theory of the Brussels Effect. What is lacking in the literature, however, is a historically- and policy-oriented approach to the nuances of the Chinese case. This paper seeks to fill this gap by developing a better conceptual frame for understanding China’s personal information protection law. We develop and substantiate the concept of gravity assist, a term primarily used in spaceflight contexts, to illustrate several waves of legislative attempts. We argue that the new personal information protection law is an attempt to leverage the political appeal and technical maturity of the GDPR to achieve a number of longer-term policy goals. We hold that this concept has notable descriptive power to (1) explain why it took almost two decades and multiple attempts for China to enact a GDPR-equivalent framework; (2) characterise the continuing influence of, and interaction with, the political agenda of cyber sovereignty; and (3) predict how China’s personal information law might evolve with the second-mover advantage in light of the economic (Data Foundational Regime) and political (digital sovereignty) agendas.
We argue that, with second-mover advantage, China’s PIPL does represent a sort of Brussels Effect in unlikely territory in the East. However, the way in which China learns from and follows the EU is inherently instrumental and superficial at an epistemic level, replicating the regulatory structures and key concepts that have been tested through several decades of enforcement and are thus useful to China’s enactment. The very core of EU data protection law, especially independent oversight, is however left behind when China transplants the GDPR into its domestic personal information law. While China does not place a high value on fundamental rights, it is imperative that its legislators respond to the far-reaching and now-tangible implications of commercial and political surveillance. Going forward, the Chinese personal information protection regime is more likely to be aligned with countries (e.g. the UK) that are explicitly eager to rebalance data protection against economic imperatives with a view to boosting data-driven innovation and the digital economy. This does not mean that China will no longer follow suit, at least superficially or instrumentally, in other legislative agendas. But China’s law-making has reached a phase in which personal information protection, as a non-essential value in the Chinese regulatory landscape, is to be rebalanced against other political and economic values. In this regard, the concept of gravity assist is helpful, we argue, as a supplementary or alternative frame to the Brussels Effect.
48. Maguire, Rachel (Royal Holloway, University of London). Copyright and Online Creativity: Web3 to the Rescue?
Keywords: copyright, NFTs, non-fungible tokens, creativity, user-generated content, Web3, reform
Abstract. The creative internet has been well-explored in copyright scholarship, with much focusing on Web 2.0 and the read-write culture that it facilitates. Web 2.0 was supposed to democratise access to creative activity and creative markets, providing those outside of the traditional channels with the ability to produce and distribute creative content. However, online creators have often faced problems in practice because of copyright law, with calls for law reform being widespread.
Narratives about opportunities offered by Web3 – a vision of an improved internet based on decentralised systems – seem to run parallel to those of Web 2.0, with talk of wealth-equalisation and entrepreneurial disruption. Web3’s relationship with copyright law, however, seems to be different. This is, in part, because decentralised systems are often seen as operating beyond the scope of law and regulation.
This paper interrogates whether Web3’s relationship with copyright law better meets the needs of online creators. It considers specifically whether Web3 resolves problems for copyright law in the existing online creative environment. To do so, it focuses on an element of Web3 that has received widespread application and attention: non-fungible tokens (NFTs).
Drawing in part on qualitative work on online creative communities, the paper identifies that amateur creators increasingly use online social spaces as routes to professionalism, including for distribution, marketing, training and feedback. However, these ‘amateur-but-aspiring online creators’ face several difficulties in taking advantage of the opportunities available to them in Web 2.0. These include difficulties leveraging rights in their works due to limited legal knowledge and advice and the ‘right-click problem’, and the disconnect between the control over works provided by copyright and that wanted by creators.
It then argues that while NFTs theoretically offer solutions to some of these problems – for example, by providing a means to create scarcity in an online context or by challenging the need for copyright law at all – in practice, NFTs seem to replicate or reinforce the copyright issues these creators face in Web 2.0. As such, we need to be realistic about the claims made in support of Web3; the need for copyright reform to better reflect the modern creative environment remains.
49. Mangan, David (Maynooth University). From the Workplace to the Workforce: Monitoring Workers in the EU
Keywords: privacy, workplace, workforce, business interests, surveillance
Abstract. Workplace surveillance has existed in some form for a lengthy period. In the early 21st century, there has been a change: surveillance has moved from being of the workplace to being of the workforce. The distinction is between the orthodox fixed location of work and the broad capture area of 21st-century surveillance technologies. Digitalisation of work has not only extended the scope of the managerial gaze; it has also expanded the type of information collected. The spectrum includes workers’ conduct at the place of business as well as their off-duty activities, existing simultaneously in physical and online spaces. Monitoring of online activities offers a range of data about individuals that may not be easily gleaned from common workplace interactions. Restrictions on the data aspects of employers’ surveillance have not yet been significantly addressed in case law. Instead, adjudication of employers’ monitoring activities has largely focused on video surveillance, along with an increasing number of decisions related to social media postings and some on email and internet use. In many decisions, the monitoring itself has not been challenged, because employers’ authority (pursuant to contract or management rights clauses) to monitor workers has been accepted, or the impugned activity has been voluntarily posted online by workers and drawn to employers’ attention. Surveillance, then, has become a more complicated and layered topic as a result of advances in information technology.
This presentation considers the reasons for employers deploying surveillance. Largely, it is based upon security and safety concerns, though these premises offer a broad scope with a wide capture area. The chapter then considers the approach to surveillance in EU Member States, which is premised on the European Convention on Human Rights (ECHR) as well as the European Union Charter of Fundamental Rights (CFR). These examples illustrate that the case law of the jurisdictions under study remains somewhat tentative when it comes to substantive engagement with a worker’s right to privacy in relation to an employer’s surveillance activities. Instead, there has been reliance upon a procedural approach premised upon notification of surveillance activities (as well as of any repercussions in response to actions observed during monitoring). It is asserted that a procedural approach to privacy at work falls short of giving effect to the rights outlined in EU law, and that there must be some movement at the level of the European courts on the parameters of a substantive right to privacy for workers.
50. Manwaring, Kayleen (UNSW). Repairing and Sustaining the Third Wave of Computing
Keywords: Right to repair, Internet of Things, Electronic devices
Abstract. The last decade has seen a significant increase in new product lines in ‘smart’ consumer goods and systems manufactured and released in the wake of technological developments allowing for many previously ‘dumb’ objects, buildings, environments and living things to be computerised and connected to the Internet. These new products will inevitably develop defects that need to be addressed, and are likely to add substantially to the ever-growing problem of e-waste. This paper examines the recommendations of the recent Australian Productivity Commission (PC) Inquiry regarding the ‘right to repair’ through the lens of cyber-physical devices and systems, such as the Internet of Things.
A stronger right to repair for independent repairers (individual, community or commercial), particularly in the context of these new types of consumer products, would assist in achieving Australia’s recently announced goals relating to the creation of a circular economy, in order to promote sustainable consumption and reduce e-waste. The PC Inquiry has produced some recommendations that will be useful for strengthening the right to repair in Australia. However, these recommendations, while welcome, also contain some significant gaps in relation to promoting sustainable consumption by consumers.
In this paper, I analyse the recommendations arising out of the PC Inquiry in the light of the 2015 United Nations Consumer Protection Guidelines relating to sustainability, durability and reliability. I identify important gaps in the PC’s approach and recommendations, which may be helpful not only for Australian policymakers but other jurisdictions who are facing sustainability challenges in relation to electronic devices and e-waste.
51. Mapp, Maureen (University of Birmingham). Private Crypto Asset Regulation in Africa – A Kaleidoscope of Legislative and Policy Problems
Keywords: Web3, Cryptocurrency law, Africa
Abstract. Amidst concerns about a long ‘crypto winter’, Africa has continued to register growth in crypto assets, with countries like Kenya, Nigeria, and South Africa leading the drive in crypto adoption. While most of the transactions are relatively small (under $1,000 per transaction), they represent 80% more than retail crypto payments in the rest of the world. With the use of newer ecosystems like DeFi, which facilitates direct peer-to-peer exchanges, Africa could well be the next frontier for cryptocurrencies, including on Web3 – the future iteration of the internet that operates on blockchain technology while supporting cryptocurrency transactions. Even so, regulating this space will remain a challenge if regulators are unclear on their regulatory priorities. The quandary appears to be whether to promote innovation through a permissive regime that is protective of consumers or to ban the use of and trade in cryptocurrencies altogether.
In this regard, the regulation of private cryptocurrencies in Africa remains a kaleidoscope of regulatory approaches. Since the mid-2000s, when South Africa issued its first warning against the use of virtual currencies, countries have introduced a range of measures, including legalising the use of Bitcoin (Central African Republic, 2022); launching their own central bank digital currencies (Nigeria, 2021); and adapting existing legislation, like foreign exchange controls, to regulate cryptocurrency use. Some countries have banned their use altogether (like Morocco), while others regulate trade via a series of policy directives. The Binance platform, for example, bowed to a policy directive from the Central Bank of Uganda (2022) to suspend withdrawal and deposit services that used mobile money payment systems to convert cryptocurrency to fiat shillings. The regulatory position is less clear when two or more approaches are combined.
On the face of it, these approaches indicate that African regulators are bringing cryptocurrency use and trade in line with the law. On closer scrutiny, however, policy and legal contradictions have blunted the effectiveness of the measures. Two areas of concern are flagged. First, cryptocurrencies need a legal framework to operate in. The asset must first be classified and given legal meaning before its provision can be regulated, but most countries lack such a framework, making it difficult for regulators to licence crypto businesses and promote consumer protection. A second point is that even where legislative arrangements exist, they sometimes clash with policy directives that eschew the exchange and trade in cryptocurrency. In effect, one arm of government promises an enabling climate but the other rejects it. This lack of coherence within law and policy suggests the absence of an appropriate normative standard to support an enabling legal framework.
Using the United Nations Kampala Declaration on Fundamental Principles on the regulation of cryptocurrencies and the Blockchain (Digital Ledger Technologies) as a yardstick, this paper explores the extent to which African states can develop a coherent set of rules and values that accommodates rather than rejects the distinctive characteristics of cryptocurrencies.
52. Margoni, Thomas (CiTiP); Quintais, Joao (University of Amsterdam) and Schwemer, Sebastian (Centre for Information and Innovation Law (CIIR), University of Copenhagen). Algorithmic Propagation: Do Property Rights in Data Increase Bias in Content Moderation?
Keywords: Copyright, Bias, Content Moderation, Platforms, CDSM, Digital Services Act, AI Act
Abstract. Our paper offers a reflection on the topic of algorithmic content moderation and bias mitigation measures, with an emphasis on European copyright law and platform regulation. It explores the possible links between conditional data access regimes and content moderation performed through data-intensive technologies such as fingerprinting and, within the realm of artificial intelligence (AI), machine learning (ML) algorithms. Since systems used for content moderation are regularly trained on copyright-protected datasets, this aspect is relevant not only for the moderation of copyright content but for the moderation of a large variety of information. Do current EU data access rules (as outlined below) have the effect of influencing, or even favoring, the propagation of bias and errors present in the input data to the algorithmic tools employed for content moderation by online platforms?
Within the broad field of EU data access rules, we focus mainly on copyright and related rights given their acquired ability to regulate certain “input data” and therefore to determine their availability. This regulation takes place not only through the relatively recent imposition of rules on the permissibility of text and data mining (TDM) required to develop AI systems (e.g., Arts. 3&4 CDSM), but also at the level of what type of information is required by law to trigger copyright content moderation liability for online platforms (as for instance specified in Art. 17 CDSM). Furthermore, we also look at recent legislative developments which, by regulating data access regimes in specific circumstances (e.g., digital services, AI, IoT, etc.), may play an interesting and yet unexplored role in data availability for training purposes (e.g., DSA, AI Act, Data Act).
Our analysis contextualizes the dynamics of “bias propagation” in the context of online platforms in relation to the obligations stemming from Article 17 of the CDSM and the more recent due diligence obligations in the DSA. Article 17 CDSM regulates online content-sharing service providers (OCSSPs) through a complex set of rules. The provision states that OCSSPs carry out acts of communication to the public when they give access to works or other subject matter uploaded by their users, making them directly liable for their users’ uploads. At the same time, the provision includes a liability exemption mechanism in paragraph (4), as well as several mitigation measures and safeguards in paragraphs (5) to (9). However, the same article mandates that the measures employed to avoid liability cannot compress the fundamental rights of users who have uploaded lawful content, as recently confirmed by the Court of Justice of the European Union (CJEU) in case C-401/19. Therefore, to concurrently offer effective protection to these user rights, traditional filtering technology may not suffice. Our hypothesis is that ML algorithms, given their alleged superiority in identifying (understanding?) contextual uses, may at this point be increasingly employed for copyright content moderation.
Against this background, a crucial question emerges for the future of online content moderation and fundamental rights in the EU: what happens when these tools are based on “biased” datasets? More specifically, if it is plausible that any bias, errors or inaccuracies present in the original datasets will be carried over in some form into the filtering tools developed on those data: (1) how do property rights in data influence this “bias carry-over effect”? and (2) what measures (transparency, verifiability, replicability, etc.) can and should be adopted to mitigate this undesirable effect in copyright content moderation, in order to ensure effective protection of fundamental rights?
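To make the hypothesised ‘bias carry-over effect’ concrete, the toy Python sketch below shows how a naive keyword filter ‘trained’ on a biased moderation dataset reproduces that bias when moderating new uploads: because parody and quotation examples were labelled infringing in the training data, lawful transformative uses get blocked. The dataset, labels and threshold are all invented for illustration; real moderation systems are far more complex than this word-counting stand-in.

```python
# Toy illustration of the 'bias carry-over effect': a naive keyword filter
# 'trained' on biased moderation labels reproduces that bias on new uploads.
# The training data, labels and threshold below are invented.

from collections import Counter

# Hypothetical training set: quotation/parody uses were (wrongly) labelled
# infringing in the rightsholder-supplied data.
training = [
    ("full movie stream free", "infringing"),
    ("full album download", "infringing"),
    ("parody of famous song", "infringing"),        # biased label
    ("quotation from film review", "infringing"),   # biased label
    ("my holiday photos", "lawful"),
    ("original piano composition", "lawful"),
]

# 'Training': count how often each word appears in infringing examples.
infringing_words: Counter = Counter()
for text, label in training:
    if label == "infringing":
        infringing_words.update(text.split())

def moderate(upload: str, threshold: int = 1) -> str:
    # Flag an upload if it contains enough words seen in 'infringing' data.
    score = sum(infringing_words[word] for word in upload.split())
    return "blocked" if score >= threshold else "allowed"

# The labelling bias propagates: lawful transformative uses get blocked.
print(moderate("parody sketch of a song"))         # blocked
print(moderate("quotation in a critical review"))  # blocked
print(moderate("my piano composition"))            # allowed
```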
53. Marquez Daly, Anna Helena (University of Groningen). Innovation & Law: Encouraging Lovers or Bitter Nemesis?
Keywords: Commons arrangements, innovation, regulation
Abstract. Have you ever wondered what kind of relationship law and innovation are in? Are they friendly colleagues, bitter nemeses, or encouraging lovers? Do they foster one another, neutralize each other, or even annihilate each other?
All of these perspectives have been discussed in the past. Some scholars have argued that law increases compliance costs, thereby reducing innovation. Some researchers have argued that fields such as intellectual property rights, copyright and patents incentivize innovation by granting protection. Some experts have even argued that laws which offer a collaborative space can boost innovation.
This last argument is certainly worthy of exploration. Authors and scholars such as Lawrence Lessig and Yochai Benkler have identified the value of collaboration for innovation, especially in digital spaces. They have studied how spaces such as the internet, which allows for a “build-upon” environment, have enriched the landscape with innumerable goods, services, and other valuable features. In fact, they have gone as far as describing the internet as an “innovation commons”, where distinct individuals can come together at different times and build something symbiotically. How is it possible that openness, rather than protection, leads to such innovation booms? Carol Rose may provide us with some insights. According to her research, under certain circumstances assets or resources are valuable precisely because they are open. Hence, under these circumstances, openness and participation lead to positive externalities.
This paper proposes an explorative perspective on how openness, a key component of the concept of a “commons”, impacts innovation. More specifically, it intends to explore under what circumstances and in what contexts such positive externalities could be derived from keeping a resource open, and in what ways such openness could impact innovation. Such a discussion could illuminate how we can best frame innovation and how best to nurture it in the digital environment.
54. Mathur, Sahil (The Open University). Digital Inequalities and Risks – Perspectives from FinTech
Keywords: FinTech, Digital Financial Services, Risks, India, Digital Inequality, Socioeconomic Characteristics
Abstract. Traditional banks are facing stern competition from FinTech banks that operate digitally. ‘Neo-banks’ and ‘challenger banks’ are but two examples of the overlapping terminology for financial services transformed into digital interfaces. The global digital transformation market was worth $594.5 billion in 2022 and is expected to grow to $1,548.9 billion by 2027. However, being a user of FinTech platforms in a digital space can expose individuals to risks such as data breaches, cybercrime, and fraud.
These risks may stem from either internal or external factors. Internal factors include exploitation by the financial service providers or their employees, in the form of a lack of transparency and understandability in digital contracts; one-sided, unfair, and incomprehensible terms and conditions; unilateral changes to contracts; non-consensual data collection; and fraud by employees. External factors include fraud such as phishing scams, caller-ID spoofing, ATM fraud, hacking, privacy violations and data breaches.
Organisations in the financial market have ample resources to transition into the digital world, but individuals may not have such capabilities. Furthermore, such risks may affect individuals differently based on their demographics, especially in developing countries. Developing countries are shifting towards digital finance at a rapid pace, but the question that arises is whether they are undertaking sufficient measures to safeguard customers from the risks of FinTech.
This paper examines the risks associated with using digital financial services and questions who might be most vulnerable while using such services. This research is situated using India’s digital financial market as an example. The paper explores the socioeconomic characteristics of FinTech users leading to their vulnerability in digital spaces and asks whether FinTech sectors are doing enough to address digital inequalities.
55. McCullagh, Karen (University of East Anglia). Brexit UK Data Protection – Maintaining Alignment with or Diverging from the EU Standard?
Keywords: Data Protection, Adequacy, Data Protection and Digital Information Bill, Retained EU Law Act, GDPR, Brexit, UK
Abstract. When the UK was preparing to leave the EU and become a third country for EU data protection purposes it opted to maintain alignment with EU data protection laws, including the General Data Protection Regulation (GDPR), and considered a variety of mechanisms to effectuate seamless EU-UK personal data transfers before applying for a GDPR adequacy assessment by the European Commission (“the Commission”). Then, after the adoption of the EU-UK GDPR adequacy decision by the Commission on 28th June 2021, the UK government announced an intention to consult on proposed changes to UK data protection law with a view to modernizing the law to foster and encourage innovation whilst protecting personal data. Following the consultation, the Data Protection and Digital Information Bill (the “DPDI Bill”) was introduced to Parliament on 18th July 2022, with the aim of updating and simplifying the UK’s data protection framework to enable economic growth by “free[ing] up the use of data” to “unleash its value across the economy,” removing barriers and reducing burdens on organisations, to allow the UK to “operate as the world’s data hub” while maintaining high data protection standards so as not to jeopardise the EU-UK adequacy decision.
Mindful of these developments, this chapter has two goals, the first of which is to explain why the UK was initially reluctant to apply for an adequacy decision, before outlining how the Commission assesses and monitors adequacy. The second is to outline the proposed changes in the Data Protection and Digital Information Bill and subsequent developments, to explain whether the UK is seeking to make appropriate use of the white space in the GDPR and the degree of divergence permitted by the EU adequacy standard of “essential equivalence”, or is instead seeking to diverge from the EU standard of data protection.
It concludes that, if implemented in their current form, many of the proposed changes in the DPDI Bill are minor; only a few would result in significant divergence from the EU GDPR and could cause the EU either to attach conditions to the renewal of the adequacy decision or to refuse to renew it in due course.
56. Mendis, Sunimal (Tilburg Institute for Law, Technology, and Society (TILT), Tilburg University, The Netherlands). Fostering Democratic Discourse in the (Digital) Public Sphere: Proposing a Paradigm Shift in EU Online Copyright Enforcement
Keywords: Copyright, Freedom of expression, Platform governance, Democratic discourse, Communicational theory
Abstract. The EU legal framework on copyright enforcement is rooted in a narrow utilitarian normative framework that perceives the primary function of copyright law as incentivizing content creation by granting authors and intermediaries the means of obtaining an adequate reward for their investment. Consequently, the preservation of user freedoms to engage with copyright-protected content in socially valuable ways that promote dialogic interaction (e.g. transformative uses) is rendered peripheral to the core economic aim of ensuring fair remuneration to copyright owners. This is discernible in (1) the framing of legally granted exceptions and limitations to copyright that protect users’ communicative freedoms as something exogenous (and sometimes even antithetical) to copyright’s function; and (2) the perception of the role of online content-sharing service providers (OCSSPs) as “Internet police” tasked with the prevention of copyright infringement on their platforms, and the corresponding failure to impose positive enforceable obligations on OCSSPs to protect users’ communicative freedoms. As a result, the current EU legal framework on online copyright enforcement is skewed in favor of protecting the economic interests of copyright owners and often results in the stifling of legitimate discourse on online social media platforms.
Based on Habermas’ theory of communicative rationality and the public sphere, this presentation establishes online social media platforms as an essential infrastructure for online discourse and a core component of the contemporary digital public sphere. Accordingly, although these platforms are typically owned and administered by private-sector corporate entities, their governance should be carried out within the framework of a private-public partnership aimed at the advancement of the public interest objective of fostering robust democratic discourse.
To enable the EU legal framework on online copyright enforcement to achieve this public interest objective, the presentation proposes a re-imagining of the existing legal framework using the communicational (social planning) theory of copyright law (advanced by scholars such as Netanel, Fisher and Elkin-Koren) as a normative framework. The communicational theory of copyright, while affirming the role of copyright in preserving the incentives of authors to produce and distribute creative content, envisions the broader function of copyright law as the promotion of the discursive foundations for democratic culture and civic association.
It is foreseen that this paradigm shift could, inter alia, (a) establish users’ communicative freedoms as endogenous to copyright, which could in turn permit a more expansive teleological interpretation of copyright exceptions and limitations that are considered to have special importance to democratic discourse (e.g., transformative uses, quotation, criticism, review and parody); (b) provide a theoretical basis for achieving a fairer balance between the fundamental right to the protection of copyright (Article 17, CFR; Article 1 of Protocol 1, ECHR) and the fundamental rights to freedom of expression (Article 11, CFR) and culture (Article 13, CFR) in applying the general limitations clause in Article 52(1) of the CFR; and (c) enable the imposition of positive duties of care on OCSSPs to ensure that copyright is enforced in a manner that promotes democratic discourse.
57. Milkaite, Ingrida (Ghent University). A Children’s Rights Perspective on Privacy and Data Protection in Europe
Keywords: children, data protection, privacy, Europe, GDPR, AI Act, DSA, UNCRC, specific protection, fairness, transparency, best interests of the child, children’s codes, business interests
Abstract. Today’s children grow up surrounded by a variety of digital services and devices. They include smartphones, tablets, ‘connected’ toys and clothes, game consoles, educational software, location and fitness trackers, social media platforms and ‘smart’ home assistants. While the use of various services and devices provides children with numerous opportunities to play, communicate and learn, it also entails exponential collection and processing of children’s personal data from the earliest days of their lives. (1) Extensive processing of children’s personal (and sensitive) data allows for inferences about their daily lives, preferences, interests, social circles, physical activity and health levels, sexuality, education and other aspects of their lives.
As a result, (data) ‘profiles’ of children can be created which, in turn, enable commercial targeting of children through marketing and (behavioural) advertising. (2) This way, commercial actors can profit from children’s engagement online. The construction of highly detailed personal profiles from an early age can affect children’s (future) lives and opportunities, potentially excluding children with specific profiles from certain types of education or leading to the refusal to grant specific health insurance policies. (3) Slowly, these developments and the issues associated with the processing of children’s personal data are appearing on policymakers’ and legislators’ agendas. (4)
Considering recent technological and legal developments affecting children, it is crucial to evaluate how they may affect children’s rights to privacy and data protection. This contribution, therefore, focuses on the legislative and policy developments in the European Union and the Council of Europe – organisations that now specifically address children’s rights to privacy and data protection in the digital environment through their work. (5) This contribution reports on research into the ways in which children’s privacy and the protection of their personal data can be rethought from a children’s rights-based perspective, rooted in the values and principles of the UN Convention on the Rights of the Child (UNCRC). The processing of children’s data and the selected European privacy and data protection provisions are thus analysed through the lens of the UNCRC general principles (children’s best interests, the rights to non-discrimination and development, and the right to be heard). This contribution suggests concrete and practical ways to strengthen the protection of children’s privacy and data protection rights in the digital environment. It draws on recent UN guidance documents, EU primary law, the EU Charter of Fundamental Rights and recent EU secondary law (particularly the DSA and the proposed AI Act) to provide concrete suggestions on the requirements that States should adopt in the context of balancing children’s privacy and business interests. In this context, it scrutinises the three ‘children’s codes’ so far adopted in the UK, Ireland and the Netherlands, and suggests adapted and additional measures that could also be adopted by other data protection authorities, nationally or at the EU level more widely (such as by the EDPB). It particularly stresses the importance of the specific protection of children’s personal data, the principles of fairness and transparency, the ways to apply the best interests of the child in practice, and the potential of data protection impact assessments.
58. Neroni Rezende, Isadora (University of Bologna). The Proposed Regulation to Fight Online Child Sexual Abuse: An Appraisal of Privacy, Data Protection and Criminal Procedural Issues
Keywords: online child sexual abuse, surveillance, privacy, data protection, criminal procedure safeguards, detection orders, GDPR, LED
Abstract. The protection of children in digital environments is a top priority for the EU legislator. Since July 2021, an interim regulation has allowed service providers to voluntarily derogate from the safeguards set in the e-Privacy Directive in order to fight child sexual abuse online. The European Commission aims to replace this temporary legislation with a Proposal published in May 2022 (COM(2022) 209 final). This Regulation will impose on service providers obligations to detect, report and remove child sexual abuse content on their platforms. The Proposal has been severely criticised by privacy experts worried about a weakening of confidentiality safeguards in online communications (i.e., end-to-end encryption). The Proposal will also introduce new preventive and investigatory measures (e.g., detection, blocking and removal orders), which will require the cooperation of a wide range of actors at the national and EU level, namely service providers, national judicial authorities and Coordinating Authorities (CAs).
Against this backdrop, this contribution will examine whether: 1) the measures established by the Proposal are proportionate in light of the goal of fighting online child sexual abuse, and if so, 2) which privacy, data protection and criminal procedural safeguards should be upheld in the future legislation.
Firstly, the structure of the Proposal will be outlined. Secondly, the proportionality of the prospective legislation will be examined with regard to the principles established in the recent surveillance case law of the European Court of Human Rights (ECtHR, e.g., Big Brother Watch) and the Court of Justice of the European Union (CJEU, e.g., La Quadrature du net).
Thirdly, the provisions of the draft Proposal will be critically assessed from a privacy, data protection and criminal procedure standpoint. On the one hand, privacy and data protection issues will be scrutinized, also taking into account the interplay between different data protection regimes (the GDPR and the Law Enforcement Directive, LED) that is featured in the fight against online child sexual abuse. For instance, it is not clear whether some of the entities foreseen in the Proposal (e.g., CAs, service providers) will qualify as “competent authorities” under the LED and which safeguards they should apply.
On the other hand, detection orders and other investigatory measures will be analysed from a European criminal procedure standpoint. The conditions for issuing detection orders will be examined, focusing on the standards of independence of the issuing authority (e.g., prosecutors) and the “evidence of significant risk” requirement (Art. 7(4)(a)). Light will also be shed on the criminal procedure safeguards that should surround the exercise of investigatory powers by CAs (e.g., the right to silence), in light of the so-called Engel criteria.
Lastly, the contribution will conclude with some policy recommendations for the prospective legislation.
59. Nottingham, Emma (University of Winchester) and Stockman, Caroline (University of Winchester). Dark Patterns of Cuteness in Children’s Digital Education
Keywords: Dark design, Dark patterns, Data protection, Children
Abstract. A child’s interactions with the online world have a formative effect on their understanding of society (digital or otherwise) and their place within it. However, the online world has not been constructed with children’s interests in mind. Online platforms have largely been designed to maximize profit-making, often through unethical practices of extracting data from users, including children. Cute design has become a firm staple of digital software development. It relates to design features in the form of visuals (avatars, colours, menu items etc.), sounds, or other technical features on the user interface which the human user would interpret as endearing in some form. Specific features have been identified as enhancing the perception of ‘cuteness’, such as avatars with large eyes, childlike voices, and a head tilt. The associations of ‘cuteness’ with vulnerability and powerlessness stimulate the human user’s trust response. In this way, ‘cuteness’ can be operationalised to inspire uncritical acceptance of a technology, given its significant potential to spark feelings of emotional intimacy. Cute design features have been implemented in a variety of applications, for both adults and children, but in consumer design for children in particular, ‘cuteness’ is often a favoured strategy to influence their intended product purchases and use.
This paper critically discusses ‘cuteness’ in digital learning apps as a potential dark pattern. Research on the concept of dark patterns is still marked by some conceptual inconsistency and varying definitions. There is limited discussion relating to children, especially in the context of their learning and education. Yet this has become an even more prevalent issue since the Covid-19 lockdowns, which saw a stark increase in the use of digital learning apps due to school learning being moved online. This paper examines the concept of ‘dark design’ in the context of children’s education, through an exploration of 20 popular learning apps, as recommended by The Guardian, to keep children meaningfully engaged during the first UK lockdown of March 2020.
60. Orlu, Cyriacus (PhD Candidate, Faculty of Law, Niger Delta University) and Eboibi, Felix (Faculty of Law, Niger Delta University). The Dichotomy of Registration and Operation of Cybercafes under the Nigerian Cybercrime Legal Frameworks
Keywords: Cybercrime, cybercafe, telecom services, cybercriminals, cybercafe operators, Nigeria
Abstract. The development of computer and telecom services in Nigeria resulted in the establishment of cybercafes in all parts of the country. Unfortunately, they became an avenue for cybercriminals to carry out cybercrimes. Due to the anonymity cybercafes present and the difficulty of identifying cybercitizens who patronize them to perpetrate cybercrimes, the Nigerian government enacted cybercrime legal frameworks that mandate cybercafe operators to register. While the Advance Fee Fraud & Other Related Offences Act 2006 (AFF) compels cybercafe operators to register with the Economic and Financial Crimes Commission (EFCC), the Cybercrimes (Prohibition, Prevention Etc) Act 2015 insists that registration be done with the Computer Professionals Registration Council. The conflict between the two legal frameworks creates confusion about the extant provision of law regulating the registration of cybercafes in Nigeria as one of the measures for curbing the perpetration of cybercrimes. Consequently, this paper critically examines both legal frameworks and recommends which law cybercafe operators should obey, and which should be acted upon by cybercrime prosecutors, based on law and practice.
61. O’Sullivan, Kevin (Dublin City University). The Court of Justice Ruling in Poland and Our Filtered Futures: A Disruptive or Diminished Role for Internet User Fundamental Rights?
Keywords: Copyright, IP Law, Fundamental Rights, Filters, EU Law, Freedom of Expression, Internet User Rights
Abstract. In April 2022, the European Court of Justice delivered its long-awaited ruling in the case of Poland, answering in the affirmative that Article 17 of the Copyright in the Digital Single Market Directive 2019 mandates the use of upload filters to protect online digital copyright. A critical decision as we move into a future of filtered online services, the ruling confirms – in principle – that algorithmic governance of online copyright enforcement is in line with European fundamental rights.
Reflecting on the Court’s existing normative base, this paper will argue that, for key stakeholders, the ruling is difficult to reconcile with the Court’s existing norms surrounding the governance of online digital copyright. For online intermediary service providers, it will be argued, the ruling amounts to a total abrogation of the protections afforded to them by the Court’s previous case law, without a sufficient and convincing justification.
It will further be argued that the ruling’s horizontal impact on internet users’ fundamental rights compounds the insufficient weight placed by the Court on such rights, in particular the right to freedom of expression. Rather than being afforded the disruptive role justified by the Court’s existing normative base, the role of such rights has been diminished in the face of algorithmic governance, setting a dangerous precedent for all our filtered futures.
62. Oswald, Marion (Northumbria University); Chambers, Luke (Northumbria University) and Paul, Angela (Northumbria University). The Potential of a Framework Using the Concept of ‘Intelligence’ to Govern the Use of Machine Learning in Policing
Keywords: AI, Machine Learning, Policing, Intelligence, Governance
Abstract. Algorithms and machine learning systems deployed within policing are being used to help make some of the most important decisions within our society, including whether someone is a victim of modern slavery, a child at risk of harm or a potential perpetrator of crime. Outputs of these systems may be one factor put forward to satisfy a legal test (such as having ‘reasonable grounds’ for suspicion), or they may be used to triage, prioritise, predict and manage data overload. Such determinations can impact a person’s future, their progress through the criminal justice system, and the deployment of limited police resources. However, police bodies need both knowledge and decision-support frameworks to determine whether or not they should rely on a machine learning model to help them make an operational decision that could impact individual rights. Such frameworks do not exist in any satisfactory form in respect of policing AI. It is crucial that assessment, oversight and governance mechanisms incorporate context-specific frameworks that allow critique by subject matter experts, highlighting potential errors and uncertainties.
Our research is focused on investigating and building such frameworks. We propose conceptualising data-driven outputs in policing and national security as ‘intelligence’ in order to improve understanding of efficacy and to make governance more effective. This paper outlines our new matrix method of evaluating AI in policing, based on lessons from existing processes designed to assess the reliability and certainty of intelligence, when and how it should be used in decision-making, when it should be ignored, and therefore whether its use is likely to be fair and proportionate. We summarise the feedback received to date from policing stakeholders on our proposals, and our initial conclusions from testing the method’s evaluation potential with a ‘real-life’ policing case study.
63. Paolucci, Frederica (Bocconi University). Digital Constitutionalism to the Test of the Smart Identity
Keywords: face recognition, AI, fundamental rights
Abstract. What memory will humanity have of us? This question has haunted the human being of every era, not least the contemporary one. In the society of the algorithm (O. Pollicino, G. De Gregorio, 2021), individuals constantly leave traces of themselves that go to make up their real and digital identities. Every trace is collected, processed, archived, and stored in immense databases, often built to create one’s digital twin, a digital self. This bit-copy of the physical world constitutes a huge – diffuse – memory of the past, present, and sometimes the future, capable of recalling even remote aspects of our identity. Digital identity must, therefore, be understood as the set of information and electronic codes provided by a user to an electronic system.
Digital identity must paradoxically be saved from itself, since every piece of data entered into the digital world is destined to wander indefinitely in the immaterial universe of the Internet, remaining retrievable even if formally deleted. Therefore, as will be shown, the effective deletion of data by machine learning systems is not a straightforward objective, for both technical and legal reasons. The problem seems to be twofold and threatens data subjects and controllers alike. Although Art. 17 of EU Reg. 2016/679 provides for the possibility for the data subject to request the deletion of their data from the data controller, in practice the current implementation of this provision in the context of artificial intelligence is problematic. This issue will be analyzed with particular reference to facial recognition, a technology that, even in light of the drafting of the AI Act, appears to be particularly critical for the fundamental rights of individuals (Raffiotta, Baroni, 2022), raising the risk of creating ‘a centralized memory (i.e., biometric and otherwise) of humanity’ (Calvino, 1968).
The law has become increasingly interested in issues related to algorithmic biases and decisions, particularly from the perspectives of the collection, use, and processing of personal data. The complex constellation of fundamental rights challenged by the new technologies is opening the door to a novel concept of identity, citizenship, and the city, shortening the distances between the world of bits and the world of atoms. Therefore, to prove the necessity of a solid and by-design procedural tool, this paper analyzes those issues through the lens of the tension between AI and fundamental rights, taking as a meaningful example the difficult enforceability of the right to erasure in the context of the algorithmic society.
As per the methodology, the topic will be approached with a normative method, reasoning around the value of the right to erasure and of the fundamental rights at stake. Moreover, while recognizing that artificial intelligence (such as facial recognition) needs ad hoc regulation, the analysis will first consider the existing regulatory framework to identify any incompatibilities with AI and to ascertain the actual need for an additional layer of protection.
64. Paul, Angela (Northumbria University). Police Drones and the Possible Human Rights Issues: A Case Study from England and Wales
Keywords: Surveillance, Human Rights, Drones, Policing, Technology
Abstract. This paper stems from my PhD project on the use of Unmanned Aerial Vehicles (UAVs), or drones. The main emphasis is placed on the human rights implications of the use of the technology for law enforcement purposes. Generally, the unique capabilities of drones are highly useful in police operations such as missing-person searches, collecting evidence from dangerous situations, and disaster relief efforts. However, the use of drones for surveillance purposes in particular can raise issues related to privacy, along with related implications pertaining to unnecessary biases. This paper will therefore explore these issues by presenting some key findings from my research involving police forces in England and Wales. There is currently no national policy or guidance specifically for the deployment of drones by the police; specific Freedom of Information (FOI) requests were therefore distributed to the police forces concerned, with a view to exploring their norms and practices in relation to the use of drones, and more specifically whether their procedures address the possible human rights implications. Lessons from this case study are also expected to provide important insights into the use of related AI technologies in the policing realm.
The existing legislation mainly focuses on physical safety concerns associated with drones, aviation rules, and the drones used by criminals. However, the seamless embedding of drones into the civilian sphere by the police will raise concerns about a lack of transparency with the public. The current work therefore stresses the importance of direct research with law enforcement, as policing research should act as a collaboration between law enforcement and academia. The paper thus takes an approach which not only acknowledges the advantages of drones as an efficient technology but also asserts how addressing the human rights threats can in fact improve the effectiveness of their use in law enforcement.
65. Poyton, David (Aberystwyth University). The ‘Intangibles’: A Veritable Gordian Knot. Are we Slicing through the Challenges? Or Unpicking them Strand-by-Strand?
Keywords: Digital Assets, Electronic Trade Documents, Computer Software, Classification, Digital Economy, Property
Abstract. This paper examines the thought-provoking era we find ourselves in with the enduring debate surrounding, as Negroponte observed, the shift from ‘atoms to bits’. Three prominent contributors to the digital economy provide the focus for our analysis: computer software, electronic trade documents, and digital assets.
In recognition of the transfer of economic value from the physical to the intangible we have seen a spell of concerted activity from the Law Commission, including projects on electronic trade documents and digital assets. In the courts, we have observed the common law judiciary, and the Court of Justice of the European Union, contending with challenging disputes relating to the data-based products of our digitised society.
Traditional legal constructs of property and possession are in line for amendment in order to accommodate electronic trade documents and digital assets. At the same time, computer programs and packaged software products are likely to fall beyond the scope of the proposals for a ‘new’ regime for digital assets.
At the time of writing, the proposals result in the following observations:
– Electronic trade documents, comprised of appropriate information in a technologically secure format, will be afforded equal status to paper trade documents. They will be possessable and transferable.
– Digital assets which meet qualifying criteria will be classified as ‘data objects’, a new form of personal property. The notion of ‘control’ of a data object will be treated as analogous to the possession of a physical thing. As such, proprietary rights and remedies will be available in disputes relating to data objects.
– Computer programs and packaged software products (i.e. the executable and non-executable elements of a piece of software) are not likely to qualify as ‘data objects’ under the new regime. This would leave the common law in the highly unsatisfactory position found prior to, and resulting from, the Software Incubator litigation: rights and obligations dictated by the carrier medium by which the software is supplied, unless the dispute relates to a ‘commercial agent’, in which case software sold or supplied on a perpetual licence is to be treated as ‘goods’.
In the analysis we ask whether we are approaching a coherent solution, fit for the future of our digital economy, or potentially risking further fragmentation of legal doctrine and creating future problems. We conclude that the complexity and diversity of the challenges associated with moving from ‘atoms to bits’ make the question perhaps an unfair one at present. However, we also contend that a coherent but flexible solution is necessary in order to avoid a perpetual cycle of redefining rules and legal constructs, particularly where the slow machinery of statutory intervention is seen as necessary.
Finally, we suggest the adoption, in statutory form, of a complementary concept to support the common law’s ability to adapt to technological developments. Functional equivalence is a well-established and workable construct which can, in appropriate circumstances, support the courts in evolving the law to reflect emerging characteristics of the digital society.
66. Przhedetsky, Linda (University of Technology, Sydney) and Bednarz, Zofia (University of Sydney). Algorithmic Opacity in Consumer Markets: Comparing Regulatory Challenges in Financial Services and Residential Tenancy Sectors
Keywords: artificial intelligence, regulation, algorithms, datafication, privacy, governance, finance, tenancy
Abstract. Algorithmic opacity in consumer markets brings about significant risks to fundamental rights, consumer rights and the rule of law. In the financial services and residential tenancy sectors, algorithmic opacity not only facilitates, but masks the occurrence of key harms including discrimination, unfair treatment, and ongoing consequences such as consumer blacklisting which may serve to catalyse or exacerbate existing disadvantage.
Przhedetsky and Bednarz compare how algorithmic practices of ‘sorting’ consumers through opaque scoring, rating, and ranking systems manifest across the two sectors and the regulatory implications of the differences between the industries. In financial services, the use of credit scoring practices has been prevalent for a long time, while the residential tenancy market is only just beginning to see sorting technologies being used to determine which applicants are deemed suitable for a property. Despite differences in the penetration of sorting technologies in each of these markets, both pose significant challenges for regulatory intervention: opaque algorithms make it difficult to identify harms, and a lack of compelling evidence makes it difficult to mount legal challenges.
Though both sectors use opaque algorithms to determine consumers’ access to products or services, the unique market dynamics in each sector make it difficult to devise effective regulatory strategies. In a competitive rental housing market, multiple consumers typically compete for a singular property, while in the financial services market a typical customer will be choosing from multiple financial products. In some instances, one’s credit score may be taken into account when applying for housing, blurring the boundaries of these two sectors.
Drawing on international examples, the authors show that existing legal and regulatory frameworks typically require consumers to operate in ways that favour corporate interests and enable algorithmic opacity across financial services and residential tenancy sectors. Further, they critique existing approaches to curtailing algorithmic opacity, including the limitations of the EU’s proposed Artificial Intelligence Act. Finally, they argue that algorithmic opacity should be a regulatory target in specific consumer contexts, and put forward suggestions for how this may be explored in future.
67. Quintais, João Pedro (University of Amsterdam, Institute for Information Law) and Kuczerawy, Aleksandra (Centre for IT & IP Law, KU Leuven). “Must-Carry” Obligations for Online Platforms: Between Content Moderation and Freedom of Expression
Keywords: Online Platforms, Content Moderation, Must-carry, Fundamental Rights, Freedom of Expression
Abstract. Over the past decade, online platforms have become crucial fora and gateways for dissemination and access to information and individual expression online. As a result, they are also the focus of policy discussion on how the law can and should regulate illegal and harmful activities online.
It is possible to observe a policy trend towards States co-opting platforms to regulate expression online, thus privatizing enforcement of public policies on speech. Whereas this movement initially took place through indirect means (e.g. liability exemptions for intermediaries), it is currently characterized by an increasing use of direct delegation mechanisms. But delegation of enforcement measures to private entities is problematic. First, because private platforms are not competent to make decisions on fundamental rights, a role traditionally assigned to the judiciary. Second, because they do not carry out a proper balancing of the competing rights at stake. The result is that privatized enforcement risks posing undue restrictions to the freedom of expression of online users.
To address criticism and mitigate these risks, European policymakers started embedding in new instruments mitigation measures and safeguards to allow for more effective protection of users’ fundamental rights. For the most part, these countermeasures are natural developments of and updates to existing intermediary liability duties and obligations, such as those found in the revised liability framework chapter of the Digital Services Act (DSA). Other countermeasures, however, are harder to conceptualize. In particular, under the umbrella term of “must-carry” obligations, a new class of rules has been persistently proposed in legislative and judicial contexts at EU and national level. These include, inter alia, prohibitions on moderating or removing content originating from predefined sources (e.g. elected officials) or addressing specific topics, obligations to preserve or prioritize content for public interest reasons, obligations to reinstate content that has been subject to removal, and put-back orders by politically appointed councils.
Are these obligations analogous to the must-carry obligations known from traditional broadcasting regulation? They arguably give rise to important questions from the perspective of fundamental rights. For instance, can platforms be forced to carry legal content that they would otherwise prohibit under their “house rules”? Do these obligations create a right to a forum on private property?
This paper aims to examine this new class of content obligations, inquire whether they might constitute a novel form of “must-carry” mandate for platforms, and explore their implications for the right to freedom of expression online. Our legal doctrinal analysis is carried out from the perspective of the European Convention on Human Rights, as interpreted by the European Court of Human Rights, and of EU law, in particular the Charter of Fundamental Rights, as interpreted by the Court of Justice of the EU.
68. Rachovitsa, Mando (University of Groningen). “It’s Not in the Cloud!”: The Data Centre as a Singular Object in Cybersecurity and Critical Infrastructure Regulation
Keywords: data centre, cybersecurity, critical infrastructure
Abstract. This paper focuses on the data centre as a physical object in order to think about and analyse recent developments in regulating cybersecurity and critical infrastructure (CI) under international law and EU law. Some insights will also be given from the UK. The discussion first briefly explains why data centres have thus far been an overlooked aspect of cybersecurity and then proceeds to show how recent legislative developments (e.g. NIS2, Cyber Resilience Act) place a pronounced emphasis on data centres.
The data centre is rarely addressed in discussions about CI resilience and cybersecurity. One may say that the metaphors embedded in how we think about data and the Internet infrastructure (e.g., “in the cloud” or the digital/cyber domain) create a tunnel vision that limits our view of the significance of the material infrastructure. The invisibility of the material infrastructure is arguably most notable when it comes to data centres (compared, for example, to fibre optic cables or satellites, which have already attracted certain scholarly and policy interest). This invisibility is ironic given that data centres are a key link in the digital supply chain and the “beating heart” of big data and the Internet of Things (in parallel with the development of edge computing). It may therefore be argued that data centres form a meta-infrastructure connecting many critical sectors, including critical governmental data and services, transportation, energy, finance etc.
CI has now started to revolve not only around sectors but also around specific objects, bringing to the fore the criticality of, among others, the data centre as a single point of failure. This does not mean that data centres have escaped regulation: there are specifically designed ISO standards and EN 50600, which set out specific privacy- and security-related requirements. At the level of international law, data centres are part of CI and qualify as part of the “public core” of the Internet.
Crucially, data centres now fall within the remit of the NIS 2 Directive, and the analysis will explore the applicable cybersecurity requirements. Although the NIS 1 Directive included digital infrastructure in its Annex II, data centres were not included therein (only IXPs, DNS service providers and TLD name registries were). In contrast, data centres are now encompassed in digital infrastructure as essential entities under Annex I (regardless of whether they are also cloud computing services). The cybersecurity relevance of data centres was also acknowledged in the 2022 Data Centre Security guidance by the UK National Cyber Security Centre and the Centre for the Protection of National Infrastructure. Finally, the paper explores whether data centres fall within the scope of the Cyber Resilience Act and, in particular, whether they qualify as a ‘product with digital elements’ and hardware within the remit of that instrument.
69. Ramirezmontes, Cesar (Leeds University). EU Trade Marks and Community Designs in the Metaverse
Keywords: metaverse, brands, designs
Abstract. This paper examines the new legal challenges and opportunities for brands and design rights in the metaverse, particularly from the perspective of EU law. The metaverse is poised to become the next natural evolution of Web 2.0, with Gen-Z consumers devoting ever more personal time to online interactions and businesses seeking to expand their product and service offerings to the metaverse. NFTs have been discussed extensively in the context of copyright law. However, trade mark and Community design legislation, both of which concern “consumer experiences”, are likely to have a much greater role in the metaverse than any other IP rights. Indeed, at the heart of the metaverse lies a wide range of consumer experiences, from experiencing products in virtual or augmented reality to digital interactions in the form of concerts and massive multiplayer games. The metaverse is also likely to blur the distinction between virtual goods and physical goods, thereby representing a significant challenge to trade mark law and Community designs, both of which centre protection around “products”. This paper therefore considers the metaverse challenges for EU trade mark law in the context of classification of goods, protectability criteria (representation and distinctiveness), infringement actions (double-identity, confusion, and dilution/unfair advantage) and defences/limitations (including whether human rights may have a role to play). A similar approach will be adopted for Community designs, except that the protectability criteria will cover novelty-destroying disclosures and individual character as regards the informed user and designer freedom. This paper argues for more attention to be paid to Community designs as the most suitable and more relevant form of protection in the metaverse.
70. Rebrean, Maria (Leiden University – eLaw – Center for Law and Digital Technologies). Giving my Data Away: A Study of Consent, Rationality, and End-User Responsibilisation
Keywords: consent, GDPR, rationality, end-user responsibility
Abstract. An underlying assumption of the European Union’s body of digital legislation is the rendering of the end-user as the actor responsible for making rational decisions. The General Data Protection Regulation’s (GDPR) consent requirements and the Digital Services Act both encourage the involvement of the end-user in the protection of their digital self, including their personal data. In this context, it is essential to observe (1) the limits of end-user rationality, and (2) how the law can protect end-users despite irrational and miscalculated choices. To reflect on these dimensions, this presentation overviews the findings of a research survey on end-users’ decisions regarding cookie consent notices (CCNs).
This quantitative study observed whether end-users take rational action for their online protection (i.e., rejecting CCNs). Overall, the results demonstrate that an individual’s decision to grant consent is influenced by their prior knowledge rather than by their active interaction with CCNs. Individuals with an advanced understanding of data collection practices and the ability to adjust their privacy settings are more likely to ‘reject’ data collection via the CCN intermediary than their counterparts with limited understanding. Nevertheless, the rationality premise implies that, despite their limited knowledge, individuals who can access additional information will still reach the most advantageous choice. Although the existence and content of GDPR-compliant CCNs should mitigate the effect of prior understanding and facilitate rational end-user decision-making, most respondents revealed that they do not read the provided information. Their reasoning includes motives of frequency and similarity, time availability, and personal inconvenience.
The output raises a theoretical question about the core principles guiding European digital law-making. If consent and rationality remain fundamental to digital law, new regulations must foresee and enforce adequate measures that bypass bounded rationality within digital decision-making. The law’s role is to mobilise state bodies to pursue data protection, shielding individuals from the effects of their wrongfully allocated consent. Because individuals rely on their subjective and previously accumulated knowledge, and because empowerment of the end-user does not result in the manifestation of rights, new legislation must attempt the propagation of additional sources of protection; for example, legal safeguards, default browser settings, and auditing.
The presentation of this study’s results on users’ interactions with CCNs addresses the discussion on the individual’s legal responsibility to defend themselves online and the viability of digital law that is based on the rationality assumption.
71. Romero Moreno, Felipe (Hertfordshire Law School). Deepfake Technology: Making the EU Artificial Intelligence Act and EU Digital Services Act a Human-Rights Compliant Response
Keywords: Deepfake technology, EU Artificial Intelligence Act, EU Digital Services Act, Human rights
Abstract. The EU Artificial Intelligence Act and EU Digital Services Act lay down new duties for law enforcement and internet intermediaries governing the use of AI systems deployed to create, manipulate and/or detect image, audio, or video content (so-called ‘deep fakes’). However, both pieces of legislation warn that the use of these AI systems must also be compliant with human rights such as the right to freedom of expression, privacy, and data protection under the European Convention on Human Rights, the EU Charter of Fundamental Rights, and the General Data Protection Regulation (GDPR). The purpose of this paper is thus two-fold: firstly, to critically assess the extent to which, under the EU Artificial Intelligence Act and EU Digital Services Act, the use of AI systems deployed to create, manipulate and/or detect image, audio, or video content is compatible with the right to freedom of expression, privacy, and protection of personal data as per the ECHR, EU Charter, and GDPR; secondly, to suggest and appraise some procedural safeguards to make the use of such AI systems compatible with the Convention, the EU Charter and the GDPR. The analysis draws on the EU Artificial Intelligence Act, the EU Digital Services Act, the GDPR, the case law of the European Court of Human Rights (ECtHR) and the Court of Justice of the EU (CJEU), and academic literature. The paper critically examines the compatibility of the use of AI systems deployed to create or manipulate image and audio-visual content with the ECtHR’s three-part test, to determine whether the duties set out in the EU Artificial Intelligence Act and the EU Digital Services Act can be implemented in a way that, firstly, is ‘in accordance with the law’; secondly, pursues one or more legitimate aims contained in Article 8(2) and Article 10(2) of the Convention; and thirdly, is ‘necessary’ and ‘proportionate’. It proposes that, for the deployment of deepfake technology to be a human rights-compliant response, the intention, knowledge of alleged unlawfulness, and profitability associated with the use of these AI systems are fundamental. The paper also critically evaluates the compatibility of AI systems used to detect deep fakes with the GDPR and the prohibition of general monitoring obligations. It proposes that, for automated individual decision-making, including profiling (Article 22 GDPR), to be human rights compliant, the use of AI systems to detect deep fakes should also observe two key tenets: the ‘data protection by design’ and ‘data protection by default’ approaches under Article 25(1) and Article 25(2) GDPR respectively. The paper seeks to fill a major gap in the existing literature. It concludes that unless the procedural safeguards suggested in the paper are heeded, the duties laid down in the EU Artificial Intelligence Act and the EU Digital Services Act governing the use of AI-based deep fakes would be inherently difficult to align with the Convention, the EU Charter and the GDPR.
72. Rosenberg, Roni (Ono Academic College, Law Faculty). Cyber Harassment, Revenge Porn and Freedom of Speech
Keywords: Cyber Harassment, revenge porn, criminal law, freedom of speech, First Amendment
Abstract. In recent years the distribution of intimate images without the consent of the victims has become an epidemic around the world. Unfortunately, this phenomenon expanded even more during the Coronavirus pandemic. The nonconsensual distribution of an intimate image is sometimes motivated by revenge following a failed relationship and is typically gender-related, in view of the fact that the majority of victims are women. For that reason, the phenomenon has come to be known as “revenge porn,” even though the term does not cover the full range of cases in which intimate images are disseminated without consent.
Today, the accessibility of the Internet, social media, and messaging apps has created convenient and easily accessible platforms for disseminating sexually explicit materials. In addition, these platforms have made it very difficult to identify distributors, owing to their anonymity and the presence of secondary disseminators.
Due to the unique characteristics of the virtual domain, the phenomenon of revenge porn has far-reaching implications for the victims. The harm suffered by the victims affects all aspects of their lives: psychological, mental, emotional, physical, social, occupational, and economic. Nervousness, hysteria, depression, and anorexia are known to be common symptoms among victims of revenge porn. From the economic perspective, because revenge porn images are often disseminated with identifying information and the name of the victim’s employer, some victims are subsequently laid off. This outcome is exacerbated by the fact that the vast majority of employers search online for information on job candidates and 70 percent of applications are rejected based on search results. To minimize the damage, many victims stop using social media and close their email accounts or a blog they had maintained.
By 2022, almost all U.S. states had criminalized revenge porn. However, the legislation is eclectic and sporadic. One of the principal reasons for this diversity is the fact that revenge porn laws are perceived as violating the First Amendment right to freedom of speech. Therefore, many states have tried to limit the scope of such laws in order to minimize the potential violation of freedom of speech. This limited legislation has deepened the inequality caused by the phenomenon.
Contrary to the vast legislation in the U.S. and to the courts’ decisions, I will argue that revenge porn should be categorized as a sex offense and that this reconceptualization has implications regarding freedom of speech. In this framework I will also examine the main differences between the existing law on revenge porn in the U.K. and in the U.S.
73. Rosli, Wan Rosalili Binti Wan (School of Law, University of Bradford) and Hamin, Zaiton (Faculty of Law, Universiti Teknologi MARA). The Legal Response to Cyberstalking in Malaysia
Keywords: Anti-Stalking Law, Cybercrime, Cyberstalking, Criminal Justice, Motivation
Abstract. In the past two decades, the Internet has been an integral part of our daily lives. The dependency on the Internet and unlimited access to modern communication systems have brought numerous benefits for users worldwide. However, as a double-edged sword, such systems have also generated a high degree of risk of victimisation from a myriad of cybercrimes, including cyberstalking. Evidence has indicated that cyberstalking has led to more heinous crimes such as cyber fraud and cyber defamation through data mining and social engineering. Moreover, severe repercussions occur when the crime transcends into the real world, resulting in rape and even murder. Given the severe impacts of cyberstalking and the nature of such crime in other Western countries, the perception of the law’s adequacy remains vague in the current Malaysian legal landscape. Hence, this paper aims at examining the perception of the criminalisation of cyberstalking in Malaysia, the various motives of cyberstalkers in the commission of such crime, and the protection afforded to victims of cyberstalking. The paper will also discuss the legal response to cyberstalking in the United Kingdom and the European Union, focusing on how these jurisdictions govern such crime. This paper adopts a qualitative methodology, in which the primary data is generated from semi-structured interviews with the relevant stakeholders and victims. The secondary data are the Communications and Multimedia Act 1998, the Penal Code, books, academic journals, online databases, and other library-based sources. The sampling method in this research is purposive sampling, and the qualitative data analysis was conducted through thematic and content analysis, in which the observations and the interview transcripts from the semi-structured interviews were examined. The primary data were triangulated with the semi-structured interview data obtained from an officer from the Ministry of Communication and Multimedia and another officer from the Ministry of Women, Family and Community Development. The findings revealed contradictory views on the effectiveness of the criminal justice system’s response to cyberstalking, which explains the under-reporting of such crime. Reports have also highlighted that more than 70 per cent of stalking victims are reluctant to lodge a police report due to the belief that enforcement officers would be unhelpful. The findings highlighted that under-reporting by victims and under-recording by police, combined with the frequent unresponsiveness of prosecutors and judges, lead to significant barriers to effective criminal justice responses to stalking offences. Also, the motivations of cyberstalkers were evident, such as jealousy and obsession. Furthermore, stalkers may share traits such as envy, a pathological obsession (including professional or sexual fixation), unemployment or failure with their job or life, and a cruel intention to intimidate or cause others to feel inferior. Significantly, the findings illustrate that the current Malaysian legal framework on cyberstalking is deficient in protecting cyberstalking victims, which calls for an urgent review of Malaysian law.
74. Samek, Martin (Charles University, Faculty of Law). New EU Regulation and Consumer Protection: Are National Bodies up to the Task?
Keywords: consumer protection, national authorities, enforcement, regulation, EU
Abstract. One of the most talked-about areas of law in the context of the EU is the new digital “acts”. These have many new features, including a relatively new form of governance that combines a standard European level with high demands on national authorities, where existing authorities will often have to be transformed or modified.
In areas like personal data protection, we have become accustomed to having competent and educated authorities overseeing compliance with the relevant obligations. Now, questions about overlapping areas of competence are being raised with new digital legislation like the Digital Services Act or the Digital Markets Act. There is also the question of competency at the level of individual employees.
Not getting as much attention in this sense are the changes brought about by consumer protection regulation in the digital environment, recent examples of which include the Omnibus Directive and the Digital Content Directive. These directives bring new rights for consumers and obligations for online service providers, whether e-shops or providers of various digital services. However, consistent monitoring and enforcement of these obligations is essential for the effective application of consumer rights. For many decades we have made do with national authorities such as trade inspectorates or competition, market and consumer protection authorities. The extent of their jurisdiction and powers varies from country to country; however, the scope of their work, i.e., the content and manner of inspections, does not differ in general terms.
In the last two decades, the market has changed, bringing new regulations with a new role for national authorities. The authorities will have to control more than just beverages poured under the limit, missing receipts, and similar “analogue” offences. Now they will have to check what algorithms online traders use or how they rank their offers. Personal, financial, and technical resources will all be tested.
Drawing on the results of quantitative research based upon interviews with stakeholders within different national regulatory authorities (directors, directors of legal sections, etc.), with a focus on the Czech Trade Inspection Authority, and on an analysis of new and proposed regulation, I aim to a) identify key areas and requirements that may be problematic for national authorities with respect to the new digital consumer rules, b) assess the readiness of the authorities in question against these requirements, and c) identify good practices and propose general and specific ideas to prepare national authorities for the new digital era.
75. Scharf, Nick (UEA Law School). 3A.M. Eternal? What The KLF Can Teach Us about the Past, Present and Future of Copyright
Keywords: Copyright, Music, Sampling, Creativity
Abstract. Formed in 1987 by Bill Drummond (‘King Boy D’) and Jimmy Cauty (‘Rockman Rock’), what subsequently became known as The KLF enjoyed a relatively brief, yet spectacular career at the vanguard of music production and creative plagiarism. Demonstrating an awareness of how technology would alter popular music in the late 1980s, they began an artistic journey to ‘assassinate the author’ alongside broadsides at capitalism and contemporary culture, which ultimately embodied their contempt for the music industry. Following their award for ‘Best British Group’ at the BRIT Awards in 1992, they committed a deliberate and spectacular act of self-sabotage involving (amongst other things) deleting their entire catalogue; something made possible because they had set up, and therefore owned, their own DIY record label. According to Drummond, “I believe that the creative and forward looking music makers of the twenty-first century will not want to make music that can be listened to wherever, whenever, while doing almost whatever … They will want to make music that is about time, place, occasion, and not something that you can download and skip over on your iPod.” (McLeod, 2009). In many ways, the story of The KLF parallels that of copyright over the last few decades; moving from relative obscurity to centre-stage in the public consciousness and playing a fundamental role in the regulation of creativity. Currently, it could also be seen as the central pillar of The KLF’s contempt for the industry that copyright allegedly supports. With copyright having led to the establishment of industry structures which now legitimise the previously subversive practices of sample-based creativity, Drummond opined that sampling itself was losing its attraction as an art form. Whilst musical trends do evolve, this presentation will argue that such evolution, in contrast to Drummond’s assertion above, has now resulted in music being something which, more often than not, is listened to wherever, whenever, whilst doing whatever. Whilst the music industry may assert that copyright has ‘got its act together’, the trend of listener passivity can be seen as a consequence of the evolution of music streaming services as the predominant form of music consumption which, despite allowing more creators than ever to release music (CMA, 2022), has now created its own problems and led to a new wave of infringement lawsuits based on contemporary musical trends and the compositional techniques which successful artists have adopted to succeed. Whilst it is clearly not the case that ‘ALL RECORDED MUSIC HAS RUN ITS COURSE’ (Drummond, 2008), does the current copyright framework support any other outcome in the future?
76. Shattock, Ethan (Maynooth University). Knowledge of Deception: Intermediary Liability for Disinformation under Ireland’s Electoral Reform Act
Keywords: Disinformation, Human Rights, Freedom of Expression, Free Elections
Abstract. This paper applies a critical perspective to provisions of Ireland’s Electoral Reform Act which impose obligations on online intermediaries to restrict access to online disinformation. The Electoral Reform Act has been proposed for over a decade. This landmark legislation not only provides urgently needed reforms to Ireland’s online political advertising regime but also introduces explicit obligations for online intermediaries to combat disinformation in election periods. Significantly, several of these obligations diverge from established European standards surrounding intermediary liability. Applying a human rights framework, this paper maps how the Electoral Reform Act imposes a new domestic regime of intermediary liability for deceptive electoral communications. It critiques this regime by condensing human rights standards from the case law of the European Court of Human Rights (ECtHR) and the Court of Justice of the European Union (CJEU) and illustrating how the legislation diverges from these standards. It further evaluates the specific roles of the proposed Electoral Commission in promoting informed electoral engagement, and proposes how this aspect of the Commission’s role could be refined to align with human rights standards.
77. Siliafis, Konstantinos (Canterbury Christ Church University) and Colegate, Ellie (University of Nottingham). Addressing the Potential Pitfalls of the UK’s Online Safety Bill’s Provisions in Relation to Adults
Keywords: Online Safety Bill, Content Regulation, Online Harms, Social Media, Encryption
Abstract. The Online Safety Bill promises to revolutionise how potentially harmful content is identified, regulated and managed online. Covering both search services and social media platforms, the new regulatory regime is set to cover a large amount of the surface web, encompassing varying content types and venues. At its core, one of the main objectives of the Bill is to reduce the presence of potentially harmful content online; however, this is where a potential issue lies. The Bill in its current iteration promises to reduce the presence of harmful content for adult users, and multiple discussions, proposals, and revisions have been made to the provisions mandating the steps platforms will need to take to protect adults. In addition, the role of the regulator, OFCOM, has developed, with the body now having specific powers to assist it in overseeing the implementation of, and adherence to, the new provisions. The Bill has so far been heavily criticized both in terms of its wide aspirations and its potential failures in application post-enactment. This paper will consider those criticisms and focus primarily on the medium- and long-term challenges for both regulators and users. For instance, OFCOM’s proposed enforcement capacity and the ambitious outlook for the reduction of harmful online content need to be considered from a critical perspective to ensure the suitable and efficient operation of the proposed framework.
As a proposed regulatory regime, the provisions of the Online Safety Bill aim to cover both public spaces on the internet, such as social media platforms, and more private, ‘hidden’ spaces such as encrypted messaging platforms. Users are not prohibited from using such services, nor is there any indication that such a prohibition would form part of the new regime. The encryption aspects, however, are likely to present significant challenges both for regulators and for those with enforcement capability once the Bill is enacted.
Looking at the overall regulatory regime, this paper will assess whether, in the long term, the plans as currently mapped could effectively reduce the harmful content that adult users are exposed to. The eventual role of OFCOM as regulator will also be considered, along with the potential challenges and issues that could arise, reducing the overall likelihood of the regime being effective as currently worded. Utilising the Bill, associated documents, and OFCOM’s indicative Online Safety Roadmap published in July 2022, this paper will consider the medium- and long-term effectiveness of the current Online Safety provisions, ask whether this is the most suitable plan for the long term, and present alternatives should there be a need to change course.
78. Sinclair, Alexandra (LSE). ‘Gaming the Algorithm’ as a Defence to Public Law Transparency Obligations
Keywords: Algorithms, Administrative law, Social welfare, Transparency
Abstract. The British state is in the midst of what it describes as a ‘digital transformation of government’. Algorithms are increasingly deployed by UK government agencies and public authorities to assist in areas such as visa processing and fraud detection. Use of these technologies is resulting in two distinct transparency deficits. Firstly, the very nature of algorithmic decision-making systems means they are invisible to the people they affect. Members of the disabled community investigated for benefit fraud have no idea an algorithm flagged their case for review. Secondly, even where it is known that an algorithm played a role, the UK government frequently refuses to disclose any information about its operation. For example, the Home Office has disclosed only three of the eight features used by its sham marriage visa algorithm, on the basis that disclosure would prejudice the operation of the algorithm by enabling the public to ‘game’ it.
This article looks at the extent to which algorithms can be gamed if their key features are disclosed. It then examines the doctrine of transparency under administrative law and the extent to which it might require disclosure of the features used by automated systems.
79. Soukupová, Jana (Charles University). Digital Assets, Digital Content, Crypto-Assets, Data and Others: Are We on the Road to a Terminological Confusion?
Keywords: digital assets, virtual property, digital content
Abstract. For a while, legal literature has been trying to deal with the phenomenon of a host of “things” being represented in digital form and carrying value (economic, social, or other) for their users. Notions such as digital assets, virtual assets, virtual property or res digitales have therefore started to appear across the legal literature. The understanding of these notions varied, and instead of identifying common traits, many authors tried to coin their own notion and definition. And although the phenomenon described by legal scholars had the same core (an object with value in a digital representation), the notions did not always cover or describe the same phenomenon. Nor was there ever agreement on whether this phenomenon should be addressed by analogous use of current legal institutes, or whether a new concept (e.g. control) that would better reflect its nature should be introduced. At the same time, consumer law came up with digital content, although the concept and context of this notion differ slightly from the academic notions of virtual property and digital assets.
With the new EU digital legal framework, we can observe that lawmakers have continued with the notion of digital content. Moreover, terms such as digital services or data are finding new terminological life. The new notions, however, come with many issues concerning their systematic inclusion within existing legal institutions, and the Member States are left to decide on their own systematisation. We can therefore observe many notions and definitions trying to embrace the digital reality, both in academic works and in the new framework.
The aim of this paper is to analyse and compare these notions as they appear in the legal literature and in the current digital framework. Specifically, the author analyses how the legal theory concerning digital assets/virtual property/data aligns with the notions in the new EU digital framework, whether the context and aim of the regulation match those in legal theory, and how the new notions interact with the classic institutes of (in)tangible things or copyrighted works. This is examined from the perspective of Czech law.
80. Sumer, Bilgesu (KU Leuven). Keeping Track of the Regulation of Biometric Data within the EU Cyberlaw: Legal Overlaps and Data Protection Challenges
Keywords: biometric data, GDPR, AI Act, eIDAS, data protection, identity
Abstract. Biometric data processing is becoming omnipresent in a wide array of application fields, from law enforcement to digital identity management. Since the introduction of the GDPR, the EU legislator has reacted to these developments with a number of regulatory instruments and proposals. The GDPR, the first instrument to specifically regulate biometric data, is now intended to be supported by proposals such as the AI Act and eIDAS 2.0, which explicitly mention biometric data processing. While the former targets biometric data processing in identification scenarios, the latter mentions biometric verification as a level-of-assurance factor. The proposed AI Act does not consider biometric verification to be among high-risk AI applications. However, the fine line between identification and verification is becoming increasingly blurred by novel verification technologies and by the use of biometric data processing as an access control element at EU borders, which is subject to another set of regulations, e.g., Regulation (EU) 2019/1157 and Regulation (EU) 2018/1862 (the SIS Regulation).
This study attempts to map out the regulation of biometric data by answering the following research question: which EU regulatory instruments explicitly or implicitly govern biometric data processing, and how do they regulate it? To answer this question, a holistic approach is taken to draw a comprehensive picture of the risks pertaining to biometric data processing that this emerging, complex biometric cyberlaw might further exacerbate. The research objectives of this study are therefore fundamentally descriptive and explanatory.
The article first describes the technical features of biometric data processing and explains the primary reasons for its wide adoption and regulation. It then maps out the EU regulatory instruments that govern biometric data, classifying them according to their scope and purpose. Finally, it discusses the potential overlaps and the legal, technical, and ethical challenges that might arise from this scattered regulatory practice. The discussion focuses mainly on the thin line between biometric verification and identification functions, particularly when data are processed centrally, and on the growing risk of function creep.
81. Sümeyra Doğan, Fatma (Jagiellonian University). Re-Use or Secondary Use: A Comparison between Data Governance Act and European Health Data Space
Keywords: Re-use, secondary use, health data
Abstract. With newly introduced EU legislation, novel terms have entered both our lives and legal and technical terminology. The Data Governance Act (‘DGA’) was adopted in 2022 and will apply from September 2023 as another main pillar of the European Union’s data strategy. One of the four main goals listed in the first article of the DGA concerns “conditions for the re-use, within the Union, of certain categories of data held by public sector bodies”. The term is defined in the second article of the DGA: re-use means the use of data held by public sector bodies for purposes other than the initial public task for which the data was collected, whether commercial or non-commercial. The subsequent articles set out further rules for this framework of data sharing. Another recently introduced legislative text is the proposal for a European Health Data Space Regulation (‘the Proposal’), presented on May 3, 2022, which is the first of the domain-specific common European data spaces. The Proposal was born from the need to harmonize healthcare throughout the Union, as stated in its explanatory memorandum, and thus contains numerous innovative provisions. One of them is the ‘secondary use of health data’, addressed in section IV of the Proposal. The Proposal takes a distinctive approach from the DGA and defines the term by setting out specific purposes: any use of collected data will be considered secondary use so long as it aligns with the listed purposes. In the DGA, ‘re-use’ denotes processing data for purposes other than the primary ones, without specifying any purposes; the Proposal, on the other hand, lists a numerus clausus of purposes that will be considered valid in this regard. The two instruments nonetheless show similarities, foreseeing a comparable process for the re-use and secondary use of data and placing public bodies in charge. Given that both instruments will regulate intersecting ground, their differences may create discrepancies. The fact that a public hospital operating in the Union could be subject to both instruments raises the question of which will prevail. This study aims to discuss the similarities and differences between the two instruments and to make sense of their interplay.
82. Sutter, Gavin (Queen Mary University of London). Qui Docet, Discit: Some Reflections on Lessons Learned Across Two Decades of Teaching an Online LLM
Keywords: LLM, internet, distance, learning, teaching
Abstract. January 2023 marks the twentieth anniversary of the first intake on what was then called the LLM in Computer & Communications Law (By Internet) at CCLS, QMUL. From early beginnings with online text chats and course commentaries distributed by post on CD-ROM, the programme has evolved with technological development, and our experience has grown along the way. During the global pandemic, this programme offered a model to follow in some respects for our delivery across all courses, yet even then lessons were learned that are still feeding back into our online delivery. This presentation is intended as a reflection on experiences of delivering online teaching via the internet across that period, with a view to continuing the discussion about the role of online education in the post-pandemic era.
This paper will be intentionally experience-based rather than theoretical. By its nature it will also be somewhat anecdotal; however, the conclusions it reaches should nonetheless be of interest in the wider context.
83. Terzis, Petros (UCL) and Veale, Michael (UCL). Foundations for Regulating Computational Infrastructures
Keywords: infrastructures, platforms, computational infrastructures, normative digital policy, platform regulation, infrastructural regulation
Abstract. From their glassy, pocket-sized rectangular forms, and the way individuals treat them, smartphones look personal. They look like relatively passive conduits for digital services. Legislators are often attracted to the surface level of this system when thinking about regulation. Recent legislation considers the functionality of surface-level features such as apps, payment systems or identification services, but often misses the technical innards that firms have hidden under premises of usability, reliability or security. However, under these topmost visible layers lurk extensive, complex, global infrastructures of sensing and computation. These infrastructures have an important, expanding role in intermediating economic and political systems, yet are rarely in focus within regulatory debate. Their functional scope, technical complexity and broad, open-ended potential leave them hard to understand, let alone to reason about alternatives to them. In this paper, we attempt to remedy this gap.
First, we outline the nature of these ‘computational infrastructures’ and the issues they generate, with policy-relevant examples designed to overcome the difficulties in communicating in this complicated area. We then survey existing regulatory approaches towards digital infrastructures, as regulating vertically integrated firms and infrastructures or digital intermediaries, platforms, communications networks, software or code more broadly is not especially new. However, we argue that complex contemporary networks of sensors and computation strain what our current legal thinking patterns and mental models are trained to confront and regulate. We then outline the characteristics of new foundations that regulatory regimes in this area should build upon, and the daunting hurdles they will have to overcome to do so. Understanding and addressing the challenges of computational infrastructures will require a significant shift in regulatory and legislative approach, and will bring new transnational challenges. Such challenges must be addressed sooner rather than later if democracies are to take the lead in determining the nature of the digital societies they wish to steer towards, rather than be steered by the infrastructural orchestrators that surround us today.
84. Tur-Sinai, Ofer (Ono Academic College) and Helman, Lital (Ono Academic College). Bracing Scarcity: Can NFTs Save Digital Art?
Keywords: blockchain, NFT, copyright, intellectual property, digital art
Abstract. This Paper addresses a fundamental question that lies at the intersection of copyright law and the technology of NFTs (non-fungible tokens). An NFT is a unique ‘token’ that links to a digital asset, such as a digital artwork, on a blockchain platform. One of the challenging questions that have arisen in connection with NFTs concerns copyright law’s position on the unauthorized minting of an NFT that links to someone else’s work. As surprising as this may sound, the answer is not at all obvious under extant copyright law. To be sure, this is not a merely theoretical question. In recent years, there has been a growing number of reported cases of unauthorized minting, in some of which a lawsuit is pending. This paper attempts to grapple with this question.
In order to form a normative stance on the question of unauthorized minting, we analyze the market for NFTs and conclude that its promise – if it exists – can only be fulfilled if the right to mint an NFT is left in the hands of the copyright holder of the work that the NFT links to. As our analysis demonstrates, the promise that NFTs bring to the world of digital artwork lies in their potential to revive the scarcity of works in the digital arena, so that creators can profit from their digital art. We explore other ways in which NFTs could benefit both artists and collectors of digital artwork (and, by doing so, enhance social welfare) and explain why these benefits depend upon securing the link between copyright law and the NFT sphere. The normative discussion is conducted through the lens of the fundamental theories underlying copyright law, including utilitarian theory and personality theory.
After establishing the normative case for barring unauthorized minting, we move on to examine which legal mechanisms could effectuate this result. The immediate candidate is copyright law itself. Yet the exclusive rights of the copyright owner do not seem to capture the minting of an NFT, i.e., the creation of a blockchain token that contains a link to the digital file. Some other actions that often accompany the minting of a new NFT (including storing a copy of the work before its minting and offering it for sale after its minting) may involve copyright infringement. In many cases, however, the sale of an NFT can be effected without performing these incidental actions in a manner that infringes copyright, and, hence, relying upon them as the sole foundation for policy in this area does not make sense. As we explore, the history of copyright law and technology teaches that focusing on the incidental features of a new technological tool rather than addressing the core technology itself may yield inefficient results.
To the extent that copyright law cannot be used to ban the unauthorized minting of NFTs, we discuss other legal mechanisms that may be used towards this end, including buyers’ claims in contract and tort (based on misrepresentation), moral rights, and misappropriation. We also discuss enforcement options and the role of platforms in regulating this space.
85. Unver, Mehmet (University of Hertfordshire) and Roddeck, Lezel (Bucerius Law School). Artificial Intelligence in the Legal Sector: Ethics on the Spot
Keywords: Artificial Intelligence, AI ethics, legal ethics, professional conduct rules, legal sector, transparency, accountability, fairness
Abstract. Artificial Intelligence (AI), while transforming the software systems and products used by legal practitioners, raises several ethical issues concerning the transparency, fairness, and accountability of AI-powered legal tech tools. These issues concern whether professional conduct duties for lawyers respond to the overall ethical challenges and to what extent legal ethics needs to interact with AI ethics. In response to these questions, this study adopts a broader viewpoint through which not only professional conduct duties but also the regulatory landscape and principles of AI ethics are assessed. Furthermore, the AI life cycle, including the stages of design, development, and deployment, is discussed with a view to eliciting a holistic ethical viewpoint and strategy applicable to the legal sector. The interaction between the AI life cycle and lawyers’ use of AI is thereby examined to establish how the ethical challenges can be addressed overall. It is concluded that the current professional conduct duties and ethical responsibilities need to be reviewed and revised to materialise ethical AI during its life cycle and beyond. Moreover, both lawyers and AI stakeholders, e.g., legal tech providers, need to cross their boundaries, engage with ethical issues from a holistic viewpoint, and collaborate with each other. Ultimately, for a fruitful collaboration that can leverage ethical AI for the legal sector, regulatory bodies such as the SRA should take a leading role.
86. Urquhart, Lachlan (University of Edinburgh) and Boniface, Christopher (University of Edinburgh). Legal Aspects of the Right to Repair for Consumer Internet of Things
Keywords: internet of things, right to repair, sustainability, cybersecurity, data privacy
Abstract. The Internet of Things (IoT) involves everyday products like televisions, toys, mobile phones and even refrigerators becoming increasingly automated and interconnected. Whilst IoT ubiquity in day-to-day life allegedly brings benefits of convenience, security, or even entertainment, it also raises questions around the maintenance and long-term redundancy of such devices. The e-waste generated, for example, stems from planned obsolescence in their design (as part of update cycles) and from consumers’ inability to repurpose, customise, or maintain systems. Companies often make it prohibitively hard to access spare parts and channel repair through expensive professional repair facilities. Similarly, the data-driven AI processes underpinning IoT systems create a significant carbon footprint. The scale of, and lack of planning around, the footprint of such devices means there are significant emerging environmental consequences. Support for the “right to repair” has grown, and new legal provisions in the EU and around the world aim to drive more sustainable design and foster a more circular, sustainable economy. These include laws like the EU Ecodesign rules requiring manufacturers to continue to provide spare parts, and emerging IoT security rules around update timelines, such as the EU Cyber Resilience Act and the UK Product Security and Telecommunications Infrastructure Bill. Concurrently, social movements advocating for consumer autonomy have also grown, clashing with dominant economic forces like Apple and John Deere. Repair cafés have emerged in cities around the world as grassroots means for self-directed or communal repair.
In the UK, we are running a two-year EPSRC-funded interdisciplinary research project called Fixing The Future, investigating the role of an emerging “right to repair” in changing the socio-technical landscape around consumer IoT. We are bringing together academic perspectives from law, design, ethics, and Human-Computer Interaction (HCI) to explore how to realise the right to repair for consumer IoT. In this talk we will consider the legal problem space around repairability by reflecting on issues such as planned obsolescence, data management, cybersecurity, and the sustainability of devices and the wider ecosystem. We will consider the socio-legal, ethical, and design developments taking place to either support or oppose the repair of IoT. We conclude by outlining our interdisciplinary roadmap, which envisions how to build more ethical, equitable, legally compliant, secure, and sustainable IoT systems.
87. Van Schendel, Sascha (Tilburg University). The Regulation of AI in Criminal Justice: Building a Bridge between Different Legal Frameworks
Keywords: AI, Criminal law, Data Protection law, EncroChat, Predictive policing, Fundamental Rights
Abstract. We have a myriad of legal frameworks stemming from different fundamental rights that regulate the use of AI (for example, various privacy and data protection provisions as well as non-discrimination principles, but also proposals such as the AI Act), with legal instruments differing according to the context in which AI is used. One sector in which AI plays an increasingly prominent role is criminal justice: for example, through the use of algorithms for the compilation of risk profiles, software for automated searches of bulk data, AI tools for predictive mapping, and automated assessment programs for recidivism risk. In this contribution I argue that a problem arises in the criminal justice sector because the regulation of such AI applications is governed by fragmented pieces of legislation that cannot keep up with new technological developments and therefore do not offer adequate fundamental rights protection. To make the discussion more concrete, I focus on the Dutch criminal justice sector. First, I discuss various Dutch AI applications and how they challenge the traditional assumptions of legislation in the criminal justice sector. Next, I use the example of the bulk collection of EncroChat phone data and its automated searching by the Dutch police, and the subsequent case law, to illustrate the gap in fundamental rights protection that has opened up between criminal procedural law and data protection law. The aim of the contribution is to use these examples from the Dutch context to illustrate the difficulties we face with such a wide range of legal instruments aiming to protect fundamental rights. To mitigate some of the problems of this fragmented regulatory framework, I propose ways in which data protection norms and criminal procedural norms can be better integrated in the Dutch context, to serve as an example and starting point for further discussion on bridging the gaps between different regulatory frameworks when it comes to AI and its fundamental rights challenges.
88. Van ‘t Schip, Mattis (Radboud University). The Regulation of Supply Chain Cybersecurity in the EU NIS2 Directive: A Novel Approach to Cybersecurity for the Internet of Things
Keywords: Cybersecurity, Internet of Things, Supply chain cybersecurity, NIS2 Directive, General Data Protection Regulation, Radio Equipment Directive
Abstract. An increasing number of actors design, develop, and produce modern ICT products in a collaborative network, a ‘supply chain’. From a cybersecurity perspective, each actor brings new vulnerabilities for the entire chain and, in turn, the ICT product the chain creates (e.g., with various employees having access to data storage). This is a problem that should be addressed by ‘supply chain cybersecurity’, a type of cybersecurity policy that prevents disruption of a supply chain’s digital assets by internal or external actors.
Existing cybersecurity legislation (e.g., the General Data Protection Regulation and the Radio Equipment Directive) does not contain clear supply chain cybersecurity provisions. In contrast, the recently adopted EU Network and Information Systems (NIS2) Directive introduces rules on supply chain cybersecurity for the network and information systems (e.g., Internet of Things devices) of entities in critical sectors (e.g., energy providers, hospitals). Internet of Things devices combine hardware (e.g., a watch) and software (e.g., an operating system) and, consequently, many actors belong to the supply chain of these devices. The provisions in NIS2 could thus support the overall cybersecurity of Internet of Things devices in a novel way. This paper analyses, therefore, to what extent the supply chain cybersecurity rules in the NIS2 Directive offer a meaningful contribution to the cybersecurity of IoT devices, compared to existing cybersecurity legislation.
Supply chain cybersecurity is a developing field; it originates from ‘cyber supply chain risk management’. This latter field of study emphasizes three main elements that are pivotal to cyber risk management for supply chains: 1) cyber resilience, the capacity of the supply chain to recover from attacks; 2) collaborative cybersecurity investments, which support the entire supply chain in achieving equal levels of cybersecurity; and 3) standardization, which offers guidelines that companies can draw on to ensure homogeneous cybersecurity measures. The paper shows that the NIS2 Directive aligns closely with these risk management elements. At first glance, therefore, the Directive offers a proper response to the supply chain cybersecurity problems of the Internet of Things-producing industry.
However, beyond risk management, the Directive’s supply chain cybersecurity provisions are a missed opportunity. First, the Directive requires the users of IoT systems in critical sectors (e.g., a hospital that employs smart watches) to take supply chain cybersecurity measures. This approach does not fit with broader supply chain management literature, where governance responsibilities are attributed to the “focal company”, the governing company within a supply chain (e.g., Apple for the iPhone). The focal company knows their supply chain well and can thus take more effective cybersecurity measures than the end user of their product. Second, the Directive does not sufficiently define the exact measures, actors, and processes involved in its supply chain cybersecurity requirements; a clearer vision is required in this evolving field.
In conclusion, the NIS2 Directive offers a promising legal introduction to supply chain cybersecurity, but the current approach is built on a flawed understanding of this intricate topic.
89. Vellinga, Nynke (University of Groningen). Rethinking Compensation in Light of the Development of AI
Keywords: AI, Compensation, Liability
Abstract. The development of AI will put existing liability frameworks to the test. The opacity, autonomy and complexity of AI systems can stand in the way of a fair and efficient allocation of risk and loss. The European Commission (EC) has recognized this and has addressed these matters in proposals for two directives: the AI Liability Directive and a new Product Liability Directive. Both directives address information asymmetries between parties to a liability claim by providing new rules on the burden of proof. In addition, the proposed new Product Liability Directive has been brought ‘up to date’ by explicitly incorporating new technical developments (self-learning capabilities, cybersecurity, etc.) and by clarifying that software is a product within the meaning of the proposed Directive.
Another noteworthy change to the product liability framework is the proposal no longer to offer Member States an opt-out from the development risk defence. Under the current Product Liability Directive (Directive 85/374/EEC), Member States can opt out of this defence for specific products or for all products (art. 15(1)). The proposed new Product Liability Directive no longer provides for this opt-out, thereby making the development risk defence applicable in all Member States. This means that if a manufacturer proves that ‘the objective state of scientific and technical knowledge at the time when the product was placed on the market, put into service or in the period in which the product was within the manufacturer’s control was not such that the defectiveness could be discovered’, the manufacturer is not liable for the damage caused by its defective product. Although this could support innovation, it could have grave consequences for the injured party. Depending on their specific circumstances, and especially insurance, the injured party might not receive compensation for damage caused by a defective product. So, even though the EC addresses difficulties concerning liability for damage caused by AI systems, the injured party might still not receive compensation for the damage caused by a defective AI system.
This gives rise to the question of whether the legal framework for compensation should be revised more boldly to ensure compensation for the damage suffered by the injured party. This contribution explores such a bold approach by delving into compensation funds. More specifically, it examines how a compensation fund for damage caused by AI systems could be designed, what its boundaries could or should be, and what its benefits might be. Could a compensation fund for damage caused by AI systems provide a useful new approach?
90. Verdoodt, Valerie (Ghent University) and Lievens, Eva (Ghent University). The EU Approach to Safeguard Children’s Rights on Video-Sharing Platforms: Jigsaw or Maze?
Keywords: Children’s rights, Video-sharing platforms, Platform regulation, AVMS Directive, Digital Services Act, Proposal for an AI Act, Age-appropriate design code
Abstract. Children are avid consumers of audiovisual media content. They watch programmes, series, films and short videos on a variety of devices and through different channels, such as television broadcasting, video-on-demand services (e.g. Netflix, Disney+) and the internet (EU Kids Online, 2020). Online audiovisual content is available on websites, gaming platforms and video-sharing platforms, of which YouTube is the most popular. Video-sharing platforms offer a wealth of child-friendly or child-appropriate content, as well as content that children find amusing or interesting; these two categories do not necessarily always overlap. Moreover, there is also content available which – depending on the age of the child – might be considered inappropriate or potentially harmful. This raises important questions from a children’s rights perspective, in particular about the role of platforms in this context.
EU lawmakers have not been idle in recent years when it comes to platform regulation. Interestingly, legislative proposals dealing with video-sharing platforms increasingly make explicit reference to children and their rights under the UN Convention on the Rights of the Child. However, the question is whether these proposals are limited to acknowledging that certain services and practices may pose serious or even unacceptable risks, or whether they have the potential to ensure that children’s rights are effectively implemented. To answer this question, this article explores the responsibilities towards children that existing, recently adopted and proposed EU legislation imposes on video-sharing platforms. The instruments we investigate are the Audiovisual Media Services Directive, the Digital Services Act, the proposal for an AI Act, and the Age-appropriate design code that was announced in the EU Better Internet for Kids + Strategy. Based on a legal study of policy documents, legislation, and scholarship, this contribution investigates to what extent this legislative framework helps to safeguard children’s rights – in particular their rights to play and to freedom of expression, and their rights to protection from harmful content and from commercial exploitation. The article focuses on the regulatory framework of the European Union and draws on the work of the United Nations and the Council of Europe in relation to children’s rights.
91. Wang, Xiaoren (University of Dundee); Heald, Paul (University of Illinois) and Ge, Weihao (University of Illinois). Creatively Misinformed: Mining Social Media to Capture Internet Creators and Users’ Misunderstanding of Intellectual Property Registration System
Keywords: misinformation, intellectual property, social media, data mining, empirical research
Abstract. Intellectual property (IP) law is complicated, but engaging formal legal help is costly. The internet has therefore become a major source of cheap regulatory information for the creative industries. In the absence of formal counsel, platforms like Twitter and Reddit have become important sources of information about the law, and important sources of misinformation. For example, law firm posts suggest that misconceptions about the need for, and benefits of, copyright, patent, and trademark registration abound.
Misconceptions about the copyright registration system may contribute to the proliferation of online scams targeting new authors. Failure to understand the patent registration system may result in the loss of protection for an invention. Businesses that misunderstand the scope of trademark registration may make overly risk-averse or overly risk-preferring branding choices.
Although Creatively Misinformed focuses on the UK creative industries, we will mine posts on Twitter and Reddit for common misconceptions of both the UK and EU systems. After all, UK inventors and brands consider the EU to be a key market, and UK creators of copyrighted works (especially musicians and artists) dream of selling in the EU.
After extensively mining data from Reddit and Twitter, we expect to find a gap between the objectives of the IP registration system and how it is understood “on the ground.” We hypothesize that this gap is partially due to public misconceptions about the registration system. Without a more precise picture of IP misconceptions, regulators cannot easily respond. We hope to inform regulators and advise them on how to reduce the most troublesome IP misconceptions. Ultimately, our research could be used to improve the quality of media campaigns and public strategies. Finally, we hope our data will help IP scholars increase the relevance of their interventions.
92. Williams, Elin (University of Liverpool, PhD Candidate in Law/Edge Hill University, Visiting Lecturer in Law). Money Laundering Through Cryptocurrency Mixers: Exploiting Existing Weaknesses in the Anti-Money Laundering Regime
Keywords: Money Laundering, Anti-Money Laundering Regime, Cryptocurrencies, Mixers, Effectiveness, Cybercrime, Serious Organised Crime, Future Technologies, Law, Regulation
Abstract. Money launderers are continually seeking new, technologically advanced typologies, such as cryptocurrency mixers, to fund and profit from serious organised crime. More specifically, the process of ‘mixing’ obfuscates money laundering by combining illicit and legitimate transactions that take place simultaneously; additional instructions can also be given to launder in smaller quantities or at a delayed rate. In effect, ‘mixing’ represents a more technologically sophisticated form of ‘smurfing’, a traditional laundering method involving the shuffling and redistribution of funds to obscure their origin and audit trail.
This paper will provide a novel analysis of how criminals who launder their ill-gotten gains through cryptocurrency mixers can exploit existing weaknesses in the anti-money laundering regime. First, it will provide contextual background by defining what cryptocurrency mixers are, while also outlining their opportunities and money laundering risks. Second, it will examine how mixers operate and how they may facilitate the money laundering process, thus aiding criminals in evading detection. Third, it will build on these findings by analysing the application of law and regulation in the new technological landscape and identifying areas of weakness in the anti-money laundering regime open to exploitation.
Finally, this paper will conclude by acknowledging that efforts have been made to identify new typologies and adapt or introduce cyber-applicable legislation. Nonetheless, it argues that the priority should be remedying weaknesses in the existing anti-money laundering regime, which can be achieved by revisiting the orthodoxy of ‘effectiveness’. Making such amendments will allow for a truly robust regime, capable of overseeing and overcoming challenges posed by emerging and future typologies, including and beyond cryptocurrency mixers.
93. Wolters, Pieter (Radboud University). The Influence of the Data Act on the Shifting Balance between Data Protection and the Free Movement of Data
Keywords: Data access, Data Act, Data Governance Act, Data protection law, Data sharing, European strategy for data
Abstract. The relationship between data and European law is characterised by two objectives that need to be balanced: data protection and the free movement of data. In recent years, the emphasis has been on data protection. Strict interpretations of data protection law by the European Data Protection Board and the Court of Justice have made it harder to share data. In contrast, the European Commission is particularly concerned with strengthening the free movement of data. Under the Digital Single Market Strategy (2015) and the European strategy for data (2020), it has proposed and announced various directives and regulations (or ‘acts’) to facilitate data sharing. Within the European strategy for data, the Data Act, presented on 23 February 2022, is the most ambitious proposal. It imposes harmonised rules for sharing data and rights of access to data for both users and governments.
This article analyses the influence of the Data Act on the balance between data protection and the free movement of data. It argues that the Data Act will not significantly shift the balance by itself. Like the Data Governance Act, the Data Act primarily contains frameworks that apply when data are shared. However, the obligations and incentives to actually use these frameworks remain relatively limited. The strict rules of the GDPR are not affected.
This does not mean that the Data Act cannot play a meaningful role in the future. The Data Act is part of the broader European strategy for data. The extensive frameworks are not just meant for a few specific obligations to share data. They provide a foundation. It is up to other instruments such as the European data spaces to build on this foundation by creating new rights of access to data and corresponding obligations to share data. The impact of the Data Act on the balance between data protection and the free movement of data is therefore not set in stone. It depends on the success and further elaboration and implementation of the European strategy for data.
94. Xiao, Leon Y (IT University of Copenhagen; QMUL; York; Stanford). Beneath the Label: Poor Compliance with ESRB, PEGI, and IARC Industry Self-Regulation Requiring Loot Box Presence Warning Labels by Video Game Companies
Keywords: Loot boxes, Video games, Gambling, Videogaming, Consumer protection, Industry self-regulation
Abstract. Loot boxes in video games are bought to obtain random rewards. Concerns have been raised about loot boxes’ similarities with gambling and their potential harms (e.g., overspending). The ESRB (Entertainment Software Rating Board) and PEGI (Pan-European Game Information) are the self-regulatory video game age rating and content moderation organisations for North America and Europe, respectively. Recognising players’ and parents’ concerns, in mid-2020 the ESRB and PEGI announced that games containing loot boxes would be marked by a new label stating ‘In-Game Purchases (Includes Random Items)’. The same label was also adopted by the International Age Rating Coalition (IARC) for games on digital storefronts, such as the Google Play Store, globally. The label is intended to provide more information to consumers and allow them to make more informed purchasing decisions. The measure is not legally binding and has been adopted as industry self-regulation. Previous research has suggested that industry self-regulation might not be effectively complied with due to conflicting commercial interests, even though this approach is often adopted for the regulation of new technology. Poor compliance with the ESRB’s, PEGI’s, and IARC’s loot box presence warning label was empirically demonstrated in two ways. Firstly, following an analysis of the rating records of both organisations, 60.6% of all games so labelled by either the ESRB or PEGI (or 25.7% using a more equitable methodology) were not labelled by the other. Such inconsistencies were most likely caused by one age rating organisation failing to accurately identify loot box presence. Secondly, 71.0% of popular games containing loot boxes on the Google Play Store (whose age rating system is regulated through IARC) did not display the label and were therefore non-compliant with the self-regulation. These games were analysed through gameplay to ensure that they do indeed contain loot boxes. At present, consumers and parents cannot rely on this self-regulatory measure to provide accurate information as to loot box presence, particularly in relation to mobile games. The mere existence of the measure cannot be used to justify the non-regulation of loot boxes by governments, given the poor compliance and doubtful efficacy (even when complied with satisfactorily). Improvements to the existing age rating systems are proposed. An official response from PEGI regarding this study has been received and will be explored alongside the empirical findings. Official responses from the ESRB and IARC remain forthcoming. This empirical study fully accords with the principles of open science and is being conducted as a registered report, meaning that the methodology was peer-reviewed and approved prior to data collection. A link to the pre-registered protocol is available upon request and is not provided here to preserve anonymity.
95. Yardimci, Gizem, Aphra Kerr and David Mangan (Maynooth University). Protecting Elections in the Digital Age: Examining the Potential Regulatory Impact of the EU’s Draft AI Act on Political Bots
Keywords: political bots, elections, AI Act, computational politics, freedom of speech and information
Abstract. AI-enabled internet trolls have been used to manipulate democratic processes and have gained significant attention in recent years. However, electoral fraud, including electoral manipulation, is not a new phenomenon. Electoral fraud occurred prior to the internet, but it was subject to limitations, including geographic and legal constraints and limits on the population it could reach. The internet has not only expanded the scope and effect of such manipulation through the use of internet trolls; it has also given advertisers the ability to use datafication, platforms and algorithms to personalise political messages to groups of voters, further increasing the potential for manipulation.
In the current digital environment, bots play a significant role in shaping public opinion via social media and can be utilized for manipulative purposes. Bots, or robots, have existed in the digital world since the creation of ELIZA, a chatbot designed in 1966 by Joseph Weizenbaum. Today, Twitterbots are perhaps the best-known type of bot, noted for their impact on political elections. Political bots, initially deployed by the Chinese government and later adopted by Russia, have been designed to spread misinformation, distract from important issues, and promote pro-candidate messages. Political bots were used in the Brexit referendum and the 2016 US election. The effects of political bots on elections have been studied in the literature, although this effect has not been conclusively established.
This presentation will examine the European Union’s draft Artificial Intelligence Act (AI Act) as one emerging legal approach to regulating political bots. Using qualitative document analysis, the paper examines how the AI Act proposes to regulate political bots and protect the integrity of democratic processes. As the Act is still being drafted, an assessment of potential measures for the responsible use of political bots will be provided. The presentation will evaluate the potential strengths and weaknesses of the proposed legislation in terms of protecting elections, and provide insights into ensuring the integrity of democratic processes.
96. Zardiashvili, Lexo (Leiden University). The End of Online Behavioral Advertising
Keywords: Online behavioral advertising, Online platforms, European Union, Regulation, Autonomy, Dignity, Manipulation
Abstract. Online behavioral advertising (OBA) is the central revenue stream of the online platforms that control the gateways to the internet for business users and consumers alike. In 2022, this practice generated more than $500 billion in revenues and funded most of the content and services on the internet for which consumers do not pay money. Nevertheless, the practice is associated with many harms and systemic threats. In essence, OBA carries the potential to undermine consumer privacy and autonomy through the manipulative design of online interfaces and manipulative personalization. These potential harms also point to more systemic threats, such as those to democracy and the rule of law. In the EU, OBA is regulated across multiple fields of law, such as personal data protection, privacy, consumer protection, and competition law. Nevertheless, the core of the practice has continued to operate unhindered, creating some of the wealthiest companies in history. For some observers, recent developments in European adjudication and the adoption of new legislation within the Digital Services Framework signal the end of the practice. This article analyzes these developments and explains the extent to which EU legislation limits online behavioral advertising.
97. Zucchetti Filho, Pedro (Australian National University). Facial Recognition in Brazil: Current Scenario and Future Developments
Keywords: Facial Recognition Technology, Brazilian Law Enforcement, Regulation, Human Rights
Abstract. Facial Recognition Technology (FRT) has been accompanied by an increasing wave of enthusiasm, not only from its developers but especially from its deployers. This fascination is easily understood. Which government agency would not want to acquire this futuristic technology when its developers claim it can produce faster ‘positive’ outcomes, such as more rapidly identifying someone suspected of having committed a crime? If efficiency and effectiveness are the key goals, FRT seemingly fits the bill, at least at first glance. This optimism, however, has not been borne out in reality, especially in Brazil. Brazilian law enforcement agencies have been deploying FRT for some time, and Brazilian state agencies have spent enormous amounts of money to purchase the equipment.
However, there is a significant legal gap concerning its deployment, and the legal basis currently invoked by state agencies for its use does not hold constitutionally. Moreover, Brazilian law enforcement agencies are not following established international criteria and guidelines for secure and human rights-compliant deployment, which deepens the social disadvantages experienced by minority groups. Brazil has one of the largest incarcerated populations in the world. FRT deployment by Brazilian law enforcement agencies has proved discriminatory and biased towards specific population segments as well as minority ethnicities. This paper analyzes the current socioeconomic implications of FRT deployment by Brazilian state agencies, as well as the legal consequences of these agencies’ failure to comply with international prerequisites established for the safer use of the tool.