
Lyria Bennett Moses: “The legal eye on technology”

The Amsterdam Law & Technology Institute’s team is inviting external faculty members to publish guest articles in the ALTI Forum. Here is a contribution authored by Lyria Bennett Moses, Director of the Allens Hub for Technology, Law and Innovation and a Professor in the Faculty of Law and Justice at UNSW Sydney.

***

What does a legal eye see when it looks at technology? From time to time, technology throws up something of interest to policymakers, lawyers, and legal scholars. “Artificial intelligence” has been a recent topic, but one can go back to “nanotechnology”, “cloning”, or “cyberspace”. None of these is an unambiguous “thing in the world” to which one can point; rather, each term represents a collection of objects and practices that get bundled together at a particular point in time. Meanwhile, technology throws up many new things of seemingly little interest to lawyers, at least outside a specialist few.

In this essay, I discuss what in technology attracts the legal eye and what it is that gets seen. My interest in legal observation stems from its impact. What gets seen flows through to how problems are conceptualised and, ultimately, to substantive proposals for new law. However, legal perceptions of technology are not always discussed explicitly in policy debates, despite their impact on the timing and scope of those debates, and notwithstanding the fact that they often shape assumptions behind what gets proposed, frame the questions that get asked, and thus mould policy work and law reform. Here, I argue that making our perceptions explicit will improve legal thinking at the technological frontier.

Ways of seeing

From reading the literature, one can observe both different ways of seeing technology and different views on how lawyers and legal scholars ought to examine it. My goal here is to focus on the first of these rather than the second.

Some see technology as explorers might – new horizons to draw on our maps. Beebe made this point in his essay on early space law scholarship, through which legal minds grappled with the fear of Sputnik and sought solace in imagined off-world transactions and the constitutional implications of true aliens on US soil. Exploration is often linked to colonialism – the continuing authority of law is asserted through such projections into the future.

A similar mindset pervades scholarship that seeks to “regulate” technology, placing it under the control or influence of legal rules, government strategies or ethical constraints. Here, the first step is definitional, establishing the scope of what is proposed. Terms that were once used to bracket together diverse practices, whether for marketing purposes, grant funding or problem-solving efforts, are reimagined as well-defined regulatory objects. These articulations of the technological thing of concern are debated in legal literature and policy reports, eventually becoming fixed in law. Nanomaterials become (at least in Europe) “an insoluble or biopersistant and intentionally manufactured material with one or more external dimensions, or an internal structure, on the scale of 1 to 100 nm.” The precise shape of artificial intelligence is still under discussion. Thus are particular outlines drawn around technological objects and practices, and brought within our control.

While some legal minds approach technology as something to be mapped, scoped and regulated, others prefer to close their eyes altogether. A call for “technological neutrality” in the formulation of law seeks to influence the world without regard to the technological mechanisms enabling activities of concern. While legal rules can never completely abstract away from the socio-technical landscape on which they operate, the push here is to generalise away from technical specifics. What is seen by those taking this approach (at the extreme) is not technology at all, but an abstraction from the technological means through which the activities to be prohibited, promoted, or guided are enabled.

In my own scholarship, I have focussed not on the technological thing but on the ways in which it changes. In other words, I have looked at technology not as technology, but as the fastest-changing component of the landscape, the thing requiring the most urgent response.

This is not an exhaustive list; my point is rather that what gets seen by lawyers and legal scholars when they look at technology impacts the kinds of recommendations they make. Seeing technology as a domain to be defined and regulated leads to an approach focussed on gradual influence and steering; focusing on technology or technological change as the source of legal concern shapes the direction of law reform in different ways. Lawyers’ perceptions of technology are not mere strands of thought; they influence the world.

An example of the impact of our perceptions: Visions of artificial intelligence and algorithms

To appreciate the importance of the timing and content of legal perceptions, consider the example of “artificial intelligence”. The history of artificial intelligence as a scientific project is often traced back to the mid-1950s and a Dartmouth summer research project based on “the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”. Investment in the field has oscillated over time, with recent successes in machine learning making it increasingly popular, so that it is now frequently listed among the technologies essential to various countries’ national interests. Marketers eventually latched onto the term, using it to promote much of what might earlier have been called computer programs or software.

Gradually, the ideas of artificial intelligence and algorithms (as opposed to machines, computers, programs, big data and other no-longer-fashionable terms that have overlapped with these new categories) have attracted the legal eye. Those in the “define then regulate” camp have drawn their own boundaries around these and related categories, often quite differently from those working in computer science and related fields. The idea of the algorithm in the mind of Mohammed ibn-Musa al-Khwarizmi, after whom the word is named, is quite different from the Council of Europe’s 2016 definition, which focuses on data processing, or New Zealand’s, which links it to machine learning. The definitions of artificial intelligence or AI systems that one finds in the OECD or in Europe’s proposed regulation have less to do with the simulated intelligence that drove the Dartmouth project, earlier legal and policy analysis, or Russell and Norvig’s idea of rational agents, and more to do with the nature and influence of the outputs.

Not only do legal visions of terms like “artificial intelligence” evolve, but the desire to make them the objects of regulation arises far distant in time from the emergence of the practices that the categories describe. The timing of legal intervention is based less on a conscious balancing of the unknowability, fixity and pace of a technology’s evolution and diffusion, as articulated by Collingridge and Bernstein, and more on when it attracts attention. Once it becomes an object for legal and policy discussion, there is no consistency in what it is that is being discussed. What counts as “artificial intelligence” or an “algorithm” is very much in the eye of its beholders, often distant from technical understandings and varying according to place and time. This ultimately affects the scope of what is proposed by way of new law or regulation. The timing and content of the legal response to technology hinge on what gets seen, and when.

Blind spots

What I have hoped to demonstrate above is that when and how technology is seen by those observing it from a legal perspective impacts what gets proposed and, ultimately, the form of legal rules and regulatory action. Here, I will focus on the impact of inevitably limited visions.

Blind spots can be seen in the European Commission’s approach to artificial intelligence. The Commission was invited to “put forward a European approach to artificial intelligence”. Its eye was directed not to something physical, but to a category that exists only as an object of the regulation to come. It had to map this thing, to define artificial intelligence, in order for the policy project to be made real. The eventual draft Regulation described what the Commission cartographers saw – a broad and growing category – and what they wanted to do about it. Where it did distinguish among practices, it did so largely based on sector rather than technical mechanisms. The Commission refracted all sorts of policy objectives through its vision of AI – subliminal communications, exploitation of vulnerable groups, profiling, biometric identification, and even risk management – all of these were mapped exclusively onto the terrain of artificial intelligence, as the Commission viewed it. Where artificial intelligence is involved, don’t use subliminal techniques. Where artificial intelligence is involved, don’t exploit the vulnerabilities of persons with a disability. The Commission builds policy solely around the artificial category of “artificial intelligence”, as it imagines it.

Blind spots as the result of limited vision are not, however, confined to observers that focus on describing and mapping a technological domain. Those looking away from technology, who see abstractions around technologically-mediated activity, have their own blind spots.

This can be seen in Australia’s response to changes in communication networks. There, policymakers are seeking to rise above the tide of technology and formulate surveillance laws that are “technology-neutral”. The hope is that this will avoid constant legislative revision each time citizens change how they communicate. But gazing at a horizon beyond technological landscapes renders policy choices invisible, just as the European Commission’s approach to artificial intelligence limited its policy vision. In Australia’s case, the problem is the implicit creation of a default of surveillance power, even where the consequences might not be known. While policymakers tell stories about technologies within the range of their vision (say, communication through messaging apps on mobile devices), by abstracting away from technology they potentially include within the surveillance net (at a futurist extreme) direct mind-to-mind communications. To legislate ‘beyond’ technology via technological neutrality is not neutral; it is a policy choice in favour of a default, in this case a default of surveillance.

In case readers think that I am arguing for a particular way of seeing technology, it is worth pointing out the blind spots of my own approach, which focuses on technological change. While I believe this approach has been fruitful (of course!), it is still a particular way of seeing that impacts my proposals for law’s response. For example, it tends to be blind to law reform required not as a result of anything new, but because of what we haven’t really seen for a long time.

Consciousness of vision

So if I am not prescribing my lens on technology, how should we respond to the limitations of legal vision? What I propose is a combination of awareness and diversity. Those exploring the legal implications of technology or prescribing legal responses to the challenges it brings should consider what it is that they see and how they view it. Is it just a question of private interests, seeking to block or favour particular sectors, influencing public policy? How is my imagination of technological objects and practices limited by the categories I draw on and how I map them onto the world? These questions apply even to those attracted to ideas of technological neutrality – visions of technology will still impact the appropriateness of rules formulated to abstract away from technical specifics.

Within policy processes, when we move from individuals to institutions, we should embrace diversity of vision in order to test proposals for law reform or regulatory action. Openness to different perceptions of the technology being considered can help align the purpose and scope of specific proposals. We should avoid wearing blinkers too early, framing the issue around a concept like “artificial intelligence” before we have had a chance to consider whether this is what truly ought to attract our gaze. Diversity of vision is of course only one among many forms of diversity, but they are all linked, because what gets seen depends on who people are, the communities in which they live, and what matters in their own lives and the lives of their networks. So diversity in its broadest sense will feed into the ability of lawyers and policymakers as a whole to develop law that operates effectively at the technological frontier.

Lyria Bennett Moses

***

Citation: Lyria Bennett Moses, The legal eye on technology, ALTI Forum, January 31, 2022.
