
Anna Schröder: “‘Real-time’ versus ‘post’ remote biometric identification systems under the AI Act”

The Amsterdam Law & Technology Institute’s team invites young scholars to contribute guest articles to ALTI Young Voices. Here is the first contribution, authored by Anna Schröder.

***

Introduction

In April 2021, the European Commission (EC) published a proposal for a new regulation on artificial intelligence. The regulation is referred to as the Artificial Intelligence Act (AIA) and will form the world’s first legal framework on AI. The draft AIA follows a risk-based approach, on the basis of which AI systems are categorised into one of four risk categories – minimal, limited, high and unacceptable risk – and subjected to specific rules accordingly.

A number of stand-alone AI systems are explicitly listed as ‘high-risk’ in Annex III to the draft AIA. At the top of this list, under point 1(a), are “AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons”. Remote biometric identification systems are used to identify natural persons at a distance by comparing a person’s biometric data with a reference database. An obvious but nonetheless controversial example is video surveillance systems that make use of facial recognition technology, as these enable the automatic detection and unique identification of a person by means of that person’s facial image.
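For readers less familiar with the technology, the sketch below illustrates the core identification step in schematic form: a probe biometric sample (for instance a face image) is converted into a numerical template and compared against a reference database of known identities, with a match declared only above a decision threshold. This is a minimal, purely illustrative sketch; the `embed`-style templates, the structure of `reference_db` and the threshold value are hypothetical placeholders and are not drawn from the draft AIA or from any particular system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two biometric templates (feature vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe_template: np.ndarray,
             reference_db: dict,
             threshold: float = 0.8):
    """Return the identity whose reference template best matches the probe,
    or None if no candidate exceeds the decision threshold."""
    best_id, best_score = None, threshold
    for identity, ref_template in reference_db.items():
        score = cosine_similarity(probe_template, ref_template)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Hypothetical usage: in practice, templates would come from a face-embedding model,
# not from random vectors.
reference_db = {"person_A": np.random.rand(128), "person_B": np.random.rand(128)}
probe = np.random.rand(128)
print(identify(probe, reference_db))
```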

This contribution focuses on the distinction made in the draft AIA between ‘real-time’ and ‘post’ remote biometric identification systems. The former are subject to stricter rules than the latter when used for law enforcement purposes in the public space. According to the European Commission, these diverging rules reflect a difference in the level of risk to fundamental rights, but whether this proclaimed difference in risk is accurate has been disputed by experts. The central question in this contribution is therefore: can the proposed distinction between ‘real-time’ and ‘post’ remote biometric identification systems under the draft AIA be explained on the basis of a difference in risks to fundamental rights?

In order to answer this question, the first two sections lay out the existing legal framework for the use of remote biometric identification systems by law enforcement authorities and the relevant additions to this framework proposed in the draft AIA. The third section discusses the justification behind the distinction from a fundamental rights perspective.

  1. Current framework: the Law Enforcement Directive

While the General Data Protection Regulation (GDPR) provides the basic framework for the processing of personal data in the EU, its material scope excludes the domain of law enforcement under Article 2(2)(d). This gap is filled by a piece of legislation that was drafted and adopted in parallel with the GDPR: the Law Enforcement Directive (LED). While the scope of the LED differs from that of the GDPR, their rules on data protection are highly similar. When assessing the legitimacy of police use of remote biometric identification systems, which necessarily involves the processing of personal data, the LED is thus the applicable legal framework.

The type of data that is crucial to the functioning of biometric identification systems, such as facial recognition technology, is biometric data. Under Article 3(13) LED, biometric data is defined as personal data “resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data”, and it is treated as a special category of personal data.

The processing of special categories of personal data is subject to an additional level of protection under Article 10 LED. Accordingly, it is allowed only (i) where strictly necessary, (ii) subject to appropriate safeguards for the rights and freedoms of the data subject and (iii) in a limited number of circumstances, namely (a) where the processing is authorised by EU or Member State law, (b) to protect a person’s vital interests or (c) where it relates to data that are manifestly made public by the data subject.

While police use of remote biometric identification systems, and the processing of biometric data it entails, is thus currently regulated by the LED, the draft AIA proposes additional rules on such use of biometrics.

  2. ‘Real-time’ or ‘post’ identification? A distinction under the draft Artificial Intelligence Act

The draft AIA follows a risk-based approach whereby AI systems are regulated on the basis of four categories of risk: minimal, limited, high or unacceptable. According to the EC, such a “tailored regulatory response is needed”, because certain specific features of AI can create high risks for which existing legislation, including the LED, is insufficient. Through the risk-based approach, the draft AIA is designed to intervene in a proportionate manner. Hence, its regulatory focus lies on AI systems that generate high or unacceptable risks.

Any AI system intended to be used for the remote biometric identification of natural persons is considered ‘high-risk’ under point 1(a) of Annex III of the draft. For such a high-risk AI system to be permitted on the EU internal market, the draft lays down a set of core requirements that would need to be met, as well as a third-party conformity assessment that would need to be carried out.

On top of the requirements that apply to remote biometric identification systems as high-risk AI systems in general, further specific restrictions apply when such identification systems are used in ‘real-time’ and in the publicly accessible space for the purpose of law enforcement. When the use of remote biometric identification systems does not take place in ‘real-time’, the draft AIA speaks of ‘post’ use. Below, I turn to the nature of the distinction between ‘real-time’ and ‘post’ use and introduce their respective regulatory frameworks in the context of law enforcement in the publicly accessible space.

2.1 ‘Real-time’ remote biometric identification systems

For remote biometric identification systems to be classified as ‘real-time’ systems, Article 3(37) of the draft AIA requires that “the capturing of biometric data, the comparison and the identification all occur without a significant delay. This comprises not only instant identification, but also limited short delays in order to avoid circumvention.” Hence, as mentioned in Recital 8 of the draft AIA, this involves “the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality”.

Under Article 5(1)(d), the draft AIA prohibits the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, except in three exhaustively listed situations, where – as explained in Recital 19 – “the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks.” In short, the three listed situations concern (i) the targeted search for potential victims of crime, (ii) the prevention of a terrorist attack or of a “specific, substantial and imminent” threat to human lives or safety and (iii) the search for suspects or perpetrators of certain criminal offences punishable by a custodial sentence for a maximum period of at least three years.

For each use of ‘real-time’ remote biometric identification systems in one of the exceptional situations, a number of further criteria must be met in order to ensure that the systems are used in a responsible and proportionate manner. Among other criteria, each use should be subject to “temporal, geographic and personal limitations” (see Article 5(2) draft AIA) as well as to prior authorisation by a Member State’s judicial authority or independent administrative authority (unless in a “duly justified situation of urgency”, see Article 5(3) draft AIA).

In any case, obtaining the necessary authorisation is only possible in a Member State’s territory to the extent that this Member State provides for such a possibility by detailed rules of national law (see Article 5(4) draft AIA). Hence, it is for Member States to decide autonomously whether the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement – within the boundaries set by the draft AIA – may be fully, partially or not at all eligible for authorisation.

Since the listed exceptions to the prohibited use in question are exhaustive, law enforcement authorities cannot bypass the prohibition by turning to the grounds for processing biometric data provided under Article 10 LED. The specific legal framework under the draft AIA regarding police use of ‘real-time’ remote biometric identification systems in the public space can therefore be considered a lex specialis in respect of Article 10 LED (see Recital 23). Remarkably, the draft AIA does not introduce a similar specific legal framework or prohibition with regard to ‘real-time’ use of such systems for purposes other than law enforcement, nor with regard to any sort of ‘post’ use.

2.2 ‘Post’ remote biometric identification systems

‘Post’ remote biometric identification systems are simply defined in Article 3(38) of the draft AIA as any remote biometric identification system other than a ‘real-time’ one. This definition is further clarified in Recital 8 of the draft, which sets out that the identification performed by ‘post’ systems occurs only after a significant delay, so – in contrast to the use of ‘live’ or ‘near-live’ material – only after the biometric data have already been captured. For example, according to that recital, the use of ‘post’ systems could involve “pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned”.
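In functional terms, the matching logic is the same in both cases; what differs is only the delay between the moment the biometric data are captured and the moment the identification is performed. The sketch below is purely illustrative of that point: the delay threshold is a hypothetical value, since the draft AIA does not quantify what counts as a “significant delay”, and the example inputs are placeholders.

```python
import time

# Hypothetical cut-off; the draft AIA does not quantify "significant delay".
SIGNIFICANT_DELAY_SECONDS = 60

def classify_use(capture_timestamp: float, identification_timestamp: float) -> str:
    """Label a single identification event as 'real-time' or 'post' use,
    based only on the delay between capture and identification."""
    delay = identification_timestamp - capture_timestamp
    return "real-time" if delay <= SIGNIFICANT_DELAY_SECONDS else "post"

# A live camera frame identified almost immediately -> 'real-time' use.
print(classify_use(capture_timestamp=time.time() - 2, identification_timestamp=time.time()))

# Archived CCTV footage from a day earlier, run through the same matcher now -> 'post' use.
print(classify_use(capture_timestamp=time.time() - 86_400, identification_timestamp=time.time()))
```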

Both ‘real-time’ and ‘post’ remote biometric identification systems are explicitly referred to as ‘high-risk’ under point 1(a) of Annex III of the draft. As opposed to the former type of systems, however, the latter is not subject to a conditional ban when it comes to police use in the public space. Such use of ‘post’ remote biometric identification systems is instead subject to the earlier mentioned conditions under Article 10 LED. Though these requirements are arguably strict, police use of ‘post’ remote biometric identification systems in the publicly accessible space is certainly less restricted under the draft AIA than that of the ‘real-time’ variant.

According to Recital 8 of the draft AIA, the distinction between ‘real-time’ and ‘post’ remote biometric identification systems is made “considering their different characteristics and manners in which they are used, as well as the different risks involved”. Whether these differences can explain the different levels of protection provided under the draft AIA is however questionable. This will be discussed in the next section.

  3. The distinction in light of fundamental rights

With its risk-based approach, the draft AIA intends to “tailor the type and content” of its binding rules “to the intensity and scope of the risks that AI systems can generate” (as follows from Recital 14). However, as for the different level of legal restrictions concerning the use of ‘real-time’ versus ‘post’ remote biometric identification systems in the public space for the purpose of law enforcement, it is questionable whether “the intensity and scope of the risks” to fundamental rights can explain the regulatory distinction between the two systems.

In Recital 28, the draft mentions a number of rights protected by the Charter of Fundamental Rights of the EU that may be adversely impacted by high-risk AI systems in particular, including the right to human dignity, respect for private life, data protection, freedom of assembly and more. As said earlier, the draft AIA classifies both ‘real-time’ and ‘post’ remote biometric identification systems as high-risk systems. According to Recital 33, this is because “technical inaccuracies” of such systems in general “can lead to biased results and entail discriminatory effects”.

The more stringent restrictions with regard to police use of ‘real-time’ systems in the publicly accessible space stem from the consideration that such use is “particularly intrusive in the rights and freedoms of the concerned persons, to the extent that it may affect the private life (…) and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights”, and additionally, that “the immediacy of the impact and the limited opportunities for further checks or corrections” carry “heightened risks for the rights and freedoms” of the persons concerned (according to Recital 18).

Nevertheless, it remains unclear why the mentioned rights and freedoms of the data subject would not be equally at stake in the case of ‘post’ use of remote biometric identification systems, since the only functional difference between the two types of systems is the existence of a ‘significant delay’ between data capture and identification. This point is also reflected in expert responses to the draft AIA. In their joint opinion of May 2021 on the draft AIA, the European Data Protection Supervisor (EDPS) and the European Data Protection Board (EDPB) note that ‘post’ use of remote biometric identification systems can be just as intrusive as ‘real-time’ use, since the intrusiveness does not necessarily depend on the time span within which the biometric data is processed. They take into account that mass identification systems can identify thousands of individuals within a few hours and argue that, in the context of a political protest, the use of ‘post’ systems will probably have a significant chilling effect on the exercise of fundamental rights and freedoms.

Similar critique has been put forward by European Digital Rights (EDRi). According to an analysis of the draft AIA published by EDRi in August 2021, the distinction made between ‘real-time’ and ‘post’ use would create “a loophole which permits law enforcement agencies to retrospectively apply biometric identification to CCTV footage or photographs”, enabling a type of mass surveillance that can have an equally intrusive impact on fundamental rights – or an even more intrusive one when data are processed from sources that cover different times and places. Whereas the draft AIA requires temporal and geographic limitations to be set before ‘real-time’ use for law enforcement purposes in the public space can be authorised, no such limitations are imposed under Article 10 LED, which provides the basic legal framework for ‘post’ use in the context of law enforcement.

Finally, on the basis of case law of the Court of Justice of the EU, one could argue in different directions on whether the distinction can be explained by different risks to fundamental rights. On the one hand, the Court has similarly distinguished real-time access to sensitive data from non-real-time access, stating in La Quadrature du Net (par. 185) that the former is more intrusive as it may allow for virtually total monitoring of the data subject. On the other hand, one could argue that the distinction is artificial, because the use of ‘post’ remote biometric identification systems may also provide for (continuous) monitoring of the data subject’s private life, albeit not live but with a delay. On this note, the Court has established in Tele2 Sverige and Watson that unforeseeable processing of sensitive data that was captured in the public sphere constitutes a serious interference with the fundamental rights to privacy and data protection, regardless of when that data was captured.

Conclusion

As part of its legislative proposal for the Artificial Intelligence Act, the European Commission has sought to strengthen the EU’s legal framework on the use of remote biometric identification systems. In this light, the draft AIA lays down rules for ‘real-time’ and ‘post’ remote biometric identification systems, which it classifies as high-risk AI systems. On top of these rules, it introduces a specific prohibition (subject to a number of exceptions) on the ‘real-time’ type of system, when used for the purpose of law enforcement in the public space.

This way, a regulatory distinction is made between ‘real-time’ and ‘post’ use of remote biometric identification systems – such as video surveillance systems equipped with facial recognition technology – by the police in the public sphere. It is highly questionable whether this distinction can be explained on the basis of a difference in risks to fundamental rights, even though the AIA claims to have tailored its rules to such risks. On the one hand, the distinction may be explained on the basis of a different level of intrusiveness, but on the other, this argument can be refuted by pointing to the specific risks related to ‘post’ systems.

At the time of writing, the ball in the legislative process lies with the European Parliament, whose members submitted amendments to the draft AIA in June 2022. With thousands of amendments on the table, it is not yet possible to tell how they will affect the rules regarding remote biometric identification systems. Nevertheless, should the European Parliament and – later in the process – national governments come down in favour of a world with biometric (mass) surveillance in the public space, let us hope that this new reality would protect fundamental rights not just in ‘real-time’, but at any time.

***

Citation: Anna Schröder, ‘Real-time’ versus ‘post’ remote biometric identification systems under the AI Act, ALTI Forum, October 14, 2022

