AI Act and the Prohibition of Real-Time Biometric Identification
Much ado about nothing?
When it comes to the use of remote biometric identification (RBI) systems, the Artificial Intelligence Act (AIA) draws an important distinction between real-time RBI systems and post RBI systems. Real-time RBI systems involve live capturing, comparison, and identification, whereas post RBI systems perform these functions retrospectively using previously collected data, such as recordings from CCTV cameras. The AIA generally prohibits the use of real-time RBI systems in publicly accessible spaces for law enforcement purposes (Article 5(1)(h)), although it allows some notable exceptions. Post RBI systems, by contrast, are classified as high-risk AI systems under Annex III. This differentiated treatment suggests that real-time RBI systems present higher risks to the fundamental rights of individuals. Yet, NGOs and academics argue that both real-time RBI and post RBI systems pose similar risks of individuals being subjected to “persistent tracking” (see for example here, here and here). Indeed, one may even argue that post RBI – as it relies on a larger dataset than live recordings and has more time to process data – could be more impactful on fundamental rights and freedoms than real-time RBI. For example, the fear of being identified even months after attending a public event can significantly affect the exercise of the rights to freedom of expression and of assembly.
In this blogpost, we reflect on the legal framework applicable to real-time RBI systems as established by the AIA. We argue that the AIA failed to adequately address the risks posed by this technology and to provide sufficient safeguards. Moreover, it establishes vague exceptions to the prohibition, which are likely to lead to various implementation issues.
The lack of harmonisation and clarity in the AIA’s regulation of real-time RBI systems
As mentioned above, the use of real-time RBI systems for law enforcement purposes is generally prohibited under the AIA. However, this prohibition is not absolute. Article 5(1)(h) of the AIA permits the use of this technology if it is strictly necessary for the following purposes: (i) the targeted search of victims of specific crimes (e.g., human trafficking); (ii) the prevention of a threat to the life or physical safety of persons, or of a terrorist attack; and (iii) the localisation or identification of a person suspected of having committed a criminal offence, when necessary for investigation, prosecution, or the execution of a penalty, provided that the offence is (a) referred to in Annex II, and (b) punishable under national law for a maximum period of at least four years.
While purposes (i) and (ii) appear clear and coherent, divergences among Member States may arise under purpose (iii). These divergences essentially relate to the definition of offences. Annex II of the AIA contains an exhaustive list of 16 criminal offences for which real-time RBI systems may be used. Beyond the lack of clarity regarding the Commission’s selection of these offences, none of them has a common European definition. The offence of “rape” included in the list is a case in point. While attempts were made to develop a common criminal law definition of rape in Directive (EU) 2024/1385 on combating violence against women and domestic violence, regrettably, no consensus was found. Thus, divergences persist within the EU (as noted here). For instance, some Member States consider the lack of consent to be a constitutive element of the crime of rape, while others, such as France, do not. Another example relates to “crimes within the jurisdiction of the International Criminal Court”. While some Member States have included international crimes in their criminal codes (e.g., Spain) or created specific codes (e.g., Germany with the Völkerstrafgesetzbuch), others, such as Italy, have not. This means that these international crimes are not only not punishable by a maximum period of at least four years under national law, but are not punished as such at all. This fragmentation among Member States regarding criminal offences may ultimately lead to some uses of real-time RBI being allowed in one Member State while being prohibited in another.

Additional fragmentation may occur in the assessment preceding the deployment of real-time RBI systems. While Article 5(2) AIA lists certain elements that shall be considered (such as the nature of the situation and the impact of the use of the system on rights and freedoms), it fails to specify who should assess these elements, how, and when.
Although one may assume that this responsibility falls on law enforcement authorities (LEAs), this notion may still encompass various actors, ranging from prosecutors to police forces, and even private entities entrusted with law enforcement functions. The definition of LEAs in the AIA (Article 3(45)) mirrors its controversial equivalent, that of competent authorities in the LED (Article 3(7)). As a consequence, it may cover a variety of actors, and further clarity may be needed to delineate precisely which authorities it entails. The Court of Justice has already excluded tax authorities and road safety directorates from the definition (see Case C-175/20 and Case C-439/19).
A broad and dangerous exception to the prohibition of real-time RBI systems
The exception to the prohibition of real-time RBI systems, as regulated in the AIA, may not only lead to fragmentation in the implementation of the Regulation across the EU but also result in a broad and dangerous expansion of their deployment.
This becomes apparent with regard to the personal scope of the exception. As already noted, Article 5(2) AIA outlines certain elements that shall be considered before deploying real-time RBI systems and states that they shall be used solely to confirm the identity of a specifically targeted individual. However, while the identity of the specifically targeted individual might be clear in scenario (i) above (the targeted search of victims of specific crimes), it may be far less certain in scenario (ii) (the prevention of a threat to the life or physical safety of persons, or of a terrorist attack). In fact, in ticking-bomb scenarios (a term often used in discussions on the ethical justification of torture in terrorism cases), it may be hard to identify a specifically targeted individual at all. As a result, this may legitimise the use of real-time RBI systems to identify individuals who are not formally the object of a criminal investigation (such as suspects) but who have merely been targeted – a term which is inherently vague and hence subject to interpretation.
The dangerous expansion is also visible when analysing the interaction of the AIA with the LED. In principle, since the use of real-time RBI systems entails the processing of biometric data by LEAs, it should fall under the scope of the LED, and more particularly Article 10 on the processing of special categories of data. As of today, many Member States (such as Italy and the Netherlands) have not yet adopted specific national legislation to regulate the processing of this category of sensitive data by LEAs and have simply transposed Article 10 LED into national law. The EDPB noted, however, that a mere transposition may not be invoked as a legal basis for the processing of biometric data, as it would lack both precision and foreseeability. A specific provision in national law providing for such a legal basis is thus essential; absent it, the processing of biometric data through real-time RBI systems would not be allowed.
Recital 38 AIA states, however, that the rules of Article 5(1)(h) on real-time RBI systems apply as lex specialis with respect to the rules contained in Article 10 of the LED. This implies that, theoretically, the processing of biometric data through real-time RBI systems may be allowed under the purposes of the AIA, while not being legally allowed (or regulated) under national law. Consequently, the AIA may circumvent existing restrictions resulting from the insufficient implementation of the LED.
This may be remedied through subsequent national legislation. Indeed, Article 5(5) AIA allows Member States to provide, via national law, for the possibility to fully or partially authorise the use of real-time RBI systems for law enforcement purposes, or to restrict it further. In doing so, we argue, Member States would also provide a legal basis for the processing of biometric data. Furthermore, the AIA mandates that Member States adopt national laws to implement Article 5(3) concerning the authorisation procedure. The legal framework resulting from these paragraphs is quite complex: our understanding is that while the AIA should in principle be directly applicable, Member States will only be able to benefit from the exception to the ban in Article 5(1)(h) through national law.
Additional but insufficient safeguards through the classification of real-time RBI as high-risk systems
As mentioned above, the use of real-time RBI systems in publicly accessible spaces for law enforcement purposes becomes a non-prohibited practice when it falls under the uses described in Article 5(1)(h) AIA. While not clearly stated in this Article, it may be assumed that these specific uses of real-time RBI systems would then qualify as high-risk systems, particularly since Annex III classifies all RBI systems as high-risk. If so, both the specific rules contained in Article 5 AIA and those applicable to high-risk AI systems must be respected. With regard to the latter, Article 26(10) AIA establishes additional specific rules that apply solely to post RBI systems.

Differences may be observed between the specific rules applicable to real-time RBI and post RBI systems, notably regarding the authorisation procedure. Article 5(3) AIA states that a judicial authority or an independent administrative authority whose decision is binding must issue an authorisation prior to the use of real-time RBI systems. In exceptional cases of urgency, LEAs may use the technology without authorisation, provided that they request it within 24 hours. For post RBI systems, Article 26(10) AIA establishes that LEAs shall request authorisation from the same authorities either before using the system or no later than 48 hours after its use. Thus, no urgency requirement applies when authorisation is sought after the fact. Additionally, no authorisation is needed when the post RBI system is used for the identification of a potential suspect. Different approaches are also provided regarding the notification procedure involving the market surveillance authority (see Article 5(4) AIA for real-time RBI systems vs. Article 26(10) AIA for post RBI systems).
Thus, while more stringent rules are established for deploying real-time RBI systems, these rules remain largely similar to those applicable to post RBI systems – which the legislator did not intend to prohibit. In addition to these rules, the AIA establishes several general duties for providers and deployers of high-risk AI systems, including registration in an EU database for high-risk systems (Article 71 AIA). For systems used for law enforcement purposes, however, this registration is limited: providers and deployers shall register only a limited amount of information, in a non-publicly accessible part of the database (Article 49(4) AIA). Only the Commission and national supervisory authorities may access this restricted section, which significantly decreases the transparency and democratic accountability of the use of these systems. Deployers shall also conduct a fundamental rights impact assessment on the use of the systems (Article 27 AIA). However, civil society organisations have questioned whether these assessments will ultimately be meaningful. Criticism has been raised, for example, about the lack of involvement of external stakeholders in the impact assessment process – in contrast with the approach adopted for data protection impact assessments under the General Data Protection Regulation, which allows for broader stakeholder participation.
Consequently, neither the specific rules nor those provided for high-risk systems seem to offer sufficient safeguards, considering the impact these technologies may have on fundamental rights. Among the rights most affected are the right to respect for private life and to data protection, the right to non-discrimination, and the right to freedom of expression (see here and here); more broadly, one could also refer to the chilling effect phenomenon.
Conclusion
One of the main aims of the AIA, as publicly declared by the European Parliament, was to ban real-time RBI systems without exception, yet the Regulation failed to do so effectively. Indeed, due to the exceptions to the prohibition introduced via Article 5(1)(h) AIA, the European legislator ultimately introduced “a framework to allow for the widespread use of such AI systems” (see the EPRS study), including post RBI systems, which may be as invasive to fundamental rights as real-time RBI systems, if not more so. This framework may not only bring fragmentation among Member States but may also allow for a dangerous expansion of real-time RBI systems without sufficient safeguards.