This article belongs to the debate » The EU AI Act’s Impact on Security Law
12 December 2024

EU’s AI Act and Migration Control. Shortcomings in Safeguarding Fundamental Rights

Introduction

In Europe, technology is increasingly being used for the purpose of border and migration control. EU law, including the new Migration and Asylum Pact, and extensive EU funding of border technologies encourage Member States to distinguish as early as possible between so-called "bona fide" and "mala fide" migrants. The current trend of techno-solutionism affects non-EU citizens or third-country nationals in particular and entails risks for their rights to privacy, data protection, and access to legal remedies. Furthermore, the rise of digital borders raises concerns with regard to the right to non-discrimination and the prohibition of racial profiling. In 2021, the European Parliamentary Research Service (EPRS) identified different applications of algorithms, including automated risk assessment and decision-making, as well as surveillance via drones or (possibly in the future) lie detectors. When assessing the impact of the new AI Regulation (hereafter AI Act) in this field, it is necessary to differentiate between rule-based algorithms on the one hand and data-based, machine-learning algorithms on the other. Only the latter fall within the scope of the definition in Article 3 AI Act, which covers systems that can operate with a certain level of autonomy on the basis of the input they receive (a schematic illustration of this distinction follows below). In this contribution, I will argue that the new AI Act, while adding some safeguards, falls short of sufficiently protecting fundamental rights in the border and migration context because of its blanket exceptions and its failure to classify certain migration-related AI systems as high-risk.
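To make this scope distinction concrete, the following Python sketch contrasts a fixed, human-authored screening rule with a toy classifier that infers its decision boundary from training data. It is purely illustrative: all features, thresholds, and data are hypothetical and do not depict any real border-control system. Only something like the second component resembles the kind of system covered by the Article 3 definition.

```python
# Illustrative contrast between rule-based and data-driven screening logic.
# All features, thresholds, and data are hypothetical; this does not depict
# any real border-control system.

def rule_based_check(visa_valid: bool, watchlist_hit: bool) -> str:
    # Fixed, human-authored logic: every outcome is specified in advance,
    # so nothing is inferred from data in the Article 3 sense.
    if not visa_valid or watchlist_hit:
        return "refer to manual check"
    return "clear"


class NearestCentroidScorer:
    """Toy machine-learning classifier: it derives its decision boundary
    from training examples rather than from explicitly authored rules."""

    def fit(self, samples: list[list[float]], labels: list[int]) -> None:
        # Compute one mean vector (centroid) per class label.
        self.centroids = {}
        for label in set(labels):
            rows = [s for s, l in zip(samples, labels) if l == label]
            self.centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]

    def predict(self, sample: list[float]) -> int:
        # Assign the class whose centroid is closest in squared distance.
        def sq_dist(centroid: list[float]) -> float:
            return sum((a - b) ** 2 for a, b in zip(sample, centroid))
        return min(self.centroids, key=lambda lbl: sq_dist(self.centroids[lbl]))


print(rule_based_check(visa_valid=True, watchlist_hit=False))   # "clear"

scorer = NearestCentroidScorer()
scorer.fit([[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]], [0, 0, 1, 1])
print(scorer.predict([0.85, 0.95]))   # 1: behaviour derived from data
```

The rule-based function will always behave exactly as written, whereas the classifier's behaviour changes with the data it is trained on; it is this element of inference from input that the Article 3 definition targets.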

Exception for large-scale migration databases and general obligations

For large-scale IT systems involving the use of AI, the requirements of the AI Act will only start to apply by December 31, 2030 (Article 111 AI Act). According to Annex X of the AI Act, this includes the already operational Schengen Information System, Eurodac and Visa Information System, as well as ETIAS, the Entry/Exit System (EES) and ECRIS-TCN, for which legislation has been adopted. This means that AI used as part of these systems does not have to be compliant with the AI Act for a very long time. Apart from the specific rules in the AI Act that already apply to other AI tools within the border and migration context, there are general obligations which must also be observed with regard to the aforementioned migration databases. First, it is important that in recital 60 of the AI Act, the EU legislator acknowledges that these measures affect persons who are often in particularly vulnerable positions and who are dependent on the outcome of the actions of the competent public authorities. Second, according to the same recital, AI systems in the field of migration, border, and asylum management should "in no circumstances, be used by Member States or EU institutions, bodies, offices or agencies to circumvent their international obligations" under the UN Refugee Convention, including the 1967 Protocol. Nor should these systems be used in violation of the non-refoulement principle or "to deny safe and effective legal avenues into the territory of the Union, including the right to international protection". Third, the AI Act only provides additional protection to the fundamental rights of individuals. This means that national and EU authorities must always observe, also within the context of large-scale databases, rights such as data protection, non-discrimination, and effective legal protection as protected in the ECHR, the Charter of Fundamental Rights of the EU (CFR), and the General Data Protection Regulation (GDPR) or the Law Enforcement Directive (LED).

Prohibited systems

The AI Act distinguishes between AI that is banned altogether (Article 5) and high-risk systems (Article 6). At least four of the prohibitions in Article 5 may become relevant within the border and migration context. First, this includes the prohibition of AI-based risk assessments predicting the risk of a natural person committing a criminal offence. This prohibition is relevant within the context of the Schengen Borders Code or Visa Code, which allow the denial of entry or of a visa to a third-country national who is considered to be a threat to public policy or internal security. According to the AI Act, such a decision cannot be based on AI-based pre-risk assessments, unless the person is already under suspicion of committing or having committed criminal acts. A second ban concerns AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage. This prohibition means that existing EU large-scale data systems on third-country nationals may not be expanded in any way by untargeted scraping. Scraping of publicly available data is, for example, used by the US government to track and eventually deport undocumented migrants. Third, the AI Act prohibits AI systems that individually categorise natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation. This prohibition must be taken into account where Member States increasingly rely on the use of technologies to test the reliability and identity of asylum seekers. A fourth ban concerns real-time remote facial recognition in publicly accessible spaces for the purpose of law enforcement. This ban applies to systems that can remotely identify individuals via cameras using biometric data stored in national or EU systems, except, among other exceptions, when used for the targeted detection of victims of kidnapping, human trafficking or sexual exploitation, or for the search for missing persons.

Examples in China and the UK, but even research instigated by Frontex, illustrate that a more extensive use of remote identification is not a merely dystopian scenario. Although, with regard to the law enforcement use of EU large-scale data systems, this ban only becomes applicable at the end of 2030, Member States and EU agencies are of course, as mentioned, bound by the existing human rights framework. Relevant in this regard is the judgment in Glukhin v. Russia, in which the ECtHR, on the basis of Article 8 ECHR, defined restrictions on the use of video surveillance and facial recognition technologies in public areas.

High-risk systems

Particularly relevant for the field of migration and border control is the classification of high-risk systems in Article 6 of the AI Act. If an AI system is classified as high-risk (as outlined in Annex III of the Regulation, a list which can be updated by the Commission), it must comply with a multitude of rules dealing with risk management, transparency, human oversight and cybersecurity. These rules include obligations regarding data quality and training (Article 10), notification procedures (Article 30), registration in a (partially public) EU database (Articles 49 and 71), and, based on a proposal by the European Parliament, a prior fundamental rights impact assessment (Article 27). In the case of remote biometric identification systems, the output must be separately verified and confirmed by at least two natural persons with "the necessary competence, training and authority" before any measures or decisions may be taken (Article 14(5); see the sketch below). Furthermore, Article 86 of the AI Act provides for an individual right of explanation for the person in respect of whom a decision is made based on the output of a high-risk system. This means that, with regard to decisions producing legal effects or significantly affecting someone's health, safety or fundamental rights, a person shall have the right to obtain from the deployer "clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken". What constitutes a "clear and meaningful explanation" is not further defined, which leaves much leeway to deployers. However, this provision must be read within the context of the right to effective judicial protection, safeguarded by Article 47 CFR. In R.N.N.S. and K.A. v Minister van Buitenlandse Zaken (C-225/19 and C-226/19) and ZZ v. SSHD (C-300/11), the CJEU emphasised the obligation of informed decision-making for border and immigration authorities, ensuring individuals access to effective remedies. In Ligue des droits humains (C-817/19), addressing automated risk assessment on the basis of the PNR system, the CJEU found that the individual at stake must be able to understand "how those criteria and those programs work" so as to decide "with full knowledge of the relevant facts" whether or not to challenge the unlawful and, in particular, discriminatory nature of these criteria (para. 201). In the same judgment, the CJEU prohibited the use of self-learning systems for the determination of risk profiles, underlining the risks of bias and discrimination.
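To illustrate how a deployer might operationalise the Article 14(5) "four-eyes" verification in software, consider the following minimal Python sketch. It is a hypothetical illustration under stated assumptions, not a prescribed implementation: all class, field, and identifier names are invented, and a real system would add audit logging, access control, and checks on reviewer competence.

```python
from dataclasses import dataclass, field

@dataclass
class Verification:
    reviewer_id: str
    confirmed: bool

@dataclass
class BiometricMatch:
    subject_ref: str
    ai_output: str                      # e.g. "match", as produced by the system
    verifications: list[Verification] = field(default_factory=list)

    def add_verification(self, reviewer_id: str, confirmed: bool) -> None:
        # "Separately verified and confirmed by at least two natural persons"
        # implies the confirmations must come from distinct reviewers.
        if any(v.reviewer_id == reviewer_id for v in self.verifications):
            raise ValueError("verification requires distinct natural persons")
        self.verifications.append(Verification(reviewer_id, confirmed))

    def actionable(self) -> bool:
        # No measure or decision may be taken on the basis of the identification
        # unless at least two reviewers have confirmed it (cf. Article 14(5)).
        return sum(v.confirmed for v in self.verifications) >= 2

match = BiometricMatch(subject_ref="application-42", ai_output="match")
match.add_verification("officer-A", confirmed=True)
print(match.actionable())   # False: one confirmation is not enough
match.add_verification("officer-B", confirmed=True)
print(match.actionable())   # True: two distinct natural persons confirmed
```

The design point is that the AI output itself never triggers a measure: actionable() only returns True once two distinct reviewers have independently confirmed the identification.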

High-risk systems for migration, asylum and border management

Annex III of the AI Act distinguishes between four types of high-risk systems within the context of migration, asylum and border control management. First, the EU legislator defined "polygraphs or similar tools" used by or on behalf of national administrations or Union institutions, bodies or agencies as high-risk systems. This is despite urgent calls by organisations, including the EDPS and the EDPB, to prohibit the use of lie detectors altogether. Polygraphs have already been tested in EU-funded research projects, including iBorderCtrl and Tresspass. iBorderCtrl involved a pilot in which so-called "pre-travel" registration interviews were conducted by "avatar" border guards using an automated deception detection system. This system could produce a risk score indicating whether the applicant was lying, based on an analysis of nonverbal behaviour and fine-grained micro-movements. In Patrick Breyer v. European Research Executive Agency (REA), the CJEU endorsed the justification of limited disclosure of the results of this pilot on the basis of the commercial interests of the parties involved in the research consortium. This lack of transparency is problematic, also in light of warnings by experts that lie detectors are based on pseudo-science (here, here, and here).

A second category of high-risk systems concerns AI tools used to assess security, irregular migration, or health risks posed by a person seeking entry into, or having entered, the territory of a Member State. Automated risk assessment will be used from 2025 onwards, when the ETIAS system becomes operational. The ETIAS Regulation requires visa-exempt third-country nationals to apply for a travel authorisation for the European Union before their departure. Based on data to be provided by the applicant, profiles are created which are automatically compared with risk indicators identifying potential irregular migration, security, or public health risks. If the comparison produces a hit, the application in question must be processed manually by the national ETIAS unit of the responsible Member State (see the sketch after this paragraph). So far, this does not involve AI; however, a report by eu-LISA, the EU agency in charge of managing large-scale data systems, suggests the use of AI or machine-learning systems to detect "suspicious requests". A third high-risk category concerns AI systems to assist public authorities in the examination of applications for asylum, visa or residence permits and for associated complaints with regard to the eligibility of individuals applying for a status, including related assessments of the reliability of evidence. This category applies to technologies used by Member States for testing the reliability of asylum claims, such as language recognition systems. It may also become relevant for the further categorisation of asylum seekers at the external borders, as provided for in the Screening Regulation. As a fourth category of high-risk systems in the migration, asylum and border context, Annex III mentions AI tools for the purpose of detecting, recognising, or identifying natural persons, with the exception of travel document verification. At EU level, Automated Border Control (ABC) is currently used for automated checks at the border, comparing the face of the person requesting entry not only with the photo stored in the identity document but also with biometric data stored in large-scale data systems. This, again, does not yet include AI, but a Frontex report suggests AI "as a capability to detect potential threats, such as face presentation and morphing attacks" (these are techniques by which a traveller intentionally attempts to be misidentified or misclassified by the biometric recognition system).
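The hit-based routing of ETIAS screening described above can be summarised in a few lines of Python. This is a deliberately schematic sketch: the indicator names, predicates, and profile fields are invented for illustration, and the essential point, reflecting the ETIAS Regulation, is that an automated hit leads to manual processing, never to an automated refusal.

```python
from typing import Callable

# Hypothetical risk indicators: each maps an applicant profile to a
# hit/no-hit outcome. Real ETIAS indicators are defined in EU law and
# are not public in this form.
RISK_INDICATORS: dict[str, Callable[[dict], bool]] = {
    "document_flagged": lambda p: p.get("document_flagged", False),
    "prior_overstay": lambda p: p.get("prior_overstays", 0) > 0,
    "health_alert": lambda p: p.get("health_alert", False),
}

def screen_application(profile: dict) -> str:
    # Automated comparison of the applicant's profile against the indicators.
    hits = [name for name, pred in RISK_INDICATORS.items() if pred(profile)]
    if hits:
        # Any hit routes the file to a human caseworker; the system itself
        # cannot refuse the application.
        return "manual processing by national ETIAS unit (hits: " + ", ".join(hits) + ")"
    return "no hit: automated processing continues"

print(screen_application({"prior_overstays": 1}))
print(screen_application({"document_flagged": False, "prior_overstays": 0}))
```

Replacing the fixed predicates in RISK_INDICATORS with a trained model, as the eu-LISA report suggests, would turn this rule-based flow into an AI system within the meaning of the AI Act and trigger the high-risk obligations discussed above.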

AI tools which are not classified as high risk

It is questionable why some AI tools in the border and migration context are not (yet) included in Annex III. This includes, for example, the use of AI to monitor, analyse, and forecast migration trends and security threats. Such use has been explored by Frontex to support coordinated operations at the external borders. Similarly, according to the 2020 Asylum Report, the EUAA (formerly EASO) applied "machine learning to analyse big data on conflict and disruptive events in countries of origin and transit to clarify the root causes of individual displacement events". Other AI systems supporting border monitoring or surveillance are being investigated in EU-funded research projects such as Roborder, Promenade and Odysseus. The classification as a high-risk system depends, among other things, on the extent of the adverse effects caused by the AI system on fundamental rights protected by the Charter (recital 48). At first glance, systems identifying migration trends on the basis of open-source information seem less intrusive on individual rights. However, such use may involve high risks for the fundamental rights and freedoms of individuals and groups, including data protection and freedom of expression, as underlined by the EDPS in its report on EASO's social media monitoring. AI tools designed for mobile data extraction, as well as for dialect and language recognition, are likewise not defined as high-risk AI systems, despite privacy concerns and questions about their reliability. Finally, another example not identified in Annex III concerns the use of AI for the resettlement of refugees. This involves projects investigating whether data- and AI-based "matching" between host country and refugee can improve the integration of the latter. Such matching tools also have an impact on fundamental rights, as they may involve biased and stigmatising decision-making (see also Meltem Ineli-Ciger).

Exceptions

The AI Act includes some explicit exceptions to the safeguards for high-risk systems in the migration context. For example, the requirement that human supervision must involve separate verification by at least two natural persons does not apply to AI systems used for law enforcement and migration, border or asylum management, "where Union or national law considers the application of this safeguard to be disproportionate" (Article 14(5)). The AI Act does not identify in which situations this requirement would be disproportionate or which interests must be balanced against each other (recital 73 only refers to the "specificities" of the areas at stake). In addition, high-risk systems within the aforementioned fields do not have to be registered in the publicly accessible part of the EU database (Article 49(4)). They are also exempted from the requirement to publish a brief summary of the AI development project, including its objectives and expected results, on the website of the competent authority (Article 59(1)(j)). These exceptions add to the already existing lack of transparency concerning the use of migration, border, and asylum technologies. Furthermore, the AI Act does not include a clear framework for holding the different actors involved to account, which may also hamper individuals' right to redress.

Conclusion

The implementation of the AI Act imposes restrictions on the development and use of AI within the migration, border and asylum context. The prohibition of specific AI systems in Article 5 of the AI Act should guide both IT developers and EU and national legislators when considering new instruments within this field. Furthermore, the classification of specific tools as high-risk systems means they are subject to stricter rules, including a prior fundamental rights impact assessment, continuous monitoring of their development and use, and an individual right of explanation. In general, however, the AI Act fails to protect migrants because of the various exceptions to other safeguards, such as human oversight and transparency mechanisms, and the fact that AI used in large-scale data systems on migrants will only have to comply with the AI Act by the end of 2030. Furthermore, the EU legislator has chosen to regulate the controversial use of polygraphs as high-risk systems instead of prohibiting their use. As such, the AI Act still leaves EU agencies and Member States broad discretion to use and experiment with AI tools when making decisions about migrants, including asylum seekers and refugees.

The protection offered by the AI Act, as well as its exceptions, cannot, however, be separated from the general framework of fundamental rights as protected by the CFR and the ECHR, including the right to privacy and data protection (Articles 7 and 8 CFR, Article 8 ECHR), the right to non-discrimination (Article 21 CFR, Article 14 ECHR), and the right to effective judicial protection (Article 47 CFR, Article 13 ECHR). Nor does the AI Act set aside the applicability of the GDPR and the LED within this field. This legal framework, together with the case-law of both European courts, must continue to be observed when developing and applying algorithmic technologies (whether machine-learning-based or not) in the field of border, migration, and asylum policies.

This contribution builds on a blog post by Jarik ten Cate and Evelien Brouwer published on verblijfblog.nl.


SUGGESTED CITATION  Brouwer, Evelien: EU's AI Act and Migration Control. Shortcomings in Safeguarding Fundamental Rights, VerfBlog, 2024/12/12, https://verfassungsblog.de/eus-ai-act-and-migration-control-shortcomings-in-safeguarding-fundamental-rights/, DOI: 10.59704/a4de76df20e0de5a.
