On the list of hot-button AI applications, facial recognition and surveillance are consistent topics of discussion, even as these technologies are regularly deployed in border zones worldwide. AI technologies deployed in border zones can identify travelers, evaluate their expressions, analyze their fingerprints, and more. These systems reduce the cognitive burden on human agents while identifying threats or incursions faster and more reliably. In many cases, national borders are the areas with the widest and most advanced deployments of AI; however, adoption is occurring faster than regulations, legislation, and other legal frameworks can keep up.
In particular, AI applications in border zones have been a hot topic in the European Union. In 2021, the EU released a report detailing the AI applications it is using, or considering using, at its borders. The paper makes clear that multiple AI solutions will be deployed at those borders; failing to do so would put member nations at a significant disadvantage. After all, the speed and security AI provides will bring efficiency to border zones, which can benefit both travelers and government agencies. How to deploy AI ethically, however, remains a significant concern worldwide.
The most significant ethical questions are: How can government agencies assure the public that data collection is limited to the context of border control? Who has access to the data? Is the information being retained for later use? These unanswered questions are a genuine concern globally, as innovation and adoption continue to outpace regulation. There are no universal standards for testing and validating AI, and no established framework for ethically collecting data from these systems. Additionally, bias and discrimination are ongoing issues with AI deployment, a particular risk in high-stakes border applications. For instance, a biased model might routinely flag a threat where none exists.
Despite these issues and challenges, AI is, and will continue to be, a key part of the global border-control strategy. This is why a standardized framework for developing, testing, and validating AI is so critical, and why it must be defined and implemented as soon as possible. These systems must be developed with broader ethics in mind, and such a framework would ensure that the technologies can be embraced fully and confidently.