Criticism Mounts Over Trump’s Immigrant Biometric Data Plan


Written by Raoul Walawalker, political commentator for the Immigration Advice Service, an organisation of immigration lawyers based in the US, UK and Ireland

Just as the wearing or non-wearing of masks shows how polarised views across the US have become over the coronavirus pandemic, September revealed a similarly sharp divergence of opinion over immigration laws and the use of biometrics.

On September 11, the Department of Homeland Security (DHS) presented a proposed regulation for a major expansion in its collection and use of biometric data in the enforcement and administration of immigration laws, even as some states were announcing plans to ban or scale back their use of biometrics following growing concerns over privacy and evidence of racial and other in-built biases.

A draft of the proposal, seen ten days earlier by BuzzFeed News, had already stirred bafflement at the scale of the proposed data-gathering, as well as at the absence of any reasoned attempt to justify placing all immigrants (including minors, millions of legal immigrants and US sponsors) under unprecedented levels of surveillance and proof-of-identity burdens.

Currently, the US Citizenship and Immigration Services (USCIS) requires fingerprints, a photo and a signature from would-be immigrants over the age of 14. The new regulation would expand the required biometric data to include iris scans, palm and voice prints, photos for facial recognition and of physical or anatomical features (such as scars, skin marks, and tattoos) and DNA. 

Age would no longer be an obstacle to collecting such personal information, meaning data would also be taken from minors, and the proposal would allow the USCIS to seek the data not just from would-be immigrants but also green card holders and their US sponsors. Unsurprisingly, the proposal has provoked criticism on multiple levels.

With the proposal open for comment on the Federal Register for one month, nearly 5,000 comments were submitted, many opposing the plan as overreaching, financially unjustifiable and unsafe, on top of numerous objections over its disregard of human rights. Many commentators were particularly concerned about its treatment of children.

“Under the UN Convention on the Rights of the Child, coercing children to submit biometric data without explicit understanding and consent violates their rights and causes indefensible mental stress,” wrote one commentator.

“These actions put people in the US under undue suspicion, and make immigrants and their US citizen family members vulnerable to surveillance and harassment based on race, immigration status, nationality, and religion,” commented another. Many noted that the number of US sponsors is also likely to decline due to a reluctance to undergo such levels of vetting.

The proposal was labelled a step ‘closer to a dystopian nightmare’ by Andrea Flores, deputy director of immigration policy for the American Civil Liberties Union, while other commentators expressed concern over the new threshold it crosses in its level of personal data-gathering.

Equally disconcerting is how the proposal is being pushed forward in a way that seems either oblivious to, or simply in disregard of, the mounting concerns over evidence reflecting the system’s susceptibility to biases and inaccuracy, especially in regard to people of colour.

Campaign groups such as Privacy International have presented studies on how some governments are using their expanded data-gathering schemes as a way of increasing pressure on people legally trying to immigrate by boosting their burden of proof requirements, as well as stripping them of agency by allowing their data and their fates to be decided upon by algorithmic systems. 

Several studies have underlined significant levels of inaccuracy in algorithmic recognition technology, principally facial recognition, debunking the myth of technology’s neutrality by revealing tendencies towards racial stereotyping and profiling not unlike human prejudices.

A National Institute of Standards and Technology (NIST) study in December 2019, assessing facial recognition software from across the majority of the industry, highlighted a significant discrepancy in identity mismatches for Asian-American and African-American people, often at rates 10 to 100 times higher than the misidentification of whites.

Native American groups, women and children were also all shown to be more likely to be misidentified, African-American women in particular. The scope for misidentification and bias is such that the city of Portland, Oregon, decided in September to ban the technology’s use by both local police and private companies, and more limited bans are now in place in cities and states elsewhere.

Given the impact that just one recognition mistake could have on an individual’s life, a number of 2020 bills, such as the National Biometric Information Privacy Act, seek to restrict the technology nationwide. That alone should highlight the potential ordeals migrants would face if they were to fall victim to flawed software without the legal protections granted to citizens.

Added to that is the matter of data security: any breach, hardly unimaginable, would put millions potentially at risk of identity theft. This risk has grown with increasing data-sharing between government and private tech companies.

Having made immigration a pillar of his 2016 campaign and gained, through over 400 executive actions, a reputation as the most anti-migrant president in living memory, President Trump could be seen as making a parting bid to dismantle the immigration system with this proposal, or as ushering in a strange new era in which the future system will seem like a racially biased, lucid nightmare for anyone wanting to immigrate to the US or seek asylum there.
