On January 27 local time, 40 civil society organizations sent a joint letter to the Privacy and Civil Liberties Oversight Board (PCLOB), which advises the U.S. government, calling on the government to suspend the use of facial recognition technology pending further review.
The joint letter was drafted by the privacy advocacy group Electronic Privacy Information Center (EPIC). It cited a recent New York Times report on facial recognition: more than 600 U.S. law enforcement agencies are using the facial recognition system of the startup Clearview AI, which draws on a database of 3 billion images scraped from major websites.
The letter said: “On behalf of leading consumer, privacy, and civil liberties organizations, we urge PCLOB to recommend to the President and the Secretary of Homeland Security a suspension of facial recognition systems pending further review.”
The 40 signatories include the American Consumers Union, the Electronic Frontier Foundation, and the Electronic Privacy Information Center.
The letter argues that facial recognition technology is not only less accurate for people of color, but could also be used to “control minority populations and limit dissent.”
The letter cited a recent study by the National Institute of Standards and Technology, which evaluated 189 facial recognition algorithms from 99 developers and found that the software’s false positive rates for Asian and African American faces were up to 100 times higher than for white faces.
The letter states: “Although we do not believe that improved accuracy would justify further deployment of facial recognition systems, the apparent bias and discrimination in current systems are another reason we recommend a comprehensive suspension.”
The MIT Technology Review described the move as one of the most significant efforts to date to halt the deployment of facial recognition technology, and said the letter reflects increasingly negative public attitudes toward it.
The debate over whether facial recognition should be used in public places has begun to shift. Initial concerns focused on the technology’s accuracy and bias, but the MIT Technology Review noted that at the third congressional hearing on facial recognition in early January 2020, lawmakers began asking a different question: even if the technology were 100% accurate, should it be used at all?
Attitudes toward the large-scale deployment of facial recognition technology currently vary around the world.
In 2019, the cities of San Francisco, Somerville, and Oakland successively announced bans on local government use of facial recognition technology. In early 2020, the EU was reported to be considering a three-to-five-year ban on facial recognition in public places. In addition, Washington State is considering legislation to regulate private and corporate use of facial recognition in public places.
On the other hand, Seoul, South Korea, announced in early January 2020 that it plans to install 3,000 AI-powered cameras across the city to detect potential crimes. London’s Metropolitan Police likewise announced on January 24 that it would deploy facial recognition cameras throughout the city.