Facial recognition software has become a common part of American life. It’s used by government employment agencies to verify applicants’ identities, by landlords to monitor tenants, and by police in their investigations, which has resulted in some wrongful arrests. Indeed, studies show that facial recognition algorithms are often less accurate at identifying women and people with dark skin tones. Privacy advocates concerned by law enforcement’s use of surveillance technology cheered Amazon’s recent decision to extend its moratorium on police use of the company’s facial recognition software, though Amazon gave no reason for the move. We’ll talk to Bay Area experts about how facial recognition technology is being used, why it needs to be closely monitored, and what cities, states and the federal government are doing, or not doing, to regulate its use.
Guests:
Matt Cagle, technology and civil rights attorney with the ACLU
Brian Hofer, chair and executive director, Secure Justice
Daniel E. Ho, Scott Professor of Law, Stanford University, and associate director, Stanford Institute for Human-Centered Artificial Intelligence
Tracy Rosenberg, executive director, Media Alliance