Poster
LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition
Valeriia Cherepanova · Micah Goldblum · Harrison Foley · Shiyuan Duan · John P Dickerson · Gavin Taylor · Tom Goldstein
Keywords: [ adversarial attacks ] [ facial recognition ]
Facial recognition systems are increasingly deployed by private corporations, government agencies, and contractors for consumer services and mass surveillance programs alike. These systems are typically built by scraping social media profiles for user images. Adversarial perturbations have been proposed for evading facial recognition systems, but existing methods fail on full-scale systems and commercial APIs. We develop our own adversarial filter that accounts for the entire image processing pipeline and is demonstrably effective against industrial-grade pipelines that include face detection and large-scale databases. Additionally, we release an easy-to-use web tool that reduces the accuracy of both Amazon Rekognition and the Microsoft Azure Face Recognition API to below 1%.
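To illustrate the general idea of an evasion perturbation against a face recognition system, here is a minimal sketch of a PGD-style attack that pushes an image's embedding away from its clean embedding under an L-infinity budget. This is not the authors' LowKey filter (which accounts for the full detection-plus-recognition pipeline); the `ToyEmbedder` network, the hyperparameters, and the `evasion_perturbation` helper are placeholder assumptions standing in for a real face recognition backbone.

```python
import torch
import torch.nn as nn

# Placeholder embedding network standing in for a real face recognition
# backbone; any nn.Module mapping images to unit-norm feature vectors would do.
class ToyEmbedder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim),
        )

    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=-1)


def evasion_perturbation(model, image, eps=0.05, steps=40, step_size=0.01):
    """PGD-style attack: push the image's embedding away from the clean
    embedding while staying inside an L-infinity ball of radius eps."""
    model.eval()
    with torch.no_grad():
        clean_emb = model(image)

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        adv_emb = model(image + delta)
        # Ascending this loss decreases cosine similarity to the clean embedding.
        loss = -nn.functional.cosine_similarity(adv_emb, clean_emb, dim=-1).mean()
        loss.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()
            delta.clamp_(-eps, eps)
            # Keep the perturbed image in the valid pixel range [0, 1].
            delta.copy_((image + delta).clamp(0, 1) - image)
        delta.grad.zero_()
    return (image + delta).detach()


if __name__ == "__main__":
    model = ToyEmbedder()
    img = torch.rand(1, 3, 112, 112)  # stand-in for a face crop
    protected = evasion_perturbation(model, img)
    # Lower similarity means the protected photo no longer matches the original.
    print(nn.functional.cosine_similarity(model(img), model(protected), dim=-1))
```

A sketch like this only attacks a single embedding model; the paper's contribution is making such perturbations survive an industrial pipeline, including face detection, preprocessing, and matching against a large gallery.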