Face Verification using Domain Adaptation
Date
2022-01-01
Department
Computer Science and Electrical Engineering
Program
Computer Science
Rights
This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please see http://aok.lib.umbc.edu/specoll/repro.php or contact Special Collections at speccoll(at)umbc.edu
Distribution Rights granted to UMBC by the author.
Access limited to the UMBC community. Item may possibly be obtained via Interlibrary Loan through a local library, pending author/copyright holder's permission.
Abstract
Face verification is a one-to-one matching technique that compares a pair of faces to determine whether they correspond to the same identity. It is commonly used in biometric applications, including mobile computing, as well as in law enforcement and surveillance. However, surveillance imagery is often blurry, whereas mobile phone portraits are typically relatively clear. Moreover, commonly employed face datasets often contain well-lit faces captured from high-definition video, which are unrealistically clear compared with many use cases in the wild. Variation in image quality and lighting leads to substantial domain shift and performance degradation for deep face verification algorithms: techniques trained and tested on clear imagery often perform poorly in blurry and poorly lit environments. We present a novel approach that uses unsupervised domain adaptation to construct a face verification model robust to variation in image clarity. Unsupervised domain adaptation allows the model to be trained on labeled imagery from a source domain together with unlabeled imagery from the target domain, so that it learns domain-invariant feature representations. Our methodology incorporates a pre-trained Inception Resnet V1 into a Siamese Domain Invariant Feature Learning (Si-DIFL) meta-architecture. To the best of our knowledge, our proposed approach is the first to combine domain-invariant feature learning with Siamese networks, and the first to apply this combination to face verification. This experiment sheds light on the difficulty of the problem space of blurry face recognition, as well as the brittleness of present-day face verification systems under variation in image quality.
We demonstrate that domain adaptation using Si-DIFL is a viable approach toward improving the robustness and reducing the brittleness of deep face verification algorithms.
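The two ingredients named above can be sketched in miniature: a shared-weight (Siamese) embedding used for one-to-one verification, plus a domain-alignment penalty computed on unlabeled target-domain embeddings. This is a minimal illustrative sketch, not the thesis's implementation: `embed` stands in for the pre-trained Inception Resnet V1 backbone, and a linear-kernel MMD term is assumed as one possible instantiation of the domain-invariant feature learning objective.

```python
import numpy as np

def embed(images, W):
    # Stand-in for a pre-trained backbone (e.g. Inception Resnet V1):
    # a single linear map followed by L2 normalization of each row.
    z = images @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def verify(faces_a, faces_b, W, threshold=0.5):
    # Siamese verification: the SAME weights W embed both faces, and a
    # cosine-similarity threshold decides "same identity" vs "different".
    za, zb = embed(faces_a, W), embed(faces_b, W)
    cos = np.sum(za * zb, axis=1)
    return cos > threshold

def domain_alignment_penalty(source_z, target_z):
    # Linear-kernel maximum mean discrepancy between the mean embeddings
    # of the labeled source domain and the unlabeled target domain.
    # Minimizing this alongside the verification loss pushes the encoder
    # toward domain-invariant features.
    diff = source_z.mean(axis=0) - target_z.mean(axis=0)
    return float(diff @ diff)
```

In a full training loop, the verification loss on labeled source pairs and the alignment penalty on unlabeled target images would be summed (with a weighting hyperparameter) and minimized jointly; the snippet only shows the forward computations.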