Projects 2021

Spring 2021
Detection of Over-Rotated Biometric Images and Incorrect Labeling of Fingerprints
M.G. Sarwar Murshed (CU), Keivan Bahmani (CU), Stephanie Schuckers (CU), Faraz Hussain (CU), Deen Dayal Mohan (UB), Nishant Sankaran (UB), Srirangaraj Setlur (UB)
Biometric matching systems that rely on face, fingerprint, or iris images are inherently vulnerable to rotations of the source images, which reduce matching performance. In this work, we propose a two-pronged approach to reduce the effect of over-rotation. First, we will use the human-annotated bounding boxes in our previously acquired dataset to rotate each fingerprint in slap images, simulating the effect of over-rotated fingerprints. We will then develop and train an instance segmentation model [3] on this new dataset, which contains both naturally and synthetically over-rotated fingerprints as well as out-of-order fingerprints, to produce a model robust to over-rotation and incorrect fingerprint labeling. Second, we will design a self-supervised learning (SSL) approach to train a CNN to detect the rotation angle of biometric features in an image. The approach, similar to contrastive learning [1], takes an image and its rotated variant as input, computes their embeddings with an encoder network, and uses a projector network to predict the rotation applied to the image. Landmarks in the biometric image can serve as additional supervision signals to improve the accuracy of the projector network. Using SSL, we can leverage effectively unlimited training data, without the need for large amounts of annotation, to train a robust rotation detection model.
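A minimal sketch of the SSL rotation-prediction step described above, assuming a PyTorch setup; the ResNet-18 backbone, projector sizes, and angle range are illustrative placeholders, not the project's actual design.

```python
# Minimal sketch of SSL rotation prediction (assumed PyTorch setup).
# Backbone, projector sizes, and the +/-45 degree range are placeholders.
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms.functional as TF

encoder = nn.Sequential(*list(models.resnet18(weights=None).children())[:-1])  # -> (B, 512, 1, 1)
projector = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 1))  # predicts angle

optimizer = torch.optim.Adam(list(encoder.parameters()) + list(projector.parameters()), lr=1e-4)

def train_step(images):
    # Sample a random rotation per image; the rotation itself is the free label.
    angles = torch.empty(images.size(0)).uniform_(-45.0, 45.0)
    rotated = torch.stack([TF.rotate(img, float(a)) for img, a in zip(images, angles)])
    z1 = encoder(images).flatten(1)   # embedding of the original image
    z2 = encoder(rotated).flatten(1)  # embedding of the rotated variant
    pred = projector(torch.cat([z1, z2], dim=1)).squeeze(1)
    loss = nn.functional.mse_loss(pred, angles)  # regress the applied angle
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```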

Differential Performance Mitigation in Face Recognition Based on a Novel Skin Reflectance Estimate
Stephanie Schuckers (CU), Mahesh Banavar (CU), Sebastien Marcel (Idiap)
While the use of face recognition (FR) systems is widespread, there is growing concern that FR models exhibit differential performance across demographic groups. In our earlier work, we proposed the Skin Reflectance Estimate based on Dichromatic Separation (SREDS) measure to accurately represent a subject’s skin tone from a single image, with greater robustness to variability in illumination and without requiring a constant background. In this work, we aim to develop a novel bias mitigation model by incorporating SREDS into existing FR technologies such as SensitiveNets [1]. The main advantage of the SREDS measure is that it is robust to illumination change and does not rely on subjective skin tone measures or self-reported ethnicity labels.
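A highly simplified illustration of the dichromatic-separation idea underlying SREDS, under the textbook assumption that each skin pixel mixes a diffuse (body) and a specular (interface) reflection component; this NMF-based sketch is an assumption-laden stand-in, not the published SREDS algorithm.

```python
# Simplified dichromatic-separation illustration (NOT the published SREDS method).
# Assumes the dichromatic reflection model: I(x) = m_b(x)*c_b + m_s(x)*c_s.
import numpy as np
from sklearn.decomposition import NMF

def skin_tone_estimate(skin_pixels_rgb):
    """skin_pixels_rgb: (N, 3) array of RGB values from a skin-region mask."""
    # Factor pixel intensities into two non-negative components; one tends to
    # track the illuminant (specular), the other the body/diffuse color.
    model = NMF(n_components=2, init='nndsvda', max_iter=500)
    weights = model.fit_transform(skin_pixels_rgb.astype(np.float64))  # (N, 2)
    components = model.components_                                     # (2, 3)
    # Take the component least aligned with the illuminant (assumed ~white)
    # as the diffuse/body color; its mean weight summarizes skin reflectance.
    white = np.ones(3) / np.sqrt(3)
    cos = components @ white / np.linalg.norm(components, axis=1)
    diffuse_idx = int(np.argmin(cos))
    return weights[:, diffuse_idx].mean() * components[diffuse_idx]
```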

Evaluation of the Performance of Multi-Finger Contactless Fingerprint Matching
Jeremy Dawson (WVU)
Our group has explored several approaches to overcome the interoperability challenges of matching contactless fingerprints against a contact gallery, including neural network-based matching methods that remove photometric and elastic distortion, create ‘synthetic’ contact-based images, and, most recently, perform deep deblurring of the contactless fingerprint image (fingerphoto). All of these methods to date have focused on improving identification accuracy using only a single fingerprint. In the work proposed here, we will evaluate the performance of contactless fingerprints matched against a contact gallery when two or more fingers are combined via fusion. We will evaluate single- and multi-finger match performance using COTS and academic deep learning approaches, with performance measured using ROC and DET plots and EER and AUC scores.
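A minimal sketch of score-level fusion across fingers and the EER computation used in such evaluations; the mean (sum-rule) fusion and the array layout are illustrative choices, not the project's committed design.

```python
# Sketch: sum-rule fusion of per-finger match scores and EER computation.
# The score layout and the choice of the mean rule are illustrative assumptions.
import numpy as np
from sklearn.metrics import roc_curve

def fuse_scores(per_finger_scores):
    """per_finger_scores: (n_pairs, n_fingers) similarity scores."""
    return per_finger_scores.mean(axis=1)  # simple sum/mean rule

def equal_error_rate(labels, scores):
    """labels: 1 = genuine pair, 0 = impostor pair."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))  # operating point where FNR ~= FPR
    return (fpr[idx] + fnr[idx]) / 2

# Example comparison (arrays assumed): fusing two fingers typically lowers EER.
# eer_single = equal_error_rate(y, scores[:, 0])
# eer_fused  = equal_error_rate(y, fuse_scores(scores[:, :2]))
```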

Open-Source Face-Aware Capture with Anti-Spoofing for Securing Passport Photo Capture
Masudul Imtiaz (CU), S. Hossain (CU), Keivan Bahmani (CU), Stephanie Schuckers (CU)
Low-quality face images are often provided for passport application or renewal, and can be very difficult to use for later identification. An individual may even submit a spoofed photo. After being captured at a kiosk in a post office or other dedicated facility, a face image could also be digitally tampered with or stolen. A presentation attack uses a facial biometric artifact to prevent the face recognition system from assigning the correct identity to a face, further compounding the complexity of the recognition procedure. Hence, it is necessary to embed "intelligence" into the face image capture hardware to perform real-time face quality assessment, select the best images, verify the legitimacy of a presented image, detect presentation attacks, embed a digital signature at the time of capture, etc. Responding to these needs, we propose to develop an open-source, smart, face-aware capture system with anti-spoofing capabilities. The proposed system will be built on an NVIDIA Jetson Nano, a single-board computer with an onboard GPU, interfaced with two customized COTS 12MP RGB cameras that can be adjusted to capture face images across all age groups (2yr+). Two deep neural networks will be hosted on the embedded Jetson processor: (1) an Image Quality Classification (IQC) method, employing a modified MobileNetV2, that screens captured images and performs quality assessment of captured video (30 fps) to verify generic parameters (clarity, contrast, illumination, etc.) and structural parameters (eye position, directedness, etc.); and (2) a 2D/3D presentation attack detection method based on synchronized image capture from the stereo camera pair. The highest-quality images will be selected and encrypted with digital signatures containing critical metadata such as the time and place of capture and localized face information (i.e., the structure and correlation of facial components). Along with the complete hardware solution, open-source software will be released so that individuals can self-install the system at their own facility after purchasing a Jetson board and COTS cameras.
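A minimal sketch of the IQC idea, assuming a PyTorch/torchvision deployment on the Jetson; the quality-class taxonomy and head dimensions are illustrative placeholders rather than the system's actual configuration.

```python
# Sketch: MobileNetV2-based image quality classification (IQC).
# The quality classes and head dimensions are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_QUALITY_CLASSES = 3  # e.g., reject / usable / high-quality (placeholder taxonomy)

iqc = models.mobilenet_v2(weights=None)
iqc.classifier[1] = nn.Linear(iqc.last_channel, NUM_QUALITY_CLASSES)  # replace final layer

@torch.no_grad()
def assess_frame(frame_bchw):
    """frame_bchw: normalized (1, 3, H, W) tensor from the 30 fps capture stream."""
    iqc.eval()
    probs = iqc(frame_bchw).softmax(dim=1)
    return probs.argmax(dim=1).item(), probs.max().item()  # predicted class, confidence
```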

Multi-Modal Gait and Anthropometric Data Collection
Karthik Dantu (UB), Srirangaraj Setlur (UB)
There is increasing interest in human activity datasets captured in real-world scenarios. However, due care is needed to collect well-annotated data that can be used for training various downstream AI applications. We will collect biometric data, including face, gait, and body measurements, in both indoor and outdoor settings. We will use multiple sensors, including visible-light and EO/IR captures, and collect imagery from roof-mounted, man-portable, and drone-mounted cameras. We will develop automated ways of annotating the data, including a smartphone app that will help track the subject and instrumentation that will identify gait keypoints as well as face keypoints for face and whole-body identification tasks. PI Dantu maintains the SMART CoE motion capture lab, where such datasets can be captured with mm-accuracy ground truth. UB also has an outdoor drone facility for ground-truthed data collection in outdoor settings. SMART CoE also has FLIR IR cameras. PI Dantu’s lab has several UAVs and regularly performs experimental research in UAV planning and coordination.

Biometrics at Scale: Generating Large-Scale Synthetic Fingerprint and Iris Datasets
Arun Ross (MSU), Anil K. Jain (MSU)
The goal of this project is to (a) develop techniques to generate synthetic biometric images, and (b) assemble large-scale synthetic biometric datasets for research and evaluation purposes. While biometrics has made rapid strides over the past two decades, researchers continue to be stymied by the absence of large-scale datasets for algorithm development and performance evaluation. Further, recently implemented privacy laws, including GDPR, have restricted access to operational biometric data, precluding the use of such data in research projects. To address these limitations, we will develop techniques to generate synthetic fingerprint and iris images using advanced Generative Adversarial Networks (GANs) and anatomical models describing these two biometric cues. In particular, we will take into account the following factors: (a) Realism: the generated images must resemble real biometric data in their anatomical details; (b) Operational Relevance: the generated datasets must vary according to specified operational conditions, including factors such as data quality, sensor used, environmental conditions, and the age, gender, and race of subjects; (c) Subject Uniqueness: the proposed methods must be able to generate unique identities with different intra-subject and inter-subject characteristics; (d) Demographic Diversity: the synthetic datasets must capture the demographic diversity expected in target applications; (e) Scalability: the proposed methods must be able to generate a large number of distinct identities, on the order of millions, within a reasonable amount of time and computational resources.
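For readers unfamiliar with the underlying machinery, a minimal sketch of the adversarial training loop at the core of GAN-based synthesis, assuming PyTorch; the toy fully-connected architectures are placeholders far simpler than the advanced GANs and anatomical models the project will use.

```python
# Sketch: core GAN training loop for synthetic biometric image generation.
# Toy architectures; real fingerprint/iris GANs would be far more elaborate.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(128, 1024), nn.ReLU(), nn.Linear(1024, 64 * 64), nn.Tanh())
D = nn.Sequential(nn.Linear(64 * 64, 1024), nn.LeakyReLU(0.2), nn.Linear(1024, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(real_images):  # real_images: (B, 64*64), scaled to [-1, 1]
    b = real_images.size(0)
    fake = G(torch.randn(b, 128))
    # Discriminator: push real toward 1, fake toward 0.
    d_loss = bce(D(real_images), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator.
    g_loss = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```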

Investigating the Effect of Acoustic Coupling on Ultrasonic Fingerprint Imaging
Jun Xia (UB)
Over the past few years, ultrasonic fingerprint sensors have seen increasing adoption in smartphones. Compared to optical and capacitive fingerprint imaging, ultrasonic sensors are less susceptible to humidity and contamination. As in any ultrasound technique, acoustic coupling plays an important role in ultrasonic fingerprint imaging. In this project, we will investigate the effect of acoustic coupling through simulation and experimental studies. We will study how coupling affects the detection of surface and inner-layer fingerprints, as well as underlying tissue structures. We will also investigate how a related technique, photoacoustic imaging, is affected by coupling. This investigation may lead to new algorithm and hardware developments for ultrasonic fingerprint imaging.
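A small worked example of why coupling matters, using the standard normal-incidence intensity transmission coefficient between two media; the material properties below are textbook approximations.

```python
# Why coupling matters: normal-incidence intensity transmission between media,
#   T = 4*Z1*Z2 / (Z1 + Z2)**2, with acoustic impedance Z = density * sound speed.
# Material values are textbook approximations.
def transmission(z1, z2):
    return 4 * z1 * z2 / (z1 + z2) ** 2

Z_AIR   = 1.2 * 343     # ~4.1e2  kg/(m^2 s)
Z_WATER = 1000 * 1480   # ~1.48e6 (typical coupling gel is similar)
Z_SKIN  = 1100 * 1540   # ~1.69e6
Z_GLASS = 2500 * 5600   # ~1.4e7  (sensor cover plate)

print(transmission(Z_GLASS, Z_AIR))    # ~1e-4: an air gap blocks nearly all energy
print(transmission(Z_GLASS, Z_WATER))  # ~0.35: gel/water coupling transmits far more
print(transmission(Z_WATER, Z_SKIN))   # ~0.995: well matched to tissue
```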

Biometrics Image Assurance: From Hardware to Software
Team: Daqing Hou (CU), Wei Yan (CU), Siwei Lyu (UB), Xiaoming Liu (MSU), David Doermann (UB), Srirangaraj Setlur (UB), Nalini Ratha (UB), Arun Ross (MSU)
The goal of this project is (a) to validate the integrity of biometric images, and (b) to determine if bona fide biometric images have been used in the production of AI-generated synthetic images. The purpose is to impart image provenance characteristics to biometric images. We will explore both hardware-based and software-based schemes. Task 1: We will develop a novel image watermarking solution that uniquely identifies the NAND flash memory from which a biometric image originated. Since the proposed hardware-based solution relies on the physical randomness and uniqueness introduced in a flash chip during the manufacturing process, the resulting watermark is expected to be highly unpredictable and unclonable. Task 2: We will develop an active technique to embed invisible traces in digital images/videos that can later be used to detect AI-synthesized multimedia created from models trained on such “marked” data (e.g., DeepFakes). These perturbations will be imperceptible to a human observer yet manifest themselves as “radioactive” traces that can be detected in the media generated by the resulting models. Task 3: We will assess the possibility of generating DeepFakes that can spoof the hardware-based watermarking solution(s) from Task 1. This will not only evaluate the efficacy of the hardware solutions but also help continuously improve them. Task 4: We will assess whether DeepFake face images can be detected using a fusion of DNN embeddings. If detection is successful, we will explore how to move it closer to the hardware in the biometric system pipeline.
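A toy illustration of the kind of imperceptible additive marking Task 2 contemplates, embedding a key-seeded pseudorandom pattern and detecting it by correlation; this is a placeholder sketch, not the project's actual "radioactive" trace design.

```python
# Toy additive watermark: embed a key-seeded pseudorandom pattern at low
# amplitude, then detect it by correlation. A placeholder illustration only;
# the project's actual trace-embedding scheme will differ.
import numpy as np

def embed(image, key, strength=2.0):
    rng = np.random.default_rng(key)
    pattern = rng.standard_normal(image.shape)
    return np.clip(image + strength * pattern, 0, 255), pattern

def detect(image, pattern):
    # Normalized correlation between the (centered) image and the key's pattern.
    flat, p = image.ravel() - image.mean(), pattern.ravel()
    return float(flat @ p / (np.linalg.norm(flat) * np.linalg.norm(p)))

img = np.random.default_rng(0).uniform(0, 255, (128, 128))
marked, pat = embed(img, key=42)
print(detect(marked, pat))  # noticeably above the near-zero unmarked score
print(detect(img, pat))     # ~0
```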

Explainable Mechanisms for Deep Neural Networks: A Biometrics Perspective
Team: Xiaoming Liu (MSU), Nalini Ratha (UB), Vishnu Boddeti (MSU), Arun Ross (MSU)
The goal of this project is to impart transparency to the Deep Neural Networks (DNNs) that are increasingly used in biometric systems. In particular, we will explore methods for enhancing “explainability”, which involves the semantic interpretation of a trained DNN as well as the decisions it renders. In the context of biometrics, this would entail, for example, the ability to explain matching and non-matching results when comparing two biometric samples. We will primarily focus on face recognition, although we will also determine the applicability of our techniques to other modalities, viz., fingerprints. In this regard, the following tasks will be undertaken. Task 1: Define explainability and explore it at various levels (decision, feature, image, network). Task 2: Develop a demo tool showcasing the outputs of explainable models. We will experiment with different matchers, datasets, explainability models, and visualization models. Task 3: Determine the relevance of these explainable models to other modalities; for example, explore their benefits in the ACE-V process used by fingerprint examiners.
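One widely used decision-level explanation baseline the project could build on is Grad-CAM, which highlights image regions driving a network's output; a minimal sketch assuming a PyTorch CNN, with the backbone and target layer chosen for illustration.

```python
# Sketch: Grad-CAM heatmap for a CNN-based model (one common explainability
# baseline; the choice of backbone and target layer here is illustrative).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()
feats, grads = {}, {}
layer = model.layer4  # last conv stage (assumed target layer)
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

def grad_cam(x, class_idx):
    score = model(x)[0, class_idx]
    model.zero_grad(); score.backward()
    w = grads['a'].mean(dim=(2, 3), keepdim=True)  # channel importance weights
    cam = F.relu((w * feats['a']).sum(dim=1))      # weighted activation map
    return F.interpolate(cam[None], x.shape[-2:], mode='bilinear')[0, 0]

heat = grad_cam(torch.randn(1, 3, 224, 224), class_idx=0)  # regions driving the decision
```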

Fully Homomorphic Encryption in Biometrics
Team: Nalini Ratha (UB), Arun Ross (MSU) and Vishnu Boddeti (MSU)
The goal of this project is to begin exploring applications of fully homomorphic encryption (FHE) in biometrics. FHE enables quantum-secure computation in the cloud, protecting the privacy of both biometric data and inference outcomes. This year, we propose three subtasks: (i) a privacy model for biometrics using FHE; (ii) an evaluation of existing FHE APIs for basic biometric tasks in terms of speed, security, and features; and (iii) multi-modal fusion of homomorphically encrypted biometric templates at the template, score, and decision levels. Subtask 1 will help us review the literature and provide a framework and threat model for biometric systems to fully exploit FHE. Several open-source FHE SDKs are available; we will study their advantages in terms of features, security levels, speed, and suitability for basic biometric tasks. Finally, we will explore multi-modal biometric score- and decision-level fusion to demonstrate the value of the privacy enhancements offered by FHE.
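A minimal sketch of one basic biometric task under FHE, encrypted template matching via an inner product, using the open-source TenSEAL SDK (CKKS scheme) as one example of the APIs Subtask 2 would evaluate; the encryption parameters and template dimension are illustrative.

```python
# Sketch: encrypted template matching via inner product under CKKS, using
# TenSEAL as one example open-source FHE SDK. Parameters are illustrative.
import numpy as np
import tenseal as ts

context = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()

probe = np.random.randn(512); probe /= np.linalg.norm(probe)      # unit-norm templates
gallery = np.random.randn(512); gallery /= np.linalg.norm(gallery)

enc_probe = ts.ckks_vector(context, probe.tolist())  # encrypted on the client
score_enc = enc_probe.dot(gallery.tolist())          # server computes on ciphertext
print(score_enc.decrypt()[0])                        # client decrypts the match score
print(float(probe @ gallery))                        # agrees with the plaintext score
```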

Towards the Creation of a Large Dataset of High-Quality Face Morphs
Team: Chen Liu (CU), Stephanie Schuckers (CU), Xin Li (WVU), Jeremy Dawson (WVU), Nasser Nasrabadi (WVU), David Doermann (UB), Srirangaraj Setlur (UB), Siwei Lyu (UB), Xiaoming Liu (MSU), Sebastien Marcel (IDIAP)
The rise of face morphing attacks poses a severe security risk to facial recognition systems. To develop effective morph attack detection (MAD) mechanisms, large, high-quality datasets of face morphs, along with the images used to create them, are required. Such resources, however, are lacking for both government agencies and academic researchers working in this field. To fill this void, the team will generate high-quality morphs using a two-stage approach. In the first stage, the team will use four primary approaches for initial morphed face generation: landmark-based, GAN-based (including StyleGAN), landmark+GAN-based, and wavelet-based morph fusion. In the second stage, we will elevate the quality of the raw morphs: the team will employ a manual approach to remove visible morph artifacts introduced during the generation process, and will also generate morphed images after rendering, scanning, and compressing the original morphed images. We anticipate that the dataset generated by this project will greatly benefit the community and serve as a milestone for developing defense mechanisms against face morphing attacks.
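A bare-bones sketch of the landmark-based branch (average the landmarks, piecewise-affine warp both faces, alpha-blend), assuming OpenCV, dlib with its standard 68-point predictor file, SciPy, and two same-size aligned images; a simplification of the project's first stage, not its actual pipeline.

```python
# Bare-bones landmark-based morph: average the 68 facial landmarks, piecewise-
# affine warp both faces to the averaged geometry, then alpha-blend.
import cv2
import dlib
import numpy as np
from scipy.spatial import Delaunay

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed present

def landmarks(img):
    pts = predictor(img, detector(img, 1)[0])
    return np.array([(p.x, p.y) for p in pts.parts()], dtype=np.float32)

def warp_triangle(src, dst, tri_src, tri_dst):
    m = cv2.getAffineTransform(tri_src, tri_dst)
    warped = cv2.warpAffine(src, m, (dst.shape[1], dst.shape[0]))
    mask = np.zeros(dst.shape[:2], np.uint8)
    cv2.fillConvexPoly(mask, tri_dst.astype(np.int32), 1)
    dst[mask == 1] = warped[mask == 1]

def morph(img_a, img_b, alpha=0.5):
    pa, pb = landmarks(img_a), landmarks(img_b)
    pm = (1 - alpha) * pa + alpha * pb                  # averaged landmark geometry
    warp_a, warp_b = np.zeros_like(img_a), np.zeros_like(img_b)
    for s in Delaunay(pm).simplices:                    # piecewise-affine warp
        warp_triangle(img_a, warp_a, pa[s], pm[s])
        warp_triangle(img_b, warp_b, pb[s], pm[s])
    return cv2.addWeighted(warp_a, 1 - alpha, warp_b, alpha, 0)
```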

Expansion of Contactless Mobile Phone Fingerprint Datasets
Jeremy Dawson (WVU), Masudul Imtiaz (CU)
The goal of this project is to collect a contactless fingerphoto dataset consisting of images captured using the newest capture devices and without image preprocessing (i.e., raw photos). Data will be collected from a diverse group of 500 individuals. Four-finger slap images will be captured with three different devices: a CrossMatch Guardian livescan system (Appendix F certified), a stand-alone contactless sensor (e.g., Morpho Wave), and a smartphone camera, specifically the Samsung Galaxy S20. Cross-device match performance will be evaluated using the contact-based livescan images as a gallery. For a subset of 50 subjects, we will create spoofs of at least 10 types, totaling 500 spoof instruments. The spoof types selected will be appropriate for non-contact fingerprint systems (e.g., four-finger photos, fingertip and finger-sleeve spoofs). Authentic and spoof match performance will be determined using the livescan images as a gallery.

Fall 2021
A Deep End-to-End Iris Matcher for Simultaneous Segmentation and Matching
Stephanie Schuckers (CU), Soumyabrata Dey (CU)
Iris recognition is a mature field in the biometric sector, with real-life applications in both government and industry and multiple commercial systems available. However, commercial systems are black boxes, which limits their usability for research. For sustainable research in iris recognition, access to intermediate outputs (segmented iris, mask, normalized images, feature templates) is vital for advancing the state of the art. The available traditional open-source algorithms do provide intermediate outputs; however, they were developed over a decade ago using outdated techniques and packages, which limits their applicability to an expanding horizon of applications. Deep neural network-based solutions have been adopted in most biometric domains, as the networks can be customized and trained to address specific challenges. Since 2016, researchers have been exploring Deep Neural Network (DNN) designs for iris recognition. However, most proposed designs are partial (segmentation, feature extraction, or classification only), and implementations remain proprietary. To the best of our knowledge from the literature, there is no available implementation of a complete end-to-end DNN pipeline for iris recognition. We propose to design a novel DNN architecture capable of jointly learning iris segmentation and matching in a single framework. The model would thus function as an end-to-end iris recognition pipeline that produces intermediate results, such as the segmented iris, corresponding mask, and model-extracted features, as well as the final matching decision. Performance comparisons would be made across deep-model methodologies (sequential versus simultaneous segmentation and classification) as well as against traditional methodologies, using open-source software for the traditional algorithms. As part of this evaluation, we would contribute to sustainable iris biometric research by updating an open-source implementation of Daugman's algorithm.
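A minimal sketch of a joint segmentation-plus-matching network with a shared encoder, assuming PyTorch; the layer sizes and two-head layout are an illustrative architecture, not the project's final design.

```python
# Sketch: joint iris segmentation + embedding network with a shared encoder.
# An illustrative architecture, not the project's final design.
import torch
import torch.nn as nn

class IrisNet(nn.Module):
    def __init__(self, embed_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(                        # shared features
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.seg_head = nn.Sequential(                       # iris/mask prediction
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 2, stride=2))          # 2 maps: iris, occlusion mask
        self.embed_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(64, embed_dim))  # match template

    def forward(self, x):
        f = self.encoder(x)
        return self.seg_head(f), nn.functional.normalize(self.embed_head(f), dim=1)

# Joint training would combine a segmentation cross-entropy with a matching
# loss over the embeddings, so both tasks are learned end to end.
net = IrisNet()
seg_logits, template = net(torch.randn(4, 1, 128, 128))
```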

Face De-Morphing: Finding the Identity of the Accomplice in a Morphed Image
Arun Ross (MSU)
In a morph attack, an adversary presents a digital face image that embodies two distinct identities, thereby allowing two individuals to use the same identification document. Recently, a number of techniques have been proposed to (a) generate high-quality morph images and (b) detect morphed images. In this work, we aim to develop techniques that take a morph image as input and produce the two component face images constituting the morph. We refer to this as de-morphing. We will consider two distinct scenarios. In the first, we assume that a reference face image corresponding to one of the identities is available. This is true, for example, in an access control application, where one of the adversaries will be present at the point of authentication. The goal here is to use the reference face image of the adversary, along with the digital morph image, to tease out the other face image (i.e., that of the accomplice) present in the morph. In the second scenario, no reference image is available; this is the case where, for example, a database of unlabeled morph face images is present. Here, the goal is to decompose the morph into its two component face images. Experiments will be conducted on existing datasets such as AMSL, MorGAN, EMorGAN, and MIPGAN.
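To make the first scenario concrete, a naive reference-based de-morphing sketch under the strong assumption that the morph was a simple pixel-level blend; real morphs (GAN-based, retouched) break this assumption, which is precisely why the project targets learned de-morphing.

```python
# Naive reference-based de-morphing, assuming a pixel-level blend:
#   M = alpha*A + (1-alpha)*R  =>  A = (M - (1-alpha)*R) / alpha.
# Only illustrates the problem setup; the project's learned approach differs.
import numpy as np

def demorph(morph, reference, alpha=0.5):
    """Recover the accomplice image A given morph M and reference R."""
    accomplice = (morph.astype(np.float64) - (1 - alpha) * reference) / alpha
    return np.clip(accomplice, 0, 255).astype(np.uint8)
```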

Face Recognition at Extreme Pitch and Yaw Angles
Nasser M. Nasrabadi (WVU), Moktari Mostofa (WVU)
With the development of deep learning, Face Recognition (FR) algorithms have achieved a high level of accuracy on frontal faces under well-constrained environments. However, in real-world surveillance scenarios, captured face images often contain extreme viewpoint variations that significantly degrade FR performance. In this project, we propose to investigate the impact of extreme pitch angles, alone or combined with yaw, on face frontalization and recognition performance. To this end, we will develop a pre-processing module that directly synthesizes a photorealistic frontal view from a face photo taken at an extreme pitch angle, or at combined extreme pitch and yaw angles. Using our pre-processing module, the synthesized frontal face can be used by any off-the-shelf commercial face matcher. To frontalize a high-pitch probe face, we propose a multi-task conditional Generative Adversarial Network (cGAN) frontalization algorithm that synthesizes a canonical frontal view from any extreme pitch and yaw angles. Our multi-task framework will simultaneously rotate the face to a frontal view and estimate the pitch and yaw angles of the input probe face. One advantage of our method is that it does not rely on any 3D knowledge of face geometry or shape; the frontalization is performed through purely data-driven deep learning.
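A sketch of how such a multi-task objective can be composed, assuming a PyTorch setup: an adversarial term for realism, an identity term in embedding space, and an auxiliary pitch/yaw regression term; the loss weights and the passed-in networks are illustrative placeholders.

```python
# Sketch of the multi-task cGAN objective: frontalize while an auxiliary head
# regresses the input's pitch/yaw. Weights and networks are placeholders.
import torch
import torch.nn as nn

adv_loss = nn.BCEWithLogitsLoss()   # realism of the synthesized frontal view
id_loss = nn.CosineEmbeddingLoss()  # identity preservation in embedding space
pose_loss = nn.MSELoss()            # auxiliary pitch/yaw regression

def generator_objective(D, face_embed, frontal_fake, frontal_gt,
                        pose_pred, pose_gt, w_adv=1.0, w_id=10.0, w_pose=1.0):
    real_target = torch.ones(frontal_fake.size(0), 1)
    ones = torch.ones(frontal_fake.size(0))
    return (w_adv * adv_loss(D(frontal_fake), real_target)
            + w_id * id_loss(face_embed(frontal_fake), face_embed(frontal_gt), ones)
            + w_pose * pose_loss(pose_pred, pose_gt))  # multi-task supervision
```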

LargE scale synthetically Generated fAce datasets (LEGAL)
Sebastien Marcel (IDIAP), Anil Jain (MSU)
Generative models using deep learning are heavily researched by both the machine learning and computer vision communities. Synthetic data generation in biometrics has mostly covered the generation of random faces using GANs or VAEs, with only slight control over some semantic factors. However, treating these synthetic samples as biometric traits (face identities) has been neglected by the scientific community. This proposal is focused on (i) the generation of synthetic biometric face datasets and (ii) the use of such datasets to reliably train and benchmark face recognition systems. This project will also establish a collaboration on the topic with Prof. A. Jain (MSU), possibly including a student exchange.

Quality Assessment Metric for Contactless Fingerprints
Jeremy Dawson (WVU), A. Joshi (WVU)
The conventional NFIQ2.0 fingerprint quality index was designed to estimate the quality of contact-based fingerprints and cannot capture the impact of the artifacts and issues associated with contactless fingerphotos. In this project, we will 1) investigate the applicability of NFIQ2.0 to fingerphotos and 2) design a quality index measure dedicated to contactless fingerprints. We propose to partition a fingerphoto into four regions (i.e., the central part, the fingertip, and the two peripheral sides of the fingerphoto), calculate the local NFIQ2.0 score and the number of detected minutiae in each region, and fuse these quality scores to obtain an overall quality metric.
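A minimal sketch of the proposed regional partition and score fusion; the region geometry, the nfiq2_score()/count_minutiae() helpers, and the weighted-sum rule are all hypothetical placeholders for the measures the project will actually design.

```python
# Sketch of regional quality fusion. The region geometry, the helper functions
# nfiq2_score()/count_minutiae(), and the fusion weights are hypothetical.
import numpy as np

def split_regions(fingerphoto):
    """Split an upright fingerphoto (H, W) into tip, center, left, right."""
    h, w = fingerphoto.shape[:2]
    return {"tip":    fingerphoto[: h // 3, w // 4 : 3 * w // 4],
            "center": fingerphoto[h // 3 :, w // 4 : 3 * w // 4],
            "left":   fingerphoto[:, : w // 4],
            "right":  fingerphoto[:, 3 * w // 4 :]}

def overall_quality(fingerphoto, nfiq2_score, count_minutiae, weights=None):
    regions = split_regions(fingerphoto)
    weights = weights or {k: 1 / len(regions) for k in regions}
    scores = {k: nfiq2_score(r) + 0.1 * count_minutiae(r)  # per-region quality
              for k, r in regions.items()}
    return sum(weights[k] * scores[k] for k in regions)    # fused overall metric
```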

Sequence Checking and Deduplication for Existing Fingerprint Databases
Jeremy Dawson (WVU), Nasser Nasrabadi (WVU)
Large-scale biometric databases often contain errors due to the human element involved in their creation. These errors can be compounded by the complexity of the data itself. The goal of this project is two-fold. We will modify and extend the deep-learning-enabled image classification tools developed for a previous CITeR project (Biometric Data Classification for Large-Scale Database Error Detection & Correction, 20F-02W-SP) to 1) perform sequence checking of fingerprints in a dataset to ensure that the finger type matches the label (e.g., an image labeled as a ‘right thumb’ is actually a thumb) and 2) ensure that fingerprint data records are not duplicated across multiple identities.
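A minimal sketch of the two checks, assuming a finger-position classifier and a fixed-length fingerprint embedding; both models and the record schema are placeholders standing in for the project's extended tools.

```python
# Sketch: label consistency check and cross-identity deduplication.
# classify_finger() and embed() are assumed placeholder models.
import numpy as np

def sequence_check(records, classify_finger):
    """Flag records whose predicted finger position disagrees with the label."""
    return [r for r in records
            if classify_finger(r["image"]) != r["label"]]  # e.g., 'right_thumb'

def find_duplicates(records, embed, threshold=0.9):
    """Flag highly similar prints filed under different identities."""
    embs = np.stack([embed(r["image"]) for r in records])
    embs /= np.linalg.norm(embs, axis=1, keepdims=True)
    sim = embs @ embs.T                                    # cosine similarity
    return [(records[i]["id"], records[j]["id"])
            for i in range(len(records)) for j in range(i + 1, len(records))
            if sim[i, j] > threshold and records[i]["id"] != records[j]["id"]]
```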

Towards Granular Vehicle Engine Abnormality Detection Using Sound Event Detection and Multimodal Feature Fusion
Srirangaraj Setlur (UB), Venu Govindaraju (UB)
We seek to improve engine abnormality detection through the use of sound event detection (SED) and sound separation (SS). SED is the task of recognizing sound events along with their respective temporal start and end times. Understanding when and where specific engine abnormalities occur within an engine recording allows us to create a more fine-grained “automobile fingerprint” and better assess vehicle value and condition.

Presentation Attack Detection for Noncontact Fingerprint Systems
Soumyabrata Dey (CU), Sandip Purnapatra (CU)
Touch-based fingerprint biometrics is one of the most popular biometric modalities and has applications in several fields. However, problems associated with touch-based techniques, such as the presence of latent fingerprints and hygiene concerns from many people touching the same surface, have motivated the community to look for contactless solutions. In the last few years, demand for contactless fingerprint systems has risen because any device with a camera can be turned into a fingerprint reader. Yet, before we can fully utilize the benefits of contactless methods, the biometric community needs to resolve a few concerns, such as the resiliency of these systems against presentation attacks. One major obstacle to developing a secure contactless fingerprint system is the limited publicly available data, which contains inadequate spoof and live samples. A lack of data and study of this problem leaves these systems vulnerable to serious security breaches. The dataset collected in the CITeR-funded special project #19F-08C does not contain enough live or spoof samples to train a complex deep network to improve algorithm performance. Keeping this in mind, we propose the following research goals for this project:
1. Expanded collection of the dataset, consisting of contactless fingerprint images and spoof images captured using six sensors.
2. A novel study of Presentation Attack Detection (PAD) inspired by deepfake image detection methods. For one such method, we will learn discriminative common spoof features from pairs of spoof and real fingerprint images using a Common Fake Feature Network (CFFN) with a Siamese architecture (see the sketch after this list). Following the CFFN, a binary classifier network will be used for fake vs. real fingerprint classification.
3. Understanding the importance of different low-level to high-level image features for PAD through statistical analysis.
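
A minimal sketch of the Siamese common-feature learning in goal 2, assuming PyTorch; the tiny backbone, contrastive loss margin, and pairing scheme are illustrative placeholders, with the CFFN details following the cited deepfake-detection line of work.

```python
# Sketch: Siamese common-feature learning for PAD (goal 2), with a contrastive
# loss pulling together pairs of the same type (spoof/spoof or live/live) and
# pushing apart mixed pairs. Backbone and margin are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CFFNBranch(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

branch = CFFNBranch()           # shared weights across both inputs = Siamese
classifier = nn.Linear(128, 2)  # downstream real-vs-spoof head after the CFFN

def contrastive_loss(f1, f2, same_type, margin=1.0):
    """same_type = 1 if both images are spoof (or both live), else 0."""
    d = F.pairwise_distance(f1, f2)
    return (same_type * d.pow(2)
            + (1 - same_type) * F.relu(margin - d).pow(2)).mean()

fa, fb = branch(torch.randn(8, 1, 96, 96)), branch(torch.randn(8, 1, 96, 96))
loss = contrastive_loss(fa, fb, torch.randint(0, 2, (8,)).float())
```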