Projects 2019

Abnormality Detection From Automobile Audio Using Spectrography
Nishant Sankaran, Deen Mohan, Nagashri Lakshminarayana, Srirangaraj Setlur
ACV Auctions possesses one of the largest and most comprehensive used vehicle condition report datasets in the world, which includes on-board diagnostic (OBDII) information, engine audio recordings, and vehicle condition inspector notes. Together, these form a unique, identifiable fingerprint of vehicle condition which may aid in identifying the make, model, and other identity characteristics of the vehicle. We propose to design a system that profiles audio signals to detect and identify vehicular abnormalities. Through this project we aim to investigate spectrogram-based methods in conjunction with CNNs for automated analysis of audio signals, both to identify automobile abnormalities within an audio signal and to classify the abnormality.
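
As a rough illustration of the intended pipeline (not the proposed design; the layer sizes, class count, and sampling rate below are illustrative assumptions), a recording can be converted to a log-magnitude spectrogram and fed to a small CNN:

    import numpy as np
    import torch
    import torch.nn as nn
    from scipy import signal

    def audio_to_spectrogram(waveform, fs=44100):
        """Log-magnitude spectrogram of an engine recording, usable as
        a single-channel image input to a CNN."""
        _, _, sxx = signal.spectrogram(waveform, fs=fs, nperseg=1024)
        return np.log(sxx + 1e-10)

    class AbnormalityCNN(nn.Module):
        """Minimal CNN mapping a spectrogram to abnormality-class logits."""
        def __init__(self, n_classes=5):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, n_classes))

        def forward(self, spec):  # spec: (batch, 1, n_freq, n_time)
            return self.net(spec)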

Adaptive Template Pooling for Set Based Face Recognition
Deen Mohan, Nishant Sankaran, Sergey Tulyakov, Srirangaraj Setlur
Unconstrained face recognition systems are tasked with matching templates generally composed of multiple images of a subject. This poses a new challenge of determining how to optimally fuse/pool the set of images (face features) into a single template representative of the identity. Previous works have investigated assessing the “quality” of an image based on its features or metadata and using that knowledge to discount or enhance its contribution to the overall template representation. However, the importance of an image’s features for the final representation depends on the template being matched against. For example, when matching a probe template containing a non-frontal face image, one would preferably select a non-frontal face image in the gallery template to perform the best comparison. With this intuition, we propose to investigate how adapting the aggregation weights based on the templates used for matching could result in optimal template representation construction for enhancing matching performance.
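
A minimal sketch of one way such adaptive weights could be computed, assuming cosine-similarity attention against the mean-pooled probe (the temperature and pooling choices are our assumptions, not the proposed method):

    import torch
    import torch.nn.functional as F

    def adaptive_pool(gallery_feats, probe_feats, temp=0.1):
        """Weight each gallery image's feature by its cosine similarity
        to the mean-pooled probe, so e.g. a non-frontal probe up-weights
        non-frontal gallery images.
        gallery_feats: (N, D) tensor; probe_feats: (M, D) tensor."""
        probe_rep = F.normalize(probe_feats.mean(dim=0), dim=0)    # (D,)
        sims = F.normalize(gallery_feats, dim=1) @ probe_rep       # (N,)
        weights = torch.softmax(sims / temp, dim=0)                # (N,)
        return (weights.unsqueeze(1) * gallery_feats).sum(dim=0)   # pooled template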

Adversarial Learning Based Approach Against Face Morphing Attacks – SP
Chen Liu and Stephanie Schuckers / Zander Blasingame
The rise of face morphing attacks poses a severe security risk for facial recognition systems. To face this challenge and defend against it, we propose a two-fold strategy in this project. First, we propose to develop a generative adversarial network (GAN) based approach that creates morphed faces by morphing in a latent space and then mapping the result to the image space. The model will feature an encoder and additional cycle loss constraints to ensure that images can be properly reconstructed from the latent space. We propose to create the morphed latent code by using an auxiliary GAN to model the error region of two identities in the latent space. Sampling from this distribution should yield latent codes that produce maximal confusion. The second part is the detection model. We propose to employ an Adversarial Autoencoder that uses the original face data to learn a compact encoding in the latent space. Then we focus on differentiating the morphed faces from the regular faces using the reconstruction feature scores. We propose to use a multilayer perceptron (MLP) on the feature scores to perform classification.
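
For reference, the plain latent-space interpolation baseline that the proposal improves upon (the auxiliary GAN sampling is not shown; encoder and generator are assumed pretrained):

    import torch

    def latent_morph(encoder, generator, img_a, img_b, alpha=0.5):
        """Baseline latent-space morph: encode both subjects, interpolate
        the latent codes, and decode back to image space. The proposal
        replaces the plain interpolation with codes sampled from an
        auxiliary GAN modeling the confusion region of the two IDs."""
        z_a, z_b = encoder(img_a), encoder(img_b)
        z_morph = alpha * z_a + (1 - alpha) * z_b
        return generator(z_morph)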

Age Invariant Face Recognition in Children
Stephanie Schuckers (CU), Xin Li (WVU), Chen Liu (CU) / Keivan Bahmani (CU)
Face recognition across ages in children has applications such as finding missing children, benefit distribution, and combating child trafficking. However, due to the lack of publicly available datasets and privacy concerns, there is a need for more study of aging in children. Age-Invariant Face Recognition (AIFR) is an active area of research, and various generative and discriminative models have been proposed to mitigate the effect of aging in frontal face images in adults [1-3]. However, recent work by Debayan et al. [4], as well as our preliminary results using open-source deep models, suggests that aging in children, even over short age gaps, can drastically diminish the accuracy of deep learning-based face recognition systems. Aging in children is a combination of change in facial features and physical growth [5]. As a result, we speculate that methods proposed for AIFR in adults require modification and fine-tuning in order to be applicable to FR in children.

An End-to-End Deep Super-Resolution Face Recognition System in the Wild
N. M. Nasrabadi, M. Mostofa and N. Ferdous (WVU)
State-of-the-art face hallucination (super-resolution) methods leverage deep convolutional neural networks (CNN) to learn a mapping between low resolution (LR) facial images and their corresponding high-resolution (HR) counterparts by exploring local appearance information [1]. However, most of these CNN-based super-resolution methods [2] are designed for controlled settings and cannot handle varying conditions (i.e., large pose, scale variations, and misalignments) in the wild. Furthermore, state-of-the-art super-resolved images typically suffer from blurriness, where the face structure has been distorted and facial details are not fully recovered from their corresponding low-resolution images. In addition, directly using the super-resolved facial images in a face recognition (FR) module does not necessarily result in high recognition performance, despite better visual quality, since the super-resolution and face recognition modules are not jointly optimized for each other. In this project, we propose a deep Generative Adversarial Network (GAN) super-resolution module that explicitly incorporates structural information (edges, sharpness, perceptual features) about faces into the super-resolution reconstruction process, while jointly learning the super-resolution and face recognition modules. To incorporate the face structural information, we propose to design a multi-task GAN-based generator that reconstructs a super-resolved face image preserving the identity, facial attributes, and structural (perceptual) details of the face. Furthermore, we jointly train the GAN-based super-resolution module cascaded with the CNN-based FR module to obtain an end-to-end deep super-resolved FR algorithm. We will base our quantifiable assessment of our proposed algorithm on the overall face recognition accuracy and the pixel-wise Euclidean distance between the super-resolved LR and ground truth HR faces.
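
A hedged sketch of how the multi-task generator objective might combine the stated terms (the weights are illustrative, and id_net is a hypothetical face-embedding network exposing an intermediate features() call):

    import torch
    import torch.nn.functional as F

    def generator_loss(sr, hr, disc, id_net, w=(1.0, 0.1, 0.01, 1e-3)):
        """Multi-task SR objective: pixel fidelity, perceptual
        (feature-space) similarity, identity preservation, and an
        adversarial term. Weights w are illustrative assumptions."""
        pixel = F.mse_loss(sr, hr)                            # pixel-wise distance
        percept = F.mse_loss(id_net.features(sr), id_net.features(hr))
        identity = 1 - F.cosine_similarity(id_net(sr), id_net(hr)).mean()
        adv = -disc(sr).mean()                                # fool the critic
        return w[0]*pixel + w[1]*percept + w[2]*identity + w[3]*adv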

Biometric Aging in Children Phase III
Stephanie Schuckers, Priyanka Das, Laura Holsopple (CU)
Study of biometric recognition in children has applications in areas such as immigration, refugee efforts, and distribution of benefits. We have completed 6 data collections (the 7th collection is scheduled for May 2019) towards creating a longitudinal dataset from approximately 239 subjects aged 4 to 14 years, over 2.5 years at an interval of 6 months, with the Potsdam, NY Elementary and Middle Schools. Six modalities are being collected: finger, face, iris, foot, ear, and voice. We propose to continue the data collection for 2 additional years to expand the longitudinal dataset of child biometrics across the 6 modalities. Additionally, we propose to expand the collection to include multi-modal data from infants aged between 2 months and 3 years in collaboration with the SUNY Potsdam Childcare Center. A pilot study has been completed and an informational booth set up to provide information to parents. The datasets will be further processed and analyzed towards multiple goals: the earliest age at which each modality is viable (within-age study), and the variability of each modality with age. Additionally, this shareable dataset will enable research into models that account for age variations and into age measurement metrics. Lastly, we propose to investigate the development of a mobile collection cart, which would help improve the logistics of collection from children aged 13 and up; as children get older, it becomes more difficult to pull them from their day.

Biometric Recognition Technology Under Scrutiny: Public Outreach on Technology Fundamentals – SP
S. Schuckers, L. Holsopple, D Hou, M. Banavar
Biometric recognition technology has seen tremendous growth in recent years. However, with that broad reach has come increased scrutiny of the technology and public discussion around its uses, as well as the creation of new regulations and policies providing a framework and limits. Underlying this discussion is confusion around how the technology works and how it differs across application use cases. Misunderstandings around the technology may lead to poor regulations or, in certain cases, bans on the technology. Furthermore, this public discussion is happening as part of a broader conversation around privacy as it relates to the collection of data about individuals (location, search queries, web browsing, shopping, social media, etc.). The purpose of this project is to develop two to three short educational videos on the fundamentals of biometric technology. The videos will be broadly disseminated, but the intended audience is policy makers, media, and technology decision makers. These videos will NOT provide recommendations for specific policies and laws, but rather provide foundational material on which good policies can be made.

Deep Cross-Spectral Iris Matching: High-Resolution Visible Iris Against Low-Resolution NIR Iris
J. M. Dawson and N. M. Nasrabadi (WVU)
Current iris recognition systems are based on iris images captured in the near infrared (NIR) band (i.e., 700–900 nm). This is because NIR iris imaging has the ability to reveal features of the iris not detected under visible illumination. However, in recent years, there has been interest in using opportunistic iris images extracted from visible wavelength (VW; 400–700 nm) face images captured using high-resolution cameras. VW cameras provide higher resolution images than NIR at lower cost. Moreover, applications like surveillance at-a-distance and the use of mobile devices for biometrics are often based on visible-spectrum cameras. In this proposal, we address the problem of cross-spectral matching of a high-resolution VW iris probe against a gallery of low-resolution NIR iris enrollment images. This proposal addresses the following two major challenges: a) how to match VW iris images against NIR iris images; b) how to exploit the high-resolution characteristic of the VW iris probe to achieve better matching scores. To solve these challenges, we propose a cross-spectral synthesizer based on a conditional Generative Adversarial Network (c-GAN), which has previously been used for other cross-modality image synthesis tasks (e.g., sketch-face or IR-visible face synthesis). The c-GAN synthesizes the equivalent super-resolved high-resolution VW iris image for each low-resolution NIR image in the gallery. After training, our proposed c-GAN will replace the low-resolution NIR iris images in the gallery with their corresponding cross-spectrally mapped and super-resolved high-resolution VW iris images. During testing, the iris-code from the high-resolution VW probe is matched against the iris-codes of the super-resolved and spectrally synthesized VW image gallery. We will base our quantifiable assessment on the following criteria: 1) the overall SNR and similarity measure between our synthesized high-resolution VW images from NIR and their corresponding ground truth high-resolution VW images; 2) the overall matching accuracy between the iris-codes of the VW probes and the synthesized images in the gallery.
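
For the testing stage, iris-codes are typically compared by fractional Hamming distance; a minimal sketch (rotation compensation omitted, boolean code and mask arrays assumed):

    import numpy as np

    def hamming_distance(code_a, code_b, mask_a, mask_b):
        """Fractional Hamming distance between two binary iris-codes,
        counted only over bits valid in both masks."""
        valid = mask_a & mask_b
        return np.count_nonzero((code_a ^ code_b) & valid) / np.count_nonzero(valid)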

Deep Fingerprint Matching from Contactless to Contact Fingerprints for Increased Interoperability
J. Dawson, N.M. Nasrabadi, A. Dabouei (WVU)
Traditional livescan fingerprint images are acquired by pressing or rolling a finger against a surface, or platen, which can cause varying degrees of elastic deformation depending on the collection type (slap vs. roll) or the pressure exerted by the individual providing the print. Contactless fingerprinting devices have recently been introduced to eliminate this and other drawbacks of livescan technologies; however, the lack of elastic distortion and presence of photometric distortion in many contactless fingerprints introduces challenges to the interoperability between contactless and contact-based fingerprints. Most approaches to improving fingerprint interoperability involve (a) creating a 2D equivalent image from a binarized fingerprint photo, structured light image ([1], [2]), or 3D point cloud for contactless images, or (b) removing or reducing the effects of elastic distortion in contact images ([3], [4]). The goal of this project is to develop a Deep Neural Network (DNN) algorithm to create a canonical representation of the fingerprint image that is free from both the elastic distortion present in contact fingerprint images and the photometric distortion present in contactless images. Our algorithm is based on a novel Deep Convolutional Generative Adversarial Network (DCGAN) that is trained to generate the 2D representation of a contactless print as well as a non-distorted representation of a contact fingerprint. This canonical representation will then be used in matching for increased interoperability, no matter which type of print is used for probe and gallery. There are three major technical contributions in this project: (1) a DCGAN-based method is designed to capture the non-linear mapping from a contactless fingerprint to its 2D rolled equivalent while preserving Euclidean distances, (2) the deformation in contact prints is rectified using a deep auto-encoder, and (3) matching is performed in a common image space using the canonical representation. Items (1)-(3) will be done simultaneously without any parametric model assumption.
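
One plausible formalization of the Euclidean-distance-preservation term in contribution (1), under our own assumptions about batching and scale normalization:

    import torch
    import torch.nn.functional as F

    def distance_preserving_loss(x, z):
        """Penalize change in (scale-normalized) pairwise Euclidean
        distances between a batch of input prints x and their
        canonical representations z."""
        dx = torch.cdist(x.flatten(1), x.flatten(1))
        dz = torch.cdist(z.flatten(1), z.flatten(1))
        return F.mse_loss(dz / dz.mean(), dx / dx.mean())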

Deep Profile-to-Frontal Face Verification in the Wild
J. D. Dawson and N. M. Nasrabadi (WVU)
Matching profile-to-frontal face images is a challenging problem in video surveillance applications. The main problem is that the features available in frontal and profile views vary significantly from each other and are, therefore, difficult to match. The performance of current feature-based face recognition (FR) algorithms [1] or subspace-based algorithms [2] degrades significantly when comparing frontal to profile faces under unconstrained conditions. As a result, when a profile face appears in a surveillance camera, the recognition performance of current commercial FR systems will be degraded. The main goal of this project is to study face verification in the presence of such extreme pose variation (frontal vs. profile) in the wild. We propose a novel face verification approach based on a coupled deep neural network (DNN), which assumes that the two poses (profile and frontal faces of a person) can be related to each other by a latent feature embedding. The proposed coupled DNN consists of two generative adversarial networks (GANs), each dedicated to a different pose (profile and frontal). However, they are coupled together to produce a common (shared) latent feature vector that represents the hidden relationship between the two poses of the same subject. Our proposed coupled GAN (CpGAN) face verifier is trained on several publicly available datasets using the well-known contrastive loss function. We will base our quantifiable assessment on the following criteria: 1) the overall recognition accuracy of our proposed profile-to-frontal CpGAN face verifier, 2) the effect of extreme pose angle differences between the profile and frontal views on recognition accuracy, 3) an evaluation of profile-to-frontal face recognition accuracy under different quality and illumination variations, and 4) a comparison of the face recognition accuracy of profile-to-frontal to that of frontal-to-frontal matching on four different databases.
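
A minimal sketch of the contrastive loss over the shared latent embedding, assuming same_id is a 0/1 tensor marking genuine pairs:

    import torch.nn.functional as F

    def contrastive_loss(z_profile, z_frontal, same_id, margin=1.0):
        """Pull profile/frontal embeddings of the same subject together;
        push different-subject pairs at least `margin` apart."""
        d = F.pairwise_distance(z_profile, z_frontal)
        pos = same_id * d.pow(2)
        neg = (1 - same_id) * F.relu(margin - d).pow(2)
        return (pos + neg).mean()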

Detecting Morphed Faces Using a Deep Siamese Network – SP
Nasser M. Nasrabadi and Jeremy Dawson (WVU)
Freely accessible morphing techniques allow any attacker to combine two different images from two subjects into a single (composite) image that resembles both face images used to generate the morph. In this project, we investigate the face morphing detection concept. We will review the state-of-the-art in morphed face detection algorithms for face images stored in the electronic machine-readable travel document (eMRTD) format (413×531 resolution and ICAO passport photo compliant) and propose to perform mesoscopic-level analysis using a novel morph detection technique based on a Siamese deep learning network. Building on our previous research experience on Siamese-network-based identical twin identification [7] and adversarial example generation and detection [8]-[10], we propose to use a Siamese deep network to find a low-dimensional embedding feature vector that can automatically capture the discrepancy between morphed faces and the original combined faces. We use a contrastive or a triplet loss to train the Siamese network to learn a feature vector that discriminates between morphed faces and their original face images. We train the Siamese network with genuine pairs (face images of the same ID) and imposter pairs (a morphed photo and its corresponding non-morphed photo) as input. We will explore the use of original images as well as their hand-crafted steganalysis feature descriptors (e.g., co-occurrence matrices on residual error, frequency analysis, local texture descriptors, and image quality scores). We will evaluate the distribution of the morphed and non-morphed face images in the Siamese embedding feature space in order to design our classifier to make the final morphed/non-morphed decision. In summary, our detector consists of three modules: an ICAO-compliance pre-processing module, a Siamese-based feature extraction module, and a binary classification module.
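
A hedged sketch of the triplet variant for this setting, where the negative is the embedding of a morph built from the anchor identity (the margin value is an assumption):

    import torch.nn.functional as F

    def morph_triplet_loss(anchor, positive, negative, margin=0.2):
        """anchor/positive: embeddings of two bona fide photos of one ID;
        negative: embedding of a morph built from that ID and another."""
        d_pos = F.pairwise_distance(anchor, positive)
        d_neg = F.pairwise_distance(anchor, negative)
        return F.relu(d_pos - d_neg + margin).mean()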

Estimation of Age of Children Based on Fingerprint Images
Chenchen Liu, Jeremy Dawson, Stephanie Schuckers
Fingerprints have been used for biometric recognition in children for applications such as border crossings, health benefits, food distribution, etc. In some cases, there is a need to estimate the age of the child when a birthdate may not be available or trusted. This proposal is focused on the development of an algorithm for estimating the age of a child. The algorithm will be based on features such as ridge-to-ridge frequency, as well as deep learning approaches. Data from a related CITeR project includes over 14,000 fingerprint images from ~200 children, with the same children collected over 2.5 years (6 months between collections).
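
A minimal sketch of one way ridge-to-ridge frequency could be estimated, via the dominant peak of a patch's 2D Fourier spectrum (the low-frequency cutoff and square-patch assumption are ours, not the proposed algorithm):

    import numpy as np

    def ridge_frequency(patch):
        """Estimate dominant ridge frequency (cycles/pixel) in a square
        grayscale fingerprint patch from its 2D Fourier spectrum peak."""
        patch = patch - patch.mean()                    # remove DC component
        spec = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
        h, w = spec.shape
        cy, cx = h // 2, w // 2
        yy, xx = np.mgrid[0:h, 0:w]
        radius = np.hypot(yy - cy, xx - cx)
        spec[radius < 3] = 0                            # suppress low frequencies
        peak = np.unravel_index(np.argmax(spec), spec.shape)
        return np.hypot(peak[0] - cy, peak[1] - cx) / h  # cycles per pixel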

Evaluation of Match Performance of Livescan vs. Contactless Mobile Phone Fingerprints
J.M. Dawson (WVU)
Contactless fingerprint technologies are seeing increased usage in operational scenarios. In addition to stand-alone, high-throughput contactless devices, smartphone apps that enable uniform capture of high-quality finger photos have been developed. Due to fundamental differences in the capture process, artifacts such as elastic skin distortion and photometric image distortion can lead to interoperability issues between livescan and contactless smartphone images. Interoperability studies performed by NIST and others have not included a significant number of smartphone images. While previous efforts have addressed interoperability challenges, no recent study has assessed the baseline performance of matching contactless smartphone fingerprint probe images against a gallery of livescan images. The goal of this project is to determine the match performance differences between fingerprints captured with traditional Appendix F certified livescan devices and contactless smartphone camera-based fingerprint capture devices. Data will be collected from a diverse group of 200 individuals. Four-finger slap images will be captured with three different devices: a CrossMatch Guardian livescan system, a stand-alone contactless sensor (e.g., Morpho Wave), and a smartphone camera. Cross-device match performance will be determined using the livescan images as a gallery. Performance will be evaluated on the whole cohort of individuals in the dataset, as well as within various strata of the data (age, gender, ethnicity).
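
Match performance comparisons of this kind typically report the equal error rate; a minimal sketch of its computation from genuine and impostor score arrays:

    import numpy as np

    def equal_error_rate(genuine, impostor):
        """EER from genuine and impostor similarity scores
        (higher score = stronger match)."""
        thresholds = np.sort(np.concatenate([genuine, impostor]))
        frr = np.array([(genuine < t).mean() for t in thresholds])
        far = np.array([(impostor >= t).mean() for t in thresholds])
        i = np.argmin(np.abs(far - frr))
        return (far[i] + frr[i]) / 2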

Evaluation of the Equitability of Speaker Recognition Algorithms – SP
Jeremy Dawson & Nasser Nasrabadi (WVU)
Speaker recognition is receiving increasing interest for integration in multi-biometric identification systems. Proving that two separate voice samples belong to the same individual currently requires heavy manual effort, time, and specialized expertise. It is well known that several factors impact the recognition rate of speaker recognition systems, including sample rate, channel noise level, channel noise type, and length of utterance [1], [2], and significant research has been directed at addressing these issues. However, unlike other biometric modalities, little, if any, work has been done to evaluate the equitability of speaker recognition algorithms across the human population. The goal of this work is to evaluate speaker recognition tools to determine if they perform differently for different demographic groups, including age, gender, and ethnicity. It is known that Google's speech recognition performs worse for women and non-Caucasian people. Therefore, we propose to evaluate algorithms based on speaker differences in pitch frequencies (females have a higher fundamental frequency (F0) than males). We will also artificially vary the pitch frequency of male speakers to investigate the effect on performance. Spectral analysis will also be performed to identify specific speech features (MFCCs, prosodic features, shimmer, and jitter) that affect the performance of speaker recognition algorithms across different ethnic groups. We will also determine the age group in children at which speaker recognition algorithms start to fail.
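
A sketch of how the pitch-varied probes and features named above could be produced with librosa (the shift range and F0 search band are illustrative assumptions):

    import librosa

    def pitch_probe(path, semitone_shifts=(-4, -2, 0, 2, 4)):
        """Generate pitch-shifted variants of a speech sample and extract
        F0 and MFCCs for probing a speaker-recognition system's
        sensitivity to fundamental frequency."""
        y, sr = librosa.load(path, sr=16000)
        f0 = librosa.yin(y, fmin=50, fmax=400, sr=sr)   # frame-level F0 track
        variants = {}
        for n in semitone_shifts:
            y_shift = librosa.effects.pitch_shift(y, sr=sr, n_steps=n)
            mfcc = librosa.feature.mfcc(y=y_shift, sr=sr, n_mfcc=13)
            variants[n] = (y_shift, mfcc)
        return f0, variants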

Face in Motion for Disambiguating Doppelgangers and Twins – SP
Arun Ross, Michigan State University
We consider the problem of automatically distinguishing between look‐alike faces, i.e., faces that
appear to be very similar, such as those pertaining to twins or doppelgangers. In some scenarios, e.g., surveillance, an observed face image may be associated with the incorrect identity due to extreme similarities in appearance. In such cases, it will be necessary to leverage additional information in order to determine the correct identity of the individual. In this project, we will develop techniques that explicitly use facial motion (i.e., movement) to perform face recognition in such confounding situations. While face recognition from video is an extensively studied topic, the use of facial motion to especially disambiguate between twins and doppelgangers is an understudied problem.

Face Quality Index Assessment for Sensor and Subject-Based Distortions – SP
Nasser M. Nasrabadi and Jeremy Dawson (WVU)
The performance of face recognition systems is tightly coupled with the quality of the captured face images, which can vary significantly with imaging sensor, compression technique, resolution, video frame, and image acquisition conditions. In this project, we propose to develop a novel Image Quality Assessment (IQA) algorithm for face recognition tasks. The output of our IQA algorithm will be a scalar face quality score related to the matching score of a COTS face matcher (or the average scores of multiple COTS face matchers). State-of-the-art general-purpose IQA algorithms usually rely on a set of elaborate hand-crafted features that capture the Natural Scene Statistics (NSS) properties of the face, or on data-driven deep features learned by a Deep Neural Network (DNN) architecture. In this project, we will train a Convolutional Neural Network (CNN) (extracting holistic and patch-wise features) on a large database of distorted face training images. To create a labelled distorted training dataset, each face photo is labelled with the matching score of a COTS face verifier (or the average matching scores of multiple verifiers) whose input is the distorted face photo matched against its corresponding canonical high-quality portrait face (ISO/IEC 19794-5 compliant) in the gallery. This way, the quality score predicted by the DNN is directly optimized to represent the COTS's output matching score. The proposed algorithm is similar in concept to NIST NFIQ, which is designed to be a predictor of fingerprint true match accuracy. Therefore, during enrollment, if the predicted face photo quality is too low, the system will reject the photo and prompt or initiate collection of a new one. Two major categories of distortion are considered: (1) sensor-related, e.g., out of focus, resolution, compression, illumination amount, non-uniformity, sensor noise; and (2) subject-related, e.g., eye openness, scars, tattoos, permanent jewelry, eyeglasses, occlusion (hair covering face), or even pose.
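
A hedged sketch of the training setup: a small CNN regresses the COTS matcher score assigned to a distorted photo (the architecture and loss are illustrative, not the proposed design):

    import torch
    import torch.nn as nn

    class FaceQualityNet(nn.Module):
        """Minimal quality predictor trained to regress the COTS match
        score of a distorted photo vs. its canonical gallery portrait."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1))
            self.regressor = nn.Linear(32, 1)

        def forward(self, img):
            return self.regressor(self.features(img).flatten(1))

    # Training target: quality label = COTS score of (distorted, canonical)
    # pair, e.g. loss = nn.functional.mse_loss(model(distorted), cots_score)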

Fairness of Face Recognition against Social Covariates
Tiago de Freitas Pereira and Sébastien Marcel (Idiap Research Institute – CH)
Modern face recognition (FR) systems are now based on Deep Convolutional Neural Networks
(DCNNs) and present recognition scores skewed across covariates of a test population (i.e., biased with respect to age, gender, and ethnicity). There is a pressing need for fair FR systems and, therefore, for techniques that reduce biases in FR. This proposal is focused on the investigation of regularization mechanisms that mitigate biases by controlling the parameters of an arbitrary DCNN depending on specific cohorts. The project will benefit from public open datasets containing the covariates of interest.
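
The proposal does not specify the mechanism; one plausible instantiation is a gradient-reversal (domain-adversarial) regularizer, sketched below with a hypothetical cohort_head classifier over the face embedding:

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; negated gradient in the backward."""
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)
        @staticmethod
        def backward(ctx, grad):
            return -ctx.lam * grad, None

    def debias_loss(embedding, cohort_labels, cohort_head, lam=0.1):
        """An auxiliary head tries to predict the demographic cohort from
        the embedding; reversing its gradient pushes the backbone toward
        cohort-invariant features."""
        rev = GradReverse.apply(embedding, lam)
        return nn.functional.cross_entropy(cohort_head(rev), cohort_labels)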

Faster and more accurate mobile touch dynamics via feature selection and sequence learning
Daqing Hou, Mahesh Banavar, Stephanie Schuckers
Our group has recently made major improvements in free-text keystroke dynamics, achieving EERs of 6.6% and 3.6% with test samples of 500 and 1000 keystrokes, respectively [2], which outperform the prior best EER of 7.6% with test samples of 1000 keystrokes [1]. However, these results were achieved for physical keyboards only. To apply this modality on mobile phones, it is imperative to reduce the test sample sizes to 100–200 keystrokes while maintaining similar EERs. To this end, we propose to maximize the utilization of the information contained in the test samples by conducting better feature selection and learning stronger models. That is, a CNN will be used for feature selection, and an RNN will be used to learn sequential patterns in the data.
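
A minimal sketch of the CNN-for-features plus RNN-for-sequence idea, with an assumed input of per-keystroke timing features (all dimensions are illustrative):

    import torch
    import torch.nn as nn

    class KeystrokeNet(nn.Module):
        """1-D CNN extracts local timing features from a keystroke
        sequence; a GRU models the longer-range sequential pattern;
        a linear head scores genuine-vs-impostor. Input: (batch, T, F)
        where F holds per-keystroke timings (hold time, latency, ...)."""
        def __init__(self, feat_dim=4, hidden=64):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv1d(feat_dim, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU())
            self.rnn = nn.GRU(32, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                 # x: (B, T, F)
            z = self.cnn(x.transpose(1, 2))   # (B, 32, T)
            out, _ = self.rnn(z.transpose(1, 2))
            return self.head(out[:, -1])      # one score per sequence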

Federated Biometrics: Towards a Trustworthy Solution to Data Privacy
Guodong Guo (WVU)
The issue of data privacy has become crucial, especially for biometric data such as human face photos and other personally identifiable information (PII). The European privacy law, the General Data Protection Regulation (GDPR), has recently been enforced to restrict the use of consumer data. The Facebook data privacy scandal and others have shown the urgency of protecting data privacy. If the data privacy issue cannot be addressed to the public's satisfaction, we may eventually have trouble using public data, which is vital for our research and development. So, all biometric researchers and developers should take data privacy seriously. Further, many government agencies and CITeR affiliates, like the FBI, DHS, and DoD, have large sets of biometric data of their own, which cannot be shared with other parties because of privacy concerns. So, it is important and useful to address the data privacy issue in biometrics. One potentially complete solution to protect data privacy in biometrics is to limit data sharing or prohibit moving data out of local storage, while the data can still be used for training or testing. Thus we propose a novel idea, called Federated Biometrics, which does not allow the data to leave local storage. It is motivated by the recent development of Federated Learning (FL) [1], which uses decentralized data to learn models. It is also based on the principle of “bringing the code to the data, instead of the data to the code,” which may provide a trustworthy solution to the fundamental problem of privacy in developing practical biometric systems, for both training and testing.
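
A minimal sketch of one federated averaging (FedAvg) round in the spirit of [1]; only model parameters travel, never raw biometric data (floating-point parameter tensors assumed):

    import copy
    import torch

    def federated_average(global_model, client_states, client_sizes):
        """One FedAvg round: average each client's locally trained
        parameters, weighted by its local dataset size, then load the
        result into the shared global model."""
        total = sum(client_sizes)
        avg = copy.deepcopy(client_states[0])
        for key in avg:
            avg[key] = sum(s[key] * (n / total)
                           for s, n in zip(client_states, client_sizes))
        global_model.load_state_dict(avg)
        return global_model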

FMONET: FAce MOrphing with adversarial NETworks and Challenge – SP
David Doermann (UB AI Institute), Ranga Setlur (CUBS)
The FMONET and Challenge project will develop a next generation of face morphing capabilities, focused on creating higher quality morphs that also thwart existing detection techniques. The project will create data using a variety of generative techniques and will explore extending morphing to involve more than two individuals. The dataset will be released and a challenge organized for detecting image morphs. Additional data that varies in morph quality will be generated and can be used to test existing face verification systems.

How to Assess Face Image Quality with Deep Learning?
Guodong Guo (WVU)
The issue of face image quality variation has been a big challenge in face recognition (FR). Recently, deep learning (DL) [1, 2] techniques have been used in a wide variety of face recognition problems, since recognition accuracies improve greatly with DL over traditional FR methods. Deep learning methods have also been applied to many other computer vision problems with good performance. Thus a question is raised: Can we use deep learning techniques for face image quality assessment? And how? Deep learning approaches are mainly supervised learning methods, requiring labels for the training examples. However, there is no common face database with quality score labels for the face images, even though many face images are available (typically with identity labels only). Further, deep network architectures require a large number of training examples, otherwise overfitting or underfitting will occur. Thus the lack of a large face dataset with quality labels will limit the practical use of deep learning for face image quality assessment. Some works have used deep learning [3, 4] for quality measurement or key frame extraction in videos, but with limited-size datasets for training, making their suitability for general face image quality assessment questionable. Finally, face image quality assessment usually depends on the face recognition method. If deep models trained for FR (using identity labels) are used for feature extraction in face image quality assessment, issues arise: the models have already been trained on face images of varying quality (with the same identities), making it difficult to separate face images into different quality levels. This can also conflict with traditional FR methods, which are often consistent with human perception of face image quality.

Human Age Estimation using Genomic Data
Don Adjeroh, Jeremy Dawson, Gianfranco Doretto, Nasser Nasrabadi (WVU)
Age estimation is an important problem in various daily activities, from health assessment, to forensic science, to security and identity profiling. The process of aging is complex and affects all biological systems, making the problem of age estimation challenging. Recent studies have shown the potential to estimate human chronological age from genomic data. However, this is still an emerging area, requiring novel methods for improved accuracy. At the same time, attention has generally been limited to a few types of genomic attributes, ignoring recent efforts that have generated massive amounts of genomic data. In this work, we will expand on the related research in this area. We will first perform a detailed analysis to identify the known genomic regions that have been shown to be associated with human aging. We will expand on these to identify novel aging-related genomic markers using new computational approaches. Using the identified markers, we will investigate possible deep learning algorithms for improved estimation of human age from genomic datasets.
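
For context, a classical baseline in this area fits a penalized linear model over aging-associated markers, in the spirit of epigenetic-clock models; the feature matrix below is a random placeholder, not real cohort data:

    import numpy as np
    from sklearn.linear_model import ElasticNetCV

    # X: (n_subjects, n_markers) marker values (e.g., CpG methylation
    # levels) restricted to aging-associated regions; y: chronological ages.
    X = np.random.rand(200, 500)          # placeholder feature matrix
    y = np.random.uniform(10, 80, 200)    # placeholder age labels

    model = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X, y)  # sparse linear "clock"
    predicted_ages = model.predict(X[:5])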

Identical Twins as a Benchmark for Human Facial Recognition – SP
Jeremy Dawson & Nasser Nasrabadi (WVU)
Identical twin facial images pose a significant challenge to modern facial recognition algorithms. However, since the rate of occurrence of identical twins among the human population is only ~3.3%, they do not pose a significant operational challenge in many facial recognition scenarios. Of greater concern in these applications are instances of ‘look-alikes,’ or situations in which one human face is highly similar to another despite the individuals sharing no family relationship. These cases occur far more often than identical twins among the general population, especially when one considers that an automated computer system may ‘see’ similarity in human features in a manner different from human perception. The goal of the work presented here is to examine the distributions of genuine and imposter match scores of twin facial images to establish a ‘worst case’ performance baseline for the same distributions from non-twin images that may contain look-alikes. For this work, we propose to evaluate twin facial matching performance for three cases: all-to-all matching, demographic-specific matching, and longitudinal multi-year matching.
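
Comparing the separability of such score distributions commonly uses the decidability index d'; a minimal sketch:

    import numpy as np

    def d_prime(genuine, impostor):
        """Decidability index d' between genuine and impostor score
        arrays; larger values mean better-separated distributions."""
        return abs(genuine.mean() - impostor.mean()) / np.sqrt(
            0.5 * (genuine.var() + impostor.var()))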

Investigating A Hardness-Sensitive Tactile Fingerprint Scanner for Fake-Finger Detection
Wenyao Xu, Ruogang Zhao, Zhanpeng Jin, Srirangaraj (Ranga) Setlur (UB)
Based on the premise of uniqueness and permanence, fingerprint recognition has seen tremendous application, e.g., financial transactions, international border security, and smartphone access. However, the vulnerability of these systems to presentation attacks (e.g., spoofing using a fake finger) still poses a significant threat to user privacy. As the manufacturing industry rapidly advances, it is not unimaginable that existing vision-based solutions will become inadequate due to machines' capability to mimic fine-level characteristics of a real finger in a fake model. This project pairs computer and biomedical material scientists to replicate the human ability to estimate skin hardness, integrated with a fingerprint scanner, to mitigate attacks using fake fingers. New technical efforts in the project include: 1) developing iFeel, a hardness-sensitive tactile fingerprint sensor that utilizes the contact geometry of the fingerprint together with natural skin perception to facilitate anti-spoofing against intricate presentation attacks; and 2) evaluating system security against fake fingers made from diverse fabrication materials (e.g., Ecoflex, WoodGlue, Gelatin, etc.) and 2D/3D printing attack techniques in uncontrolled environments. If successful, our study will disclose a new dimension of fingerprint analysis and may provide valuable insights into harnessing skin hardness as a new factor for authentication.

Leveraging Biometrics and Smart Contracts to Control Access to Internet of Things
Yaoqing Liu/Stephanie Schuckers
Internet of Things (IoT) security breaches have been dominating the spotlight. Many IoT attacks, e.g., the Mirai DDoS attacks, have caused considerable loss to innocent individuals and organizations. This work proposes to leverage biometrics and smart contracts to control access to IoT. Specifically, biometric authentication first needs to be available on a smart device. Second, the IoT owner registers the smart device, IoT devices, and gateway with a blockchain service, e.g., Ethereum, through a smart contract, which defines who can access the IoT devices and requires a deposit of cryptocurrency. An attacker first needs to communicate with the gateway, which will contact the smart contract for verification of the attacker's credentials. The smart contract ensures that the deposit of a malicious user is forfeited to the owner if verification fails during the biometric authentication process at the smart device. This solution leverages biometrics to enable high-level IoT security to benefit the IoT owner, and meanwhile enforces a high penalty to deter adversarial users.
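
A toy, non-authoritative model of the contract logic in plain Python (a real deployment would be an on-chain Ethereum contract; the names and flow here are our assumptions):

    class IoTAccessContract:
        """Toy model of the proposed deposit-and-verify protocol: a user
        stakes a deposit to request access; the gateway reports the
        outcome of the on-device biometric verification; a failed
        verification forfeits the stake to the IoT owner."""
        def __init__(self, owner, required_deposit):
            self.owner = owner
            self.required_deposit = required_deposit
            self.deposits = {}        # user -> staked amount
            self.authorized = set()

        def register(self, user, deposit):
            if deposit < self.required_deposit:
                raise ValueError("insufficient deposit")
            self.deposits[user] = deposit

        def report_verification(self, user, passed):
            if passed:
                self.authorized.add(user)
                return None
            forfeited = self.deposits.pop(user, 0)   # penalty for failure
            return ("transfer", self.owner, forfeited)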

LivDet 2019: Liveness Detection Competition 2019
Stephanie Schuckers, David Yambay, Thirimachos Bourlai
Biometric recognition systems are vulnerable to artificial presentation attacks, such as molds made of silicone or gelatin for fingerprint and patterned contacts for iris. Liveness Detection, or Presentation Attack Detection (PAD), has been a growing field to combat these types of attacks. LivDet 2017 showcased 9 total competitors for the fingerprint system competition and 3 competitors for iris algorithms. We propose to host LivDet 2019 with a similar public dataset for training algorithms and systems. In addition, we propose to build on the cross-dataset competition and the matching-with-liveness evaluation we began in 2017. Additionally, iris will be hosted as an “online” competition to show feasibility. As a possible addition (pending budgetary considerations), we would like to add LivDet-Face, including both visible and NIR images of spoof and live faces.

Mitigation of Demographic Variability in Face Recognition using Relative Skin Reflectance
Trained with a Direct Measure of Skin Reflectance – SP
Stephanie Schuckers (CU), Mahesh Banavar (CU), Keivan Bahmani (CU)
Deep learning-based Automatic Face Recognition (AFR) systems are increasingly being used in high-stakes decision-making scenarios. However, additional research is needed to ensure that face recognition operates effectively for people across the full demographic spectrum [1], through techniques which mitigate potential variability in performance across demographics. The focus of this work is to create and utilize a Relative Skin Reflectance (RSR) measure which can be used to improve the fairness of deep learning-based AFR models across skin color.

Multimodal Data Collection to Support DoD Testing and Evaluation Projects – SP
Jeremy Dawson (WVU)
CCDC Armaments and other defense agencies are faced with a lack of operational biometric data (in the form of face, fingerprint, and iris images) that can be given to contractors and other partners to test and evaluate emerging biometric matching algorithms, methods, etc. The goal of this project is to collect
multimodal biometric data from portable multi-biometric sensors used in theater to build a dataset that can be provided to DoD partners for research purposes.

Multispectral anti-spoofing and liveness detection based on the front-view camera and the screen of a smartphone
Jun Xia (UB)
A pulse oximetry device measures the oxygen saturation level of the blood. The oximeter involves two wavelengths, one at around 660 nm (red) and the other at around 940 nm (near-infrared). Light detection is achieved by a photodiode. The computed oxygen saturation level and the heart pulse rate enables liveness detection. The device is typically placed at the back of the phone as a standalone module. In this project, we will investigate the feasibility of using the smartphone’s screen as the light illumination source and the front in-display camera as the light detector. Combined with in-display fingerprint sensors, this investigation will offer a new liveness detection mechanism for fingerprint biometrics.
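
For illustration, a hedged sketch of the standard ratio-of-ratios SpO2 estimate and FFT-based pulse rate that such a setup would compute (the linear calibration constants are textbook placeholders, not device values):

    import numpy as np

    def spo2_and_pulse(red, ir, fs):
        """Ratio-of-ratios SpO2 estimate and dominant-frequency pulse
        rate from red and near-infrared waveforms sampled at fs Hz."""
        def perfusion(x):                      # crude AC/DC ratio
            return (x.max() - x.min()) / x.mean()
        R = perfusion(red) / perfusion(ir)     # ratio of ratios
        spo2 = 110 - 25 * R                    # textbook linear calibration
        spec = np.abs(np.fft.rfft(ir - ir.mean()))
        freqs = np.fft.rfftfreq(len(ir), 1 / fs)
        band = (freqs > 0.7) & (freqs < 3.5)   # 42-210 bpm physiological band
        pulse_bpm = freqs[band][np.argmax(spec[band])] * 60
        return spo2, pulse_bpm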

Non-Contact Fingerprint Presentation Attacks and Detection (Gateway Project)
Stephanie Schuckers, Arun Ross
Biometric recognition systems are vulnerable to artificial presentation attacks, such as molds made of silicone or gelatin for fingerprint, patterned contacts for iris, and paper printouts for face. We have collected over 100,000 fingerprint images over the past decade as part of the Liveness Detection Competition (LivDet) efforts, which are useful for the development of methods to protect against spoof attacks, also called Presentation Attack Detection (PAD). LivDet datasets from our previous research have been requested over 600 times by requesters from over 40 countries. All previous fingerprint data were from contact-based fingerprint scanners (capacitive and optical). We propose to focus on developing spoof attacks that are appropriate for multi-finger, non-contact fingerprint systems. The attacks will be developed and analyzed according to the FIDO PAD Triage, which divides attacks according to elapsed time, expertise, knowledge of the TOE, access to the TOE/window of opportunity, equipment, and access to biometric characteristics. Attacks will include paper and video display, as well as full fingertip sleeve spoofs made from molds of four fingers. We will also explore full-hand spoofs. With the developed spoofs, we will collect a dataset of over 2,000 spoof and live images from a mobile, non-contact fingerprint system. The dataset will be useful for training and analysis of PAD systems.

Understanding (and Mitigating) the Public Concerns in Biometric Authentication – SP
Wenyao Xu, Srirangaraj (Ranga) Setlur, Mark Frank
Biometric authentication is a remedy to the vulnerabilities of traditional credentials: a unique physical identifier is not only more efficient but also more difficult to steal. However, the use of biometric information comes with many concerns regarding consumer privacy, and there are no national regulatory standards to address these concerns. With the rising economic potential and industrial growth of biometric technology, there will likely be debates over the inherent tradeoff between convenience and privacy at the heart of this innovation. As such, this project proposes to conduct a pilot study with a focus group to understand the public's concerns about biometric authentication. Through interviews and an online survey, top concerns, such as security limitations, privacy, health risks, and inclusiveness, will be identified and summarized. If the project is supported further with additional funds, the team will also develop a set of multimedia-based educational materials to clarify and mitigate these concerns by disseminating the developed materials.