2018 Projects

Multistage Fusion of Biometric Matchers
Sergey Tulyakov (UB), Srirangaraj Setlur (UB), Venu Govindaraju (UB)

Face Anti-Spoofing: A Comprehensive Evaluation
Shan Jia (WVU), Guodong Guo (WVU)

From DNA to Face: Deducing Facial Morphology From Human Genomic Data
Arun Ross (MSU), Jeremy Dawson (WVU), Donald Adjeroh (WVU)

Evaluation of Speaker Recognition Solutions to Guide Prototype Development
Jeremy Dawson (WVU), Nasser Nasrabadi (WVU)

Enabling Secure and Privacy Preserving Authentication via Blockchain/Smart Contract and Biometrics
Yaoqing Liu (CU), Stephanie Schuckers (CU), Saiph Savage (WVU), Jose Alberto Garcia (WVU)

Development and Validation of Radar Based Biometric Recognition (Gateway)
Luke Rumbaugh, Stephanie Schuckers (Clarkson)

Cross-Device Forensic Speaker Verification Using Coupled Deep Neural Networks
Nasser Nasrabadi, Jeremy Dawson, Sina Torfi (WVU)

Deep Fingerprint Matching from Non-Contact to 2D Legacy Rolled Fingerprints
Jeremy Dawson, Nasser Nasrabadi (WVU)

Deep Hashing for Secure Multimodal Biometrics
Matthew Valenti, Nasser Nasrabadi, Veeru Talreja (WVU)

Developing an Automated Method to Remove Labeling Noise in Very Large Scale Dataset
Guodong Guo, Xin Li (WVU)

Incorporating Biological Models for Iris Authentication in Mobile Environments (Phase II)           
Thirimachos Bourlai, Antwan Clark (WVU)

Learning high entropy robust features for privacy preserving facial templates
Sergey Tulyakov, Srirangaraj Setlur, Venu Govindaraju (UB)

Light-weight Machine Learning for Biometric Tasks on IoT Devices
Chen Liu, Stephanie Schuckers (CU)

“Liveness Detection”: Photoacoustic Imaging of Mechanically Accurate Test Phantom Finger
Kwang Oh, Jun Xia (UB)

A Practical Evaluation of Free-text Keystroke Dynamics
Daqing Hou, Stephanie Schuckers (CU)

Research Summaries:

Multistage Fusion of Biometric Matchers

Sergey Tulyakov (UB), Srirangaraj Setlur (UB), Venu Govindaraju (UB)

Biometric systems typically include multiple sensors and comparison algorithms, which have different matching-performance and running-time characteristics. Most existing classifier fusion research focuses on maximizing matching performance and disregards time cost. Multistage fusion algorithms save time by running matching algorithms in sequence and making a decision (accept, reject, or proceed to the next stage) at each stage. We therefore propose to investigate the construction of network-based multistage fusion algorithms that minimize total system cost, including running time.
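The stage-wise decision rule described above can be sketched as a simple cascade. The running-average fusion rule and the per-stage threshold pairs below are illustrative assumptions, not the proposed network-based construction:

```python
# Hypothetical multistage fusion cascade: matchers run in sequence, and at
# each stage the fused score is compared against a (reject, accept)
# threshold pair; only uncertain samples proceed to the next, more
# expensive stage.

def multistage_decision(scores, thresholds):
    """scores: per-stage match scores, in the order the matchers run.
    thresholds: per-stage (reject_below, accept_above) pairs.
    Returns (decision, stages_used)."""
    fused = 0.0
    for stage, (score, (t_rej, t_acc)) in enumerate(zip(scores, thresholds), 1):
        # Simple running-average fusion of the scores seen so far.
        fused = fused + (score - fused) / stage
        if fused >= t_acc:
            return "accept", stage
        if fused <= t_rej:
            return "reject", stage
    # The final stage must decide: fall back to the midpoint of its pair.
    t_rej, t_acc = thresholds[-1]
    return ("accept" if fused >= (t_rej + t_acc) / 2 else "reject"), len(scores)
```

Because confident samples exit early, the expected running time depends on the thresholds, which is exactly the quantity a cost-aware fusion design would tune.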

Face Anti-Spoofing: A Comprehensive Evaluation

Shan Jia (WVU), Guodong Guo (WVU)

Face anti-spoofing distinguishes live human face samples from spoof artifacts, and integrating it into real-world face recognition systems has become increasingly important. In recent years, different face anti-spoofing methods have been proposed to address different types of spoofing attacks, including photo print/electronic display, video replay, and 3D mask attacks. However, it is difficult to tell which methods are more effective against which attack, since there is no systematic evaluation or benchmark of the state-of-the-art methods on a common ground (i.e., using the same databases and protocols). There is therefore a critical need to evaluate face anti-spoofing methods, for both academic researchers and industrial developers, in order to better understand existing techniques and inspire new research. In this project, we propose to quantitatively compare and evaluate existing face anti-spoofing methods using the same datasets and protocols. The evaluations will include more recent and realistic conditions (mobile scenarios and 3D mask attacks) in addition to the traditional photo print/display and video replay attacks. The objective is to identify the most robust and efficient methods for each specific presentation attack. The outcome is a quantitative measure of the performance of various methods for each spoofing attack, which can help the affiliates select appropriate methods for building their anti-spoofing systems.
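A minimal sketch of the kind of common-ground comparison the project proposes: every method is scored on the same live/spoof labels, and a single threshold-free metric (here an approximate Equal Error Rate) is reported per method. The scores and the choice of EER as the metric are illustrative assumptions:

```python
# Approximate Equal Error Rate: sweep a decision threshold over all
# observed scores and report the operating point where the false-accept
# rate (spoofs passed) and false-reject rate (live faces blocked) are
# closest. Higher scores are assumed to mean "more likely live".

def equal_error_rate(live_scores, spoof_scores):
    best_gap, eer = float("inf"), 1.0
    for t in sorted(set(live_scores) | set(spoof_scores)):
        frr = sum(s < t for s in live_scores) / len(live_scores)
        far = sum(s >= t for s in spoof_scores) / len(spoof_scores)
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

Running this per method and per attack type (print, replay, 3D mask) on shared datasets yields exactly the comparison table the summary describes.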

From DNA to Face: Deducing Facial Morphology From Human Genomic Data

Arun Ross (MSU), Jeremy Dawson (WVU), Donald Adjeroh (WVU)

Recent research has explored the potential of deducing the facial morphology of an individual from a DNA sequence of that individual. However, research on this topic is still very much in its infancy, and existing work has not adequately demonstrated that the face image deduced from genomic data can be successfully used for automated face recognition. In this proposal, we seek to extend existing research in the following three ways. First, we will consolidate the genetic markers identified by previous researchers and determine how these can be used to generate a canonical face image of an individual. We use the term “canonical” to indicate that the focus will be on developing an intermediate face representation scheme that is conducive for biometric matching purposes. Second, we will devise ways in which this canonical face image can be successfully used by a face recognition system to match against a digital 2D/3D photograph of the individual. Third, we will develop a novel approach to address a key challenge in this area – that of limited datasets containing both face and DNA data.

Evaluation of Speaker Recognition Solutions to Guide Prototype Development

Jeremy Dawson (WVU), Nasser Nasrabadi (WVU)

Currently, voice biometric modality matching is not a DoD capability. Today, proving that two separate voice samples belong to the same individual requires heavy manual, time-intensive work and specialized expertise. DoD is currently exploring the development of a Voice Recognition Prototype (VRP) system that will deliver a voice recognition system leveraging state-of-the-art commercial voice recognition biometric technology. The VRP project will reduce risk and inform requirements for the receipt, processing, matching, storage, and management of voice biometric modality data, for which capabilities do not currently exist in the system. The goal of the project proposed here is to evaluate current COTS speaker recognition solutions in order to assist in developing requirements for the VRP.

Enabling Secure and Privacy Preserving Authentication via Blockchain/Smart Contract and Biometrics

Yaoqing Liu (CU), Stephanie Schuckers (CU), Saiph Savage (WVU), Jose Alberto Garcia (WVU)

Biometrics is increasingly used for identity authentication; however, an individual has no control over how, or for what purpose, her biometrics will be used. In this project, we propose to combine three authentication factors: something a person knows (e.g., a passphrase), something a person has (e.g., a driver's license or a mobile device), and something a person is (e.g., fingerprints), to enable highly secure yet privacy-preserving authentication via blockchain/smart contract technology. The basic ideas are to leverage the blockchain to store an individual's attestation and the hash value of her biometrics (not the biometric data itself), to use smart contracts to control access to the information on the blockchain, and to utilize a mobile interface for providing the original biometric data and granting permissions. We will explore an architecture in which biometric data is not stored on the blockchain.
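The "store only the hash" idea can be sketched as a salted commitment: the chain would hold the digest, never the biometric itself. The function names and record layout are illustrative, not from the project, and a deployed system would need a fuzzy extractor because raw hashing only tolerates exact template matches:

```python
# Sketch of an on-chain biometric commitment: only (salt, digest) is
# published; the template stays on the user's device.
import hashlib
import os

def enroll(template_bytes):
    """Return the (salt, digest) record that would go on the blockchain."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + template_bytes).hexdigest()
    return salt, digest

def verify(template_bytes, salt, digest):
    """Recompute the salted hash and compare. Note: this only succeeds for
    a bit-exact template, which is why real biometric systems pair hashing
    with error-tolerant encodings (fuzzy extractors/commitments)."""
    return hashlib.sha256(salt + template_bytes).hexdigest() == digest
```

The smart contract's role in the summary is access control over these records; the hashing itself is the privacy layer.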

Development and Validation of Radar Based Biometric Recognition (Gateway)

Luke Rumbaugh, Stephanie Schuckers (Clarkson)

Continuous authentication may provide additional security beyond initial authentication with classic biometrics (e.g., fingerprint, face, iris). This project focuses on a radar system for biometric recognition across a variety of use cases and the associated underlying biometric characteristics (hand, face, wrist, head, heartbeat). This proposal is an exploratory study combined with experimental analysis of the most promising use cases.

Cross-Device Forensic Speaker Verification Using Coupled Deep Neural Networks

Nasser Nasrabadi, Jeremy Dawson, Sina Torfi (WVU)

Forensic speaker recognition (FSR) is the process of determining whether a specific individual (the suspected speaker) is the source of a questioned voice recording (trace). This process involves comparing recordings of an unknown voice (questioned recording) with one or more recordings of a known voice (the voice of the suspected speaker). Forensic speaker verification techniques have shown very good performance in discriminating between voices of different speakers under controlled recording conditions. However, an issue that arises for FSR is the varying, often poor, transmission channel conditions under which investigative recordings are made (e.g., anonymous calls, wiretapping, microphone, cellular telephone). These conditions often cannot be controlled and pose a challenge to automatic speaker verification. In this project, we propose to develop a deep neural network (DNN)-based FSR system that can identify a speaker across different devices with varying levels of channel noise and distortion. Our approach is based on a coupled convolutional neural network (CpCNN) consisting of two identical CNNs that learn to remove the channel differences (bandwidth, sampling rate, and additive noise) between two recording devices for the same speaker and, at the same time, enhance the differences between two different speakers. The input to our CpCNN is the spectrogram, or the Mel Frequency Cepstral Coefficients (MFCC) with Δ and Δ² features, as well as phonetic features (formants F1 to F4 and pitch). The CpCNN is designed to learn to map speech signals from the same speaker (genuine pairs) captured on different input devices into a common latent feature subspace, and simultaneously force the speech signals from different speakers (imposter pairs) to lie far apart in that subspace. There are two major technical contributions in this project: 1) a CpCNN has not previously been applied to FSR, and 2) our proposed CpCNN can process both spectral signatures (STFT, MFCC) and phonetic features (formants F1-F4) as inputs.
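The coupling objective described above resembles a contrastive loss: genuine pairs (same speaker, different devices) are pulled together in the latent space, and imposter pairs are pushed beyond a margin. This numpy sketch of such a loss is an assumption about the training criterion, not the project's exact formulation:

```python
# Contrastive loss over embeddings produced by the two coupled networks.
import numpy as np

def contrastive_loss(emb_a, emb_b, same_speaker, margin=1.0):
    """emb_a, emb_b: latent embeddings from the two coupled CNNs.
    same_speaker: 1 for a genuine pair, 0 for an imposter pair."""
    d = np.linalg.norm(emb_a - emb_b)
    if same_speaker:
        return d ** 2                     # pull genuine pairs together
    return max(0.0, margin - d) ** 2      # push imposters past the margin
```

Minimizing this over genuine and imposter pairs is what forces a common latent subspace for the same speaker across devices while separating different speakers.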

Deep Fingerprint Matching from Non-Contact to 2D Legacy Rolled Fingerprints

Jeremy Dawson, Nasser Nasrabadi (WVU)

Traditional fingerprint images are acquired by pressing or rolling a finger against a platen such as an index card, silicon, or polymer. However, these contact-based approaches often result in low-quality prints in operational scenarios, mainly due to the uncontrollability and nonuniformity of finger pressure, as well as residues from previous fingerprints. Non-contact (NC) fingerprinting devices have recently been introduced to eliminate these drawbacks. Despite their advantages, NC fingerprint technologies face several interoperability challenges when directly matching contactless fingerprint images against legacy databases of contact-based fingerprints, including edge detection, lack of deformation, and 3D image unwrapping. Most approaches to improving interoperability involve creating a 2D equivalent image from a binarized fingerprint photo, structured-light image, or 3D point cloud. The goal of this project is to develop a Deep Neural Network (DNN) algorithm to create a canonical representation of the fingerprint image that both accounts for contact-based distortion in 2D fingerprint images and accurately unwraps NC fingerprint images. Our algorithm is based on a novel Deep Convolutional Generative Adversarial Network (DCGAN) that is trained to generate the 2D rolled-equivalent fingerprint from an NC fingerprint while preserving the geodesic distances between points. There are three major technical contributions in this project: 1) a DCGAN-based method is designed to capture the non-linear mapping from an NC fingerprint to its 2D rolled equivalent, 2) geodesic or Euclidean distances between points on the NC fingerprint are preserved by applying an appropriate cost function on the 2D fingerprints, and 3) a data-driven non-linear mapping (unwrapping) is developed without any parametric model assumption.
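The distance-preservation idea can be sketched directly: sampled ridge points should keep their pairwise (geodesic) separations after the learned unwrapping. This numpy penalty is an assumed form of that cost term, not the project's actual DCGAN loss:

```python
# Penalty comparing geodesic distances measured on the 3D finger surface
# with Euclidean distances between the same points after unwrapping.
import numpy as np

def distance_preservation_loss(geodesic_dists, points_2d):
    """geodesic_dists: precomputed geodesic distances between n sampled
    points on the non-contact finger surface, shape (n, n).
    points_2d: the same points after unwrapping, shape (n, 2)."""
    diffs = points_2d[:, None, :] - points_2d[None, :, :]
    d2d = np.sqrt((diffs ** 2).sum(-1))          # Euclidean in the plane
    return float(((d2d - geodesic_dists) ** 2).mean())
```

A loss of zero means the flattening is isometric for the sampled points, which is the property that keeps ridge spacing compatible with legacy rolled prints.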

Deep Hashing for Secure Multimodal Biometrics

Matthew Valenti, Nasser Nasrabadi, Veeru Talreja (WVU)

Modern biometric systems require secure templates, which represent suitably encrypted biometric features that can be stored in a central database, smartphone, or smartcard. State-of-the-art biometric template protection algorithms can be broadly categorized as following either 1) a biometric cryptosystem approach (encrypted templates) [1] or 2) a transformation-based approach (cancelable templates) [2]. In this project, we propose a hybrid biometric protection scheme (two-level protection) that combines both approaches. The proposed multimodal secure template consists of a novel multimodal cancelable binary template generated by a multimodal deep neural network (DNN) architecture and a fuzzy commitment scheme based on error-control coding. In cancelable biometrics, the original biometric template is transformed by a noninvertible mapping applied either in the original domain or in the feature domain. Generating our cancelable template involves three major technical contributions: 1) a DNN architecture is used to generate a robust multimodal cancelable template from a plurality of modalities (e.g., face and iris or fingerprint), 2) an additional novel binary hashing layer is added to the DNN architecture to optimize its training for the binary codewords required by the fuzzy commitment scheme, and 3) the fuzzy commitment cryptosystem is combined with our multimodal binary cancelable template. Due to the DNN architecture and the dedicated hashing layer, our binary cancelable template is truly a joint multimodal feature representation of all the modalities, which is expected to be robust to signal distortions (i.e., scale, illumination, pose, SNR quality).
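The second protection level can be illustrated with a toy fuzzy commitment: a random key is encoded with an error-correcting code (a 3x repetition code stands in here for a real code), XORed with the binary template, and only a hash of the key is stored. All sizes and the code choice are illustrative, not the project's:

```python
# Toy fuzzy commitment over bit lists. A noisy but close template
# recovers the key (and thus matches the stored hash); a distant one fails.
import hashlib

def commit(key_bits, template_bits):
    codeword = [b for b in key_bits for _ in range(3)]      # repetition encode
    helper = [c ^ t for c, t in zip(codeword, template_bits)]
    digest = hashlib.sha256(bytes(key_bits)).hexdigest()
    return helper, digest                                   # safe to store

def decommit(helper, noisy_template_bits, digest):
    codeword = [h ^ t for h, t in zip(helper, noisy_template_bits)]
    # Majority-vote decode of the repetition code corrects isolated errors.
    key = [int(sum(codeword[i:i + 3]) >= 2) for i in range(0, len(codeword), 3)]
    return hashlib.sha256(bytes(key)).hexdigest() == digest
```

This is why the DNN's hashing layer must emit binary codeword-compatible vectors: the XOR-and-decode construction only works on binary templates with bounded Hamming noise.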

Developing an Automated Method to Remove Labeling Noise in Very Large Scale Dataset

Guodong Guo, Xin Li (WVU)

This project pairs computer and behavioral scientists to develop a database of spontaneous facial expressions, including microexpressions, and physiological measures (e.g., brainwaves, pulse, muscle activity) that will: 1) confirm the presence of the emotional states; 2) support the development of further algorithms to automatically detect the movements; and 3) correlate these with brain functions and physiological responses. These data will facilitate further development of computer vision tools and theory, as well as provide materials for other researchers to enhance the development of their own tools.

Incorporating Biological Models for Iris Authentication in Mobile Environments (Phase II)           

Thirimachos Bourlai, Antwan Clark (WVU)

This work involves an advanced study on mitigating the biological effects of pupil dilation as they pertain to mobile iris identification. Recent investigations have shown that changes in pupil size have a negative effect on iris recognition performance [3]. Furthermore, working in mobile environments to capture constrained and unconstrained iris videos poses additional challenges, including limitations of the mobile platform used and computational complexity. In Phase 1, we engineered a novel pupil dilation normalization scheme that considers the biomechanical properties of the iris while being independent of iris material properties. Additionally, the WVU Mobile Pupil Light Reflex (WVU Mobile-PLR) dataset was constructed, consisting of visible-band PLR videos from various mobile devices. In Phase 2, we propose a continuation of this work with a two-fold focus. First, we will advance our original Phase 1 approach for mitigating the effects of dilation in mobile iris matching and continue testing our new biomechanical normalization technique. Next, we will focus on holistically characterizing the effects of pupil dilation on mobile iris biometrics, connecting the biological phenomenon from template, state, and dilation perspectives [4, 5]. The results of this work can provide comprehensive performance analyses across multiple technical frameworks, while engineering comprehensive processes to mitigate these effects.
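For context, the baseline that biomechanical models refine is the classical linear "rubber sheet" normalization: each radial sample is taken at a fixed fraction of the pupil-to-limbus distance, so templates from different dilation levels land on a common grid. This sketch shows that baseline only, not the project's biomechanical scheme:

```python
# Linear iris normalization: sample the pupil-centred annulus between the
# pupil radius and iris (limbus) radius onto a fixed polar grid.
import math

def rubber_sheet_coords(pupil_r, iris_r, n_radial, n_angular):
    """Return (x, y) sampling points for the iris annulus; the grid shape
    (n_radial x n_angular) is the same regardless of dilation level."""
    points = []
    for i in range(n_radial):
        # Fixed fraction of the pupil-to-limbus distance per radial step.
        r = pupil_r + (iris_r - pupil_r) * i / max(n_radial - 1, 1)
        for j in range(n_angular):
            theta = 2 * math.pi * j / n_angular
            points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```

The linear model assumes iris tissue stretches uniformly with dilation, which is exactly the assumption the biomechanical Phase 1/2 work replaces.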

Learning high entropy robust features for privacy preserving facial templates

Sergey Tulyakov, Srirangaraj Setlur, Venu Govindaraju (UB)

We seek to extend our work on creating privacy-preserving templates for face recognition using deep convolutional neural networks (CNNs). Our prior work (Pandey et al., 2016) presented an algorithm for learning deep CNNs that accept face images as input and produce binary vectors with the following desirable properties: robustness to intra-class variations, maximum Hamming distance separation between enrolled persons, and maximum entropy of each individual feature. Hashing such binary vectors produces well-performing privacy-preserving facial templates. However, this algorithm has one drawback: for enrollment, the user's face images must be used during CNN training along with the face images of other persons, which makes the algorithm difficult to use in typical applications requiring fast enrollment of a single person (e.g., when a single person is enrolled in a smartphone biometric vault, or when a single person is added to a database of existing persons with already hashed templates). In the current work, we propose to address this drawback by exploring possible modifications to the algorithm, including changes to the network training and its optimization criterion, introduction of postprocessing of CNN-generated features, and ways to keep a small set of representative templates to perform fast re-training of the last layers of the deep CNN.
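The three desired properties can be checked directly on generated binary vectors: intra-class robustness and inter-class separation via Hamming distance, and feature entropy via empirical bit frequencies. This numpy sketch of those checks assumes nothing about the CNN itself:

```python
# Evaluation helpers for binary facial templates.
import numpy as np

def hamming(a, b):
    """Number of differing bits between two binary template vectors."""
    return int(np.count_nonzero(a != b))

def per_bit_entropy(binary_vectors):
    """Empirical entropy (in bits) of each feature position across the
    population; 1.0 means a bit is maximally unpredictable, which is the
    'maximum entropy of each individual feature' property."""
    p = np.clip(np.mean(binary_vectors, axis=0), 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
```

High per-bit entropy is what makes the subsequent hashing step secure: low-entropy bits would let an attacker enumerate likely templates.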

Light-weight Machine Learning for Biometric Tasks on IoT Devices

Chen Liu, Stephanie Schuckers (CU)

Machine learning methods have been widely employed in many biometrics applications; one example is facial recognition and standoff sensing for video surveillance and analytics. The new trend is to bring a certain level of biometric identification capability to the IoT devices themselves, to reduce the burden on the backend computing and network infrastructure. However, many machine learning and neural network algorithms normally require a significant level of computation capability, while most IoT devices are low-power with limited computation capability. Porting complex deep learning algorithms such as Deep Neural Networks (DNNs) onto an IoT device is very difficult, if not infeasible. In this project, we propose to develop a Crowd Analysis Engine (CAE) employing light-weight convolutional neural networks (CNNs) targeted towards IoT devices as sensor/analytical platforms for identification and authentication. First, one challenge in such applications is the poor quality of face images. To address this issue, we will implement two light-weight CNNs in series: a 1st-stage CNN for face image quality enhancement and a 2nd-stage CNN for face feature vector extraction. Second, we will use CNN model compression techniques, such as parameter-count reduction and reduced-precision weights, to make the CNN models even lighter. Third, since many IoT devices use ARM processors, we will use the ARM Compute Library (ACL), including low-level vectorization, for further optimization. We will perform a comprehensive analysis of the performance and accuracy trade-off and find the optimal balance point of the light-weight machine learning implementation on IoT devices, compared with implementing full-sized, complex deep learning models.
We anticipate this project will bring us one step closer to smart IoT devices for biometrics applications, making scenarios such as crowd face recognition achievable with low processing latency, low data traffic, and fast response, meeting real-time processing demands.
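The reduced-precision compression step mentioned above can be sketched as standard symmetric int8 quantization: float32 weights are mapped to int8 with a per-tensor scale. The project does not specify its exact scheme, so this is an assumed illustration:

```python
# Symmetric per-tensor int8 weight quantization: a 4x size reduction over
# float32, at the cost of bounded rounding error.
import numpy as np

def quantize_int8(weights):
    max_abs = float(np.abs(weights).max())
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale
```

The accuracy cost of this rounding error, versus the latency and memory gains on an IoT-class processor, is precisely the trade-off the project proposes to analyze.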

“Liveness Detection”: Photoacoustic Imaging of Mechanically Accurate Test Phantom Finger

Kwang Oh, Jun Xia (UB)

Deep-tissue photoacoustic imaging typically relies on high-power, bulky light sources emitting near-infrared light, and signal contrast mainly comes from hemoglobin molecules located in major blood vessels. For imaging of superficial structures, such as fingerprints, the required excitation wavelength will be significantly different from those used in deep-tissue imaging applications. The shallow depth will also considerably reduce requirements on laser output power, so that low-power, compact light sources can be utilized for signal excitation. Our proposal aims to develop a compact, low-cost photoacoustic device for imaging fingerprints. If successful, this new technique will open up avenues for better liveness detection and biometric measurements.

A Practical Evaluation of Free-text Keystroke Dynamics

Daqing Hou, Stephanie Schuckers (CU)

Free-text keystroke dynamics is a behavioral biometric with strong potential to offer unobtrusive and continual user authentication. Unfortunately, because only limited data was available until recently, this modality has not been tested adequately for practical deployment. Using a novel large dataset of free-text keystrokes from our ongoing data collection, we propose to evaluate the practicality of keystroke dynamics while respecting the temporal order of the data. Specifically, the proposal will study the trade-off between the sizes of user profiles/test samples and authentication performance as well as speed, how often authentication can and should be run, and the ability to detect imposter attacks of varying sizes. In all cases, the emphasis will be on finding parameter settings that allow for more frequent and faster authentication while maintaining practically acceptable performance.
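The core free-text feature can be sketched as digraph latencies: the time between consecutive key presses for each letter pair, compared between an enrolled profile and a test sample. The data layout and the mean-absolute-difference distance are illustrative assumptions, not the project's evaluation protocol:

```python
# Digraph-latency features for free-text keystroke dynamics.

def digraph_latencies(keystrokes):
    """keystrokes: list of (key, press_time_ms) in temporal order.
    Returns {(key1, key2): mean latency in ms} over the sample."""
    sums, counts = {}, {}
    for (k1, t1), (k2, t2) in zip(keystrokes, keystrokes[1:]):
        pair = (k1, k2)
        sums[pair] = sums.get(pair, 0.0) + (t2 - t1)
        counts[pair] = counts.get(pair, 0) + 1
    return {p: sums[p] / counts[p] for p in sums}

def profile_distance(profile, sample):
    """Mean absolute latency difference over shared digraphs; None when
    the sample shares no digraphs with the profile (too little text)."""
    shared = profile.keys() & sample.keys()
    if not shared:
        return None
    return sum(abs(profile[p] - sample[p]) for p in shared) / len(shared)
```

The profile/sample size trade-off the summary mentions shows up directly here: shorter samples share fewer digraphs with the profile, so decisions are faster but noisier.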