2017 Projects

Biometric Aging in Children – Phase II
D. Rissacher, S. Schuckers, L. Holsopple; 17S-03C

Mitigate Compression Artifacts for Face in Video Recognition
Chen Liu (CU), Stephanie Schuckers (CU); 17S-05C

Single-shot contact-less fingerprints using ubiquitous devices
Natasha Kholgade Banerjee (CU), Sean K. Banerjee (CU), Stephanie Schuckers (CU), David Yambay (CU); 17S-07C

Creation and Implantation of a Test Target into a Test Phantom Finger for the Ultrasonic and Photoacoustic Characterization of a Biometric System
Kwang Oh (EE, UB), Jun Xia (BME, UB)

A Dynamic Multi-Camera Topology-Aware Time-Tapestry for Surveillance
N. M. Nasrabadi (WVU), Domenick Poster (WVU)

Document Facial Image Restoration via Deep Learning & Variational Inpainting
Antwan Clark (WVU), Thirimachos Bourlai (WVU)

Exploring and Benchmarking Deep Learning Techniques for Biometric Applications
Guodong Guo (WVU)

Latent Fingerprint Verification Using a Deep Siamese Neural Network
Jeremy Dawson, Nasser Nasrabadi

Restoration of Distorted Fingerprints by Deep Learning
Dawson (WVU) and N. M. Nasrabadi (WVU)

Incorporating Biological Models for Iris Authentication in Mobile Environments
Bourlai and A. Clark (WVU)

Cross-spectral Face Recognition Using Thermal Polarization for Night Surveillance
M. Nasrabadi (WVU) and T. Bourlai (WVU)

Cross-Age Face Recognition in Non-Ideal Images
Guodong Guo (WVU)

Wardrobe Models for Long Term Re-Identification and Appearance Prediction
Prof. Napp, Prof. Setlur, Prof. Govindaraju (UB)

Liveness Detection Competition 2017
Stephanie Schuckers, David Yambay, Mayank Vatsa, Afzel Noore (CU)

Impact of Cultural Factors in Trusting Biometric Technology
Zhaleh Semnani-Azad, Stephanie Schuckers (CU)

Summaries:


Biometric Aging in Children – Phase II
D. Rissacher, S. Schuckers, L. Holsopple; 17S-03C
The study of biometric recognition in children has recently gained interest in support of applications such as immigration, refugee management, and distribution of benefits. We have completed two collections (with a third scheduled for April 2017) in ~175 children, ages 4 through 10 years old, in cooperation with Potsdam Elementary School. The collections are separated by 6 months and include fingerprint, footprint, iris, face, voice, and ear. We propose to continue collecting from these same children, as well as to add pre-kindergarten students each school year. The data will be analyzed toward multiple goals: 1) the earliest age at which each modality is viable (within the ages studied), and 2) the variability of each modality with age. Additionally, this shareable dataset will enable research on models that account for age variation and on age-measurement metrics.


Mitigate Compression Artifacts for Face in Video Recognition
Chen Liu (CU), Stephanie Schuckers (CU); 17S-05C
Face in video recognition (FiVR) is widely used in video surveillance and video analytics. Various solutions have been proposed to improve the performance of face detection, frame selection, and face recognition in FiVR systems. However, all of these methods share an inherent "ceiling" defined by the source video's quality. One key factor causing face image quality loss is video compression. To address this challenge, we propose a solution that mitigates compression artifacts such as blocking, blurring, and speckling effects to improve the performance of FiVR systems. In the first stage of this project, we will analyze and quantify the effects of video compression standards (MPEG, MJPEG, H.264, etc.) on FiVR performance. Compared with existing statistical methods, deep-learning-based approaches have shown impressive capability in abstracting both high-level and low-level features in vision tasks. In the second stage, we will therefore employ deep-learning-based algorithms to mitigate artifacts in the compressed input video. Unlike existing artifact-reduction approaches, which target reducing the loss between the ground truth and the compressed image or video frame, we will compare the loss of Haar, LBP, HOG, and CNN-based features, which are directly related to FiVR. We anticipate that this project's outcome will make FiVR systems more adaptable to application scenarios such as video surveillance and video analytics with varying compression quality.
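As a rough illustration of measuring compression loss in a recognition-relevant feature space rather than in pixel space, the sketch below (our own simplified example, not the project's actual pipeline) compares LBP histograms of a frame before and after a crude stand-in for compression, namely coarse intensity quantization:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbor Local Binary Pattern histogram of a grayscale image."""
    c = img[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    # 8 neighbors, one bit per neighbor comparison
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((n >= c).astype(np.uint8) << bit)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

def feature_loss(ref, degraded):
    """L1 distance between LBP histograms of reference and degraded frames."""
    return np.abs(lbp_histogram(ref) - lbp_histogram(degraded)).sum()

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
# Crude stand-in for compression: coarse intensity quantization
compressed = (frame // 32) * 32

loss = feature_loss(frame, compressed)
```

The same harness could be repeated with HOG or CNN features to quantify how much each descriptor degrades under a given codec setting.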


Single-shot contact-less fingerprints using ubiquitous devices
Natasha Kholgade Banerjee (CU), Sean K. Banerjee (CU), Stephanie Schuckers (CU), David Yambay (CU); 17S-07C
This project performs fingerprint recognition on contactless nail-to-nail fingerprints taken using low-cost cameras, such as GoPro devices, compared against contact-based fingerprints in existing databases taken using traditional scanners. The approach allows the reconstruction of all ten fingers and segmentation to form nail-to-nail images appropriate for fingerprint recognition. Fingerprints will be captured without contact using one or more ubiquitous devices, and 3D fingerprints will be constructed from multiple images of all fingers following a standard 4-4-2 fingerprint capture. The fingerprints will be compared to single-press non-rolled scans as well as rolled fingerprint scans.


Creation and Implantation of a Test Target into a Test Phantom Finger for the Ultrasonic and Photoacoustic Characterization of a Biometric System
Kwang Oh (EE, UB), Jun Xia (BME, UB)
Phase 1 (2016-17) centered on replicating the dynamic features of the human finger for liveness detection. It resulted in a finger test phantom that included anatomical features (e.g., digital arteries, bone, fat, muscle, and a 3D capillary network). Phase 2 (2017-18) of this project will focus on (1) implanting a test target internally to allow for the baselining of an ultrasonic biometric fingerprint subdermal imaging system and (2) using photoacoustic imaging techniques to image feature sets such as capillaries, blood flow, and fingerprints.


A Dynamic Multi-Camera Topology-Aware Time-Tapestry for Surveillance
N. M. Nasrabadi (WVU), Domenick Poster (WVU)
The demand for multi-camera video surveillance is rapidly increasing. However, due to the large number of cameras and the volume of data involved, it is difficult for a user to locate and follow the activities of several suspicious individuals. Many monocular video summarization (synopsis) methods have been proposed [1]-[2], which condense a single video into its short key contents by extracting several keyframes (clips) for tracking objects. In this proposal, we are interested in generating an object-oriented video synopsis in the form of a time-tapestry for the non-overlapping views of a multi-camera surveillance system. The proposed algorithm will track suspicious individuals (objects), annotated by a user in the master camera, across the different camera videos and condense the extracted keyframes into a time-tapestry for visualization. Each tapestry can be considered one large video canvas dedicated to tracking one object. Tracking multiple objects will produce multiple tapestries in which objects of interest are tracked and videos are condensed to help the user perform fast analysis. After a suspicious individual (or an important event) is annotated, features extracted from the object (classical descriptors or object attributes) and the camera topology map, which is critical for a coherent video tapestry, are used to track and re-identify the same individual across all the video cameras and produce a time-tapestry for each object. Each time-tapestry is updated dynamically to render a condensed version of each object-oriented activity from all the camera views. The novelty of our approach is that the video tapestry preserves the following important qualities: it is camera-topology-aware, coherent, chronological, and continuous, and it captures the complete time activity of the object. Machine learning algorithms will also be developed to post-analyze the interactions between objects within our time-tapestries.
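The keyframe extraction that underlies any synopsis can be sketched very simply. The example below (our own illustrative greedy baseline, not the project's algorithm, which adds tracking and topology awareness) keeps a frame only when it differs sufficiently from the last kept keyframe:

```python
import numpy as np

def extract_keyframes(frames, threshold=10.0):
    """Greedy keyframe selection: keep a frame when its mean absolute
    difference from the last kept keyframe exceeds a threshold."""
    keep = [0]
    for i in range(1, len(frames)):
        if np.abs(frames[i] - frames[keep[-1]]).mean() > threshold:
            keep.append(i)
    return keep

# Synthetic video: a static scene with an abrupt change at frame 5
rng = np.random.default_rng(1)
static = rng.integers(0, 256, size=(32, 32)).astype(np.float64)
frames = [static + rng.normal(0, 1, static.shape) for _ in range(5)]
frames += [255.0 - static + rng.normal(0, 1, static.shape) for _ in range(5)]

keys = extract_keyframes(frames, threshold=10.0)
```

A real system would replace the pixel difference with object-level features and stitch the kept frames from multiple cameras into the tapestry canvas.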


Document Facial Image Restoration via Deep Learning & Variational Inpainting
Antwan Clark (WVU), Thirimachos Bourlai (WVU)
Facial recognition (FR) has advanced significantly for facial photos that are completely frontal, have good image quality, and contain minimal pose, illumination, and expression variation [1, 2]. However, developing an automatic algorithm that successfully matches degraded facial images of a subject against their higher-resolution counterparts remains a challenge. An example scenario is document-to-live facial identification, where legacy high-quality facial images acquired by government agencies are matched against facial photos from various identity documents (e.g., driver's licenses, state identification cards, passports, and refugee documents). Here, the combination of lamination and security watermarks, which are document-related factors, presents a challenge for identification because these factors can distort the facial content of the images [1, 2]. Sophisticated processes are therefore needed to mitigate these effects, and variational inpainting methods have shown promise in improving overall image quality as well as facial recognition accuracy [3-5]. This proposal aims to explore variational image inpainting methods within the context of deep learning to improve restoration while evaluating their overall effectiveness. Deep learning algorithms have shown much promise in the machine learning and computer vision arenas; considering both concepts together should provide robust methodologies for mitigating the degradation of document images while improving facial recognition accuracy.
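In its simplest form, variational inpainting fills an occluded region by minimizing a smoothness energy subject to the known pixels. The sketch below (our own minimal example; the proposal targets richer models combined with deep learning) performs harmonic inpainting, i.e., Jacobi iteration for Laplace's equation, on a square watermark-like occlusion:

```python
import numpy as np

def harmonic_inpaint(img, mask, iters=500):
    """Fill masked pixels by minimizing the Dirichlet energy: repeatedly
    replace each unknown pixel with the mean of its 4-neighbors
    (Jacobi iteration for Laplace's equation)."""
    out = img.copy()
    out[mask] = out[~mask].mean()  # rough initialization
    for _ in range(iters):
        avg = 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
                      np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[mask] = avg[mask]  # known pixels stay fixed
    return out

# Smooth synthetic image patch with a square occlusion
y, x = np.mgrid[0:32, 0:32]
clean = (x + y).astype(np.float64)
mask = np.zeros_like(clean, dtype=bool)
mask[12:20, 12:20] = True
occluded = clean.copy()
occluded[mask] = 0.0

restored = harmonic_inpaint(occluded, mask)
err = np.abs(restored[mask] - clean[mask]).mean()
```

For this linear-ramp patch the harmonic solution recovers the hole exactly; real document images need edge-preserving (e.g., total-variation) or learned priors, which is precisely the gap the project addresses.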


Exploring and Benchmarking Deep Learning Techniques for Biometric Applications
Guodong Guo (WVU)
Deep learning techniques have been applied successfully to many real-world problems [1], from playing Go (AlphaGo) to skin cancer detection and speech/object recognition. Machines based on deep learning can even outperform humans in some tasks. Deep learning networks are also applied in the biometrics field, for example to face recognition, where learned deep features perform much better than traditional approaches to face matching. However, training a deep network is usually a very time-consuming process. Further, there are different hardware platforms, such as CPUs and GPUs (and GPUs come in many models at different prices). There are also different software tools, such as Caffe, CNTK, MXNet, TensorFlow, and Torch. Even for the specific problem of face recognition, many deep networks have been developed, such as AlexNet, VGG, GoogLeNet, and FaceNet. In training deep networks, there are also many parameters to adjust. All of this makes it very difficult for end users to select the appropriate hardware platform, software tools, network structure, and parameter settings (e.g., loss function, learning rate, batch size, etc.).
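The core of any such benchmark is a harness that times the same workload across configurations. The toy sketch below (our own illustration, using a NumPy stand-in for a network rather than any of the frameworks named above) shows the shape such a harness might take, sweeping batch size and reporting the best of several repeats:

```python
import time
import numpy as np

def benchmark(layer_sizes, batch_sizes, repeats=3):
    """Time forward passes of a toy fully-connected ReLU network for each
    batch size, mimicking a throughput benchmark across configurations."""
    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((m, n)) / np.sqrt(m)
               for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
    results = {}
    for b in batch_sizes:
        x = rng.standard_normal((b, layer_sizes[0]))
        best = float("inf")
        for _ in range(repeats):
            t0 = time.perf_counter()
            h = x
            for w in weights:
                h = np.maximum(h @ w, 0.0)  # ReLU layer
            best = min(best, time.perf_counter() - t0)
        results[b] = best  # best-of-repeats wall time in seconds
    return results

timings = benchmark([256, 512, 512, 128], batch_sizes=[1, 32, 128])
```

A real benchmark would swap the inner loop for a Caffe/TensorFlow/Torch model and additionally sweep the hardware platform, network architecture, and training hyperparameters.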


Latent Fingerprint Verification Using a Deep Siamese Neural Network
Jeremy Dawson, Nasser Nasrabadi
The classical approach for matching latent fingerprints (partial prints found on the surfaces of objects) against plain/rolled fingerprints is based on finding corresponding matches between the minutiae from the latent print and those from legacy 10-print databases. The main challenge is that the number of minutiae available in a latent fingerprint is usually not sufficient to obtain a high-rank match against images in the 10-print gallery. We therefore propose to design a latent fingerprint verification algorithm that simultaneously exploits all the available fingerprint feature maps, such as ridge orientation, ridge frequency, ridge flow, the ridge quality/reliability map, and the Gabor-enhanced fingerprint image, as well as the extracted minutiae. Our proposed latent fingerprint verification is based on a novel multi-feature Siamese deep neural network (DNN) architecture that operates simultaneously on multiple fingerprint feature maps. The proposed Siamese DNN verification algorithm will learn a multi-feature discriminative distance-metric embedding that simultaneously maps all the input feature maps from the latent and plain/rolled fingerprints into two low-dimensional feature vectors for comparison. The Siamese DNN consists of two sets of identical convolutional neural networks (CNNs) trained by minimizing a contrastive loss function, which fuses the feature maps and forces those of the latent and plain/rolled impressions of the same finger to become closer in the embedded domain. There are three major technical contributions in this project: 1) DNN-based fusion of multiple fingerprint feature maps, 2) multi-feature nonlinear Siamese distance-metric learning, and 3) fusion of minutiae-based and feature-map-based latent fingerprint verification.
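The contrastive loss named above has a standard form: genuine pairs are pulled together in the embedding space, while impostor pairs are pushed apart up to a margin. A minimal NumPy sketch of that objective (illustrative toy embeddings, not the project's CNN outputs):

```python
import numpy as np

def contrastive_loss(e1, e2, same, margin=1.0):
    """Contrastive loss on a batch of embedding pairs.
    same=1: pull embeddings together; same=0: push apart up to margin."""
    d = np.linalg.norm(e1 - e2, axis=1)          # Euclidean distances
    pos = same * d**2                            # genuine-pair penalty
    neg = (1 - same) * np.maximum(margin - d, 0.0)**2  # impostor penalty
    return 0.5 * np.mean(pos + neg)

# Two genuine pairs (identical embeddings) and one close impostor pair
e1 = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
e2 = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.0]])
same = np.array([1, 1, 0])

loss = contrastive_loss(e1, e2, same, margin=1.0)
```

In the proposed network, `e1` and `e2` would be the low-dimensional vectors produced by the twin CNNs from the latent and plain/rolled feature maps, and minimizing this loss drives the metric learning.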


Restoration of Distorted Fingerprints by Deep Learning
Dawson (WVU) and N. M. Nasrabadi (WVU)

Automatic fingerprint technology has become a highly accurate method for identifying individuals in commercial as well as DoD applications. However, challenging problems remain with low-quality or distorted fingerprints. Degradation or distortion of a fingerprint can be photometric (non-ideal skin conditions, a dirty sensor surface, latent fingerprints) or geometric (skin distortion due to an uncooperative person) [1]. In this project, we are interested in developing an algorithm that corrects geometric elastic deformations caused by the flexibility of the skin or by lateral force or torque purposely introduced by an uncooperative person during fingerprinting. This kind of distortion can be seen in the classical FVC2004 DB1 fingerprint database, and a fingerprint matcher will not be able to identify the individual. To solve this problem, a number of techniques have been developed that make the fingerprint matcher tolerant to distortion [2, 3] or learn the nonlinear deformation from a training set [1, 4]. In this project, we propose to use a deep learning architecture (an auto-encoder) that learns to correct various types of geometric distortion from a large database consisting of pairs of distorted and normal fingerprints. The input to our deep auto-encoder will be a distorted fingerprint, and its output will be the rectified version of the normal fingerprint. Our proposed data-driven auto-encoder not only implicitly learns the nonlinear distortion patterns but also corrects distortions similar to those in the training dataset. In this project we will use the available FVC2004 DB1 and Tsinghua DF databases, which contain pairs of distorted and normal fingerprints. Our approach is novel in that deep learning auto-encoders have not previously been used for distorted fingerprint correction.
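The training setup, learning a distorted-to-rectified mapping from paired data, can be illustrated in miniature. The sketch below (a drastic simplification we constructed for illustration: a single linear layer on 8-D vectors, with a fixed linear warp standing in for elastic distortion, rather than a deep convolutional auto-encoder on images) shows the paired-supervision idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: "fingerprints" are 8-D vectors; a fixed linear warp
# stands in for geometric distortion.
A = np.eye(8) + 0.2 * rng.standard_normal((8, 8))  # distortion operator
clean = rng.standard_normal((200, 8))
distorted = clean @ A.T

# Single linear layer trained by gradient descent to map distorted -> clean,
# standing in for the encoder-decoder of the proposed auto-encoder
W = np.zeros((8, 8))
lr = 0.05
for _ in range(5000):
    pred = distorted @ W.T
    grad = (pred - clean).T @ distorted / len(clean)  # MSE gradient
    W -= lr * grad

rectified = distorted @ W.T
err = np.abs(rectified - clean).mean()
baseline = np.abs(distorted - clean).mean()  # error if nothing is corrected
```

The real auto-encoder replaces the linear map with stacked nonlinear layers so that it can represent the nonlinear elastic deformations found in FVC2004 DB1 and Tsinghua DF, but the supervision signal, paired distorted and normal prints, is the same.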


Incorporating Biological Models for Iris Authentication in Mobile Environments
Bourlai and A. Clark (WVU)

Over the past decade, iris recognition technology has matured, expanding from access control to the security of mobile devices. However, recent studies demonstrate the impact of pupil dilation on the matching accuracy of iris recognition algorithms, with investigations conducted solely from the dilation [4-6], deformation [1, 2], template [3], and state [5] perspectives. Furthermore, working in mobile environments poses additional restrictions, such as platform fragmentation and data limitations. There is therefore a need to incorporate the physiological aspects of dilation while also considering the additional constraints of mobile environments. This proposal aims to consider these perspectives together in order to engineer a universal process for improved recognition in mobile environments. The results of this work can benefit the iris recognition community by advancing current iris recognition algorithms so that they are effective across multiple technical environments. Furthermore, these results aim to provide additional insights into performance metrics within the mobile framework. The three main tasks are stated in the "Experimental Plan" section below.


Cross-spectral Face Recognition Using Thermal Polarization for Night Surveillance
M. Nasrabadi (WVU) and T. Bourlai (WVU)

Face recognition (FR) in the visible spectrum is sensitive to illumination variations and is not practical in low-light or nighttime surveillance. In contrast, thermal imaging is ideal for nighttime surveillance and intelligence-gathering operations. However, conventional thermal imaging lacks the textural details that can be obtained from polarimetric signatures. Such signatures can be used to infer face-based surface features and enhance nighttime human identification. In this project, we propose to explore novel thermal polarimetric signatures to extract subtle surface features of the face and improve cross-spectral thermal FR performance. The state of polarization at each pixel can be represented by a Stokes vector, a four-component vector whose elements are functions of the optical field's polarization. The first component of the Stokes vector is the total intensity (the conventional thermal image itself), and the remaining three components are differences between polarization states. Our proposed algorithm is based on the framework of sparse representation theory, which uses a set of dictionaries (a multi-polarimetric dictionary), where each dictionary is dedicated to a particular Stokes polarization component (Stokes image). This multi-polarimetric dictionary will be used to jointly map the information in the visible image and all the polarimetric Stokes images into a common surrogate feature space (a sparse coefficient vector). A classifier will then be designed in this common feature space, using a gallery of visible-spectrum images, to identify the thermal polarimetric Stokes image probes used for cross-spectral FR. The innovation in our approach is the use of polarimetric Stokes signatures and the development of a common surrogate feature space relating the visible images to the thermal polarimetric Stokes images. This project will assess the overall performance improvement from using polarimetric signatures for cross-spectral FR.
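For the linear polarization components, the Stokes images are simple combinations of intensity images taken behind polarizers at different orientations. The sketch below (our own illustration with scalar intensities; a polarimetric camera produces whole images, and the fourth, circular component requires a retarder) computes the linear Stokes components and the degree of linear polarization:

```python
import numpy as np

def stokes_from_polarimeter(i0, i45, i90, i135):
    """Linear Stokes components from intensities measured behind
    polarizers at 0, 45, 90, and 135 degrees."""
    s0 = i0 + i90      # total intensity (the conventional image)
    s1 = i0 - i90      # horizontal vs. vertical polarization
    s2 = i45 - i135    # +45 vs. -45 degree polarization
    return s0, s1, s2

# Fully horizontally polarized light of unit intensity
i0, i45, i90, i135 = 1.0, 0.5, 0.0, 0.5
s0, s1, s2 = stokes_from_polarimeter(i0, i45, i90, i135)
dolp = np.sqrt(s1**2 + s2**2) / s0  # degree of linear polarization
```

Applied per pixel to thermal imagery, `s1` and `s2` carry the surface-orientation detail that conventional thermal images (`s0` alone) lack, which is what the multi-polarimetric dictionary exploits.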


Cross-Age Face Recognition in Non-Ideal Images
Guodong Guo (WVU)

Face recognition (FR) performance is affected significantly by aging in human faces. Aging is inevitable and continuous for all living people. Facial aging can change facial appearance greatly, making it very challenging to match face images across age progression. While some progress has been made over the past decade in dealing with pose, illumination, and expression (PIE) changes, cross-age face recognition remains very hard. It is even harder to perform face recognition with aging under non-ideal conditions: when facial aging is coupled with pose, illumination, and expression changes, it brings new challenges, which can be called A-PIE (aging, pose, illumination, and expression changes).

In this project, we propose to study a relatively new problem: cross-age face recognition in non-ideal face images. We will study the influence of aging (A) on face recognition in a quantitative manner, coupled and/or decoupled with PIE. The objective is to explore the aging effect on face recognition quantitatively and compare it to the traditional pose, illumination, and expression changes. The outcome of the project will have a significant impact on practical face matching, where a large number of non-ideal face images with aging exist.


Wardrobe Models for Long Term Re-Identification and Appearance Prediction
Prof. Napp, Prof. Setlur, Prof. Govindaraju (UB)

Clothing has been used extensively as a soft biometric for tracking and re-identification, both explicitly, e.g., [1], and implicitly through color/texture appearance models [2]. We believe that this signal has not been fully exploited. Clothing, specifically a wardrobe (a collection of clothes), could be used over much longer time scales than a typical tracking re-identification task. We propose to build an explicit wardrobe model and investigate how it can be used as a soft biometric signal in long-term and long-range identification tasks. Once a wardrobe model is learned, it can be used in camera views where subjects are too small for other modalities.
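The implicit color/texture appearance models cited above often reduce to comparing color histograms of clothing regions. The sketch below (our own minimal example on synthetic pixel sets, not the proposal's wardrobe model) shows such a descriptor and a histogram-intersection similarity:

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Per-channel RGB histogram, concatenated and L1-normalized,
    as a crude clothing-appearance descriptor."""
    h = np.concatenate([np.histogram(pixels[:, c], bins=bins,
                                     range=(0, 256))[0] for c in range(3)])
    return h / h.sum()

def similarity(h1, h2):
    """Histogram intersection similarity; 1.0 for identical histograms."""
    return np.minimum(h1, h2).sum()

# Synthetic pixel samples from a "red shirt" and a "blue shirt"
rng = np.random.default_rng(2)
red_shirt = np.column_stack([rng.integers(200, 256, 500),
                             rng.integers(0, 50, 500),
                             rng.integers(0, 50, 500)])
blue_shirt = np.column_stack([rng.integers(0, 50, 500),
                              rng.integers(0, 50, 500),
                              rng.integers(200, 256, 500)])

h_red, h_blue = color_histogram(red_shirt), color_histogram(blue_shirt)
```

A wardrobe model would go beyond this per-sighting descriptor by maintaining a distribution over such descriptors per person, which is what makes re-identification across days, and across clothing changes, possible.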


Liveness Detection Competition 2017
Stephanie Schuckers, David Yambay, Mayank Vatsa, Afzel Noore (CU)

Fingerprint and iris recognition systems are vulnerable to artificial presentation attacks, such as molds made of silicone or gelatin for fingerprint and patterned contact lenses for iris. Liveness detection, or presentation attack detection (PAD), has been a growing field aimed at combating these types of attacks. LivDet 2015 showcased 13 total competitors for the fingerprint competition and 3 competitors for iris. We propose to host LivDet 2017 with a similar public dataset for training algorithms and systems. In addition, we propose to incorporate updated spoof attacks to showcase advances in the field of spoofing, such as 3D-printed molds for fingerprint and mobile capture for iris. We will continue testing submitted fingerprint and iris full hardware/software systems, with a particular emphasis on mobile devices. Analysis of performance will capture the state of the art in the field as new technologies and algorithms evolve.


Impact of Cultural Factors in Trusting Biometric Technology
Zhaleh Semnani-Azad, Stephanie Schuckers (CU)

Societal acceptance of biometric technology is complex and highly dependent on trust. People often perceive biometric technology as a 'machine profiling' entity, where the person being profiled has no access to the knowledge (e.g., the database) used to categorize them. This aspect of biometrics can lower trust, which develops from transparency and the disclosure of information. Trust captures the extent to which people are comfortable being vulnerable to another entity, with the expectation that the entity will not exploit them. Yet trust is highly contingent on cultural and societal norms. The limited work on trust in biometrics consists mostly of anecdotal evidence and correlational patterns associated with familiarity and confidence in different types of biometrics. There are two general limitations of the current literature. First, to our knowledge, no research systematically examines the impact of cultural factors, such as the tightness versus looseness of social norms, on general trust toward different types of biometrics; most work on cultural influences on trust has been done in the context of interpersonal trust. Second, most of the very limited work studying trust in biometrics has been abstract and suggestive, without empirical validation. To address these issues, the overall objective of our research is to develop a fundamental understanding of the general principles and factors pertaining to trust in biometrics, and of how trust mediates the acceptance of biometrics across various cultural norms.