2011 Projects

Iris Segmentation Quality Analysis: Evaluation & Rectification (Phase 2)
Nathan Kalka, Bojan Cukic, and Arun Ross (WVU)

Effects of Environmental Conditions on Middle to Long Range Face Recognition in the Dark
Thirimachos Bourlai

Matching Face Images Acquired from Mobile Devices to a Large Gallery
Anil K. Jain (MSU)

Separating Overlapping Fingerprints and Palm‐prints
Anil K. Jain (MSU)

Multimodal Fusion for Stand‐off Identity and Intent at 10‐25 Meters
Stephanie Schuckers (Clarkson), Jeremiah Remus (Clarkson), William Jemison, and Judee Burgoon (U of A)

Automated Rigidity Detection in Automated Screening
Nathan Twyman and Judee Burgoon (U of A)

Evaluating the Robustness of Eye Tracking to Mental and Physical Countermeasures
Ryan Schuetzler, Jeffrey Proudfoot, and Jay Nunamaker (U of A)

Heterogeneous Face Recognition
Anil K. Jain (MSU)

Generalized Additive Models for Biometric Fusion and Covariate Analysis
Mark Culp and Arun Ross (WVU)

Feasibility Study of an International Biometrics Data Portal
Michael Schuckers (St. Lawrence), Stephen Elliott (Purdue), Bojan Cukic (WVU) and Stephanie Schuckers (Clarkson)

Facial Metrology for Human Classification
Don Adjeroh, Thirimachos Bourlai and Arun Ross (WVU)

LivDet II Fingerprint Liveness Detection Competition 2011
Stephanie Schuckers (Clarkson)

Post Mortem Ocular Biometrics Analysis
Reza Derakhshani (U Missouri, Kansas City) and Arun Ross (WVU)

A Standardized Framework for a Heterogeneous Sensor Network for Real‐Time Fusion & Decision Support
Aaron Elkins, Doug Derrick, Jeff Proudfoot, Judee Burgoon and Jay Nunamaker (U of A)

Comparison of Methods for Identification & Tracking of Facial & Head Features Related to Deception & Hostile Intent
Judee Burgoon (U of A), Senya Polikovsky (U of Tsukuba), Dimitris Metaxas (Rutgers), Jeff Jenkins (U of A) and Fei Yang (Rutgers)

Establishing Deceptive Behavior Baselines for Eye‐Tracking Systems
Jay Nunamaker, Doug Derrick, Jeff Proudfoot and Nathan Twyman (U of A)

Summaries:

Iris Segmentation Quality Analysis: Evaluation & Rectification (Phase 2)

Nathan Kalka, Bojan Cukic, and Arun Ross (WVU)

Traditional iris recognition systems operate in highly constrained environments, resulting in the acquisition of iris images of sufficient quality for subsequent stages of processing to perform successfully. However, when acquisition constraints are relaxed, as in surveillance or iris-on-the-move systems, the fidelity of subsequent processing stages becomes questionable. Research has found that segmentation is arguably the dominant factor driving the matching performance of iris recognition systems. Therefore, the ability to automatically discern whether iris segmentation failed prior to matching has many applications, including the ability to discard images with erroneous segmentation. More importantly, it provides an opportunity to rectify failed segmentation. In this project we plan to leverage our work from Phase 1 into a unified framework capable of simultaneously evaluating and rectifying iris segmentation, for the purpose of improving iris recognition performance. The framework will be extended to include: (1) novel iris segmentation evaluation strategies utilizing region, boundary, and contextual information; (2) an additional rectification strategy based on regularization of the iris segmentation search space; and (3) a novel segmentation prediction model which automatically selects the segmentation methodology and algorithms most likely to correctly segment the input image.
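As a concrete illustration of a boundary-based evaluation cue, the sketch below scores a candidate pupil circle by the mean image-gradient magnitude sampled along it; a correct boundary should sit on a strong intensity edge. The circle parameterization and scoring rule are illustrative assumptions, not the project's actual algorithm.

```python
# Sketch: score a candidate iris/pupil segmentation by the edge strength
# along the fitted circle. A low score flags a likely segmentation failure.
import numpy as np

def boundary_score(image, cx, cy, r, n=360):
    """image: 2-D grayscale array; (cx, cy, r): candidate pupil circle."""
    gy, gx = np.gradient(image.astype(float))           # image gradients
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = np.clip((cx + r * np.cos(theta)).round().astype(int), 0, image.shape[1] - 1)
    ys = np.clip((cy + r * np.sin(theta)).round().astype(int), 0, image.shape[0] - 1)
    mag = np.hypot(gx[ys, xs], gy[ys, xs])              # edge strength on the circle
    return float(mag.mean())

# Toy check: a synthetic dark pupil on a brighter background.
img = np.full((200, 200), 180.0)
yy, xx = np.mgrid[:200, :200]
img[(xx - 100) ** 2 + (yy - 100) ** 2 < 40 ** 2] = 40.0
print(boundary_score(img, 100, 100, 40) > boundary_score(img, 100, 100, 20))  # True
```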

Effects of Environmental Conditions on Middle to Long Range Face Recognition in the Dark

Thirimachos Bourlai

In military and security applications, the acquisition of face images is critical in producing key evidence for the successful identification of potential threats. The standoff distances most commonly used in face recognition (FR) systems are (a) short-range (<33 ft), suitable for applications such as identity verification at access points, or (b) middle-range (<330 ft), suitable for applications such as building perimeter surveillance. A middle-range FR system capable of operating only in day-time environments has recently been proposed. In this work we examine the effects of night-time outdoor environmental conditions (illumination variation, temperature, and humidity) on the performance of FR algorithms using middle- to long-range Near-Infrared (NIR) imaging systems. We will explore distances from 30 to ~650 ft (the maximum capability of the available sensor). The proposed project will focus on answering the following questions: (1) Do night-time outdoor environmental conditions affect recognition performance? (2) Which conditions affect recognition performance the most (e.g., high temperature and high humidity)? (3) Over what operational range is FR feasible under the different conditions considered?

Matching Face Images Acquired from Mobile Devices to a Large Gallery

Anil K. Jain (MSU)

As the number of mobile devices equipped with digital cameras continues to increase, so does the opportunity to acquire face images using such devices. Civilians often use these devices to capture identifying evidence of a crime being witnessed. Law enforcement officers in many agencies are now being instructed to use such devices to acquire a face image of subjects when they do not possess identity information or when there is doubt about the authenticity of such information. Because of the compact nature of mobile imaging devices, motion blur and focal distortions reduce the quality of face images. Thus, as the opportunity to acquire such useful information grows, face identification technology must be improved to meet these demands with algorithms tailored to match mobile face images against large legacy databases. The proposed research aims to provide solutions that improve identification accuracy in these mobile identification scenarios.

Separating Overlapping Fingerprints and Palm‐prints

Anil K. Jain (MSU)

Latent prints (fingerprints and palmprints) lifted from crime scenes or IED fragments often contain overlapping prints. Overlapping latents constitute a serious challenge to state-of-the-art fingerprint segmentation and matching algorithms, since these algorithms are designed under the assumption that latents have been properly segmented. The objective of this research is to (i) develop an algorithm to automatically separate overlapping latent prints into their component latent prints, and (ii) demonstrate an improvement in matching accuracy as a result of this separation.

Multimodal Fusion for Stand‐off Identity and Intent at 10‐25 Meters

Stephanie Schuckers (Clarkson), Jeremiah Remus (Clarkson), William Jemison, and Judee Burgoon (U of A)

Because of limited resources (e.g., number and type of cameras, amount of time to focus on an individual, real-time processing power), determining which individuals to focus on in surveillance situations, and for how long, is difficult. Anomalous behavioral cues may be considered by stand-off systems in assessing the risk posed by an individual. Benchmark datasets for designing a stand-off multimodal biometric system for determining identity and intent are needed. We propose to investigate fusion approaches that measure face, iris, voice, and heart patterns through experiments for identity and intent at distances from 10 to 25 meters. This research builds on a growing corpus of data, the Quality in Face and Iris Research Ensemble (Q-FIRE) dataset, which includes the following: (1) Q-FIRE Release 1 (made available in early 2010) comprises 4 TB of face and iris video for 90 subjects out to 8.3 meters (25 feet) with controlled quality degradation. (2) Release 2 adds 83 subjects collected under the same specifications. Releases 1 and 2 are currently being used by NIST in IREX II: Iris Quality Calibration and Evaluation (IQCE). (3) At the completion of the CITeR project in July 2011, an extension of the dataset, entitled Q-FIRE Phase II Unconstrained, will capture unconstrained behavior of the same set of subjects out to 8.3 meters. The goal of this project is to expand Q-FIRE to include unconstrained subjects from 10 to 25 meters, to characterize recognition at a distance, and to study fusion of multimodal standoff biometrics with behavioral cues. To support the behavioral aspects, we propose to develop and incorporate cardiopulmonary measurements based on a newly designed 5.8 GHz radar and our existing 2.45 GHz radar. Previous research has focused on the relatively inexpensive, but large-beam-width, 2.45 GHz system and the quite expensive, small-beam-width 228 GHz radar (which also faces regulatory hurdles). We hypothesize that 5.8 GHz may offer a beam width suitable for the longer distances of interest while remaining relatively inexpensive and unregulated.

Automated Rigidity Detection in Automated Screening

Nathan Twyman and Judee Burgoon (U of A)

This study will explore automated rigidity detection in rapid screening. Credibility assessment research has identified rigidity as an indicator of deception. In a recent mock-crime experiment, we used computer vision techniques to measure rigidity automatically and objectively. We also found that rigidity could predict guilty knowledge during a Concealed Information Test (CIT). We propose an experiment to measure rigidity in a CIT tailored for automated screening. An automated agent will conduct the CIT, thereby eliminating interviewer effects.
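A minimal sketch of one plausible rigidity measure follows: the inverse of the mean frame-to-frame displacement of tracked head/body points. The tracking source and the exact inverse-motion definition are illustrative assumptions; the project's actual computer vision pipeline is not specified here.

```python
# Sketch: rigidity as suppressed motion of tracked points across frames.
import numpy as np

def rigidity_score(tracks):
    """tracks: array of shape (frames, points, 2) with tracked coordinates."""
    tracks = np.asarray(tracks, dtype=float)
    step = np.diff(tracks, axis=0)                  # per-frame displacement vectors
    motion = np.linalg.norm(step, axis=2).mean()    # mean point movement (pixels)
    return 1.0 / (1.0 + motion)                     # in (0, 1]; 1 = fully rigid

# Hypothetical tracks: a random walk standing in for tracker output.
frames = np.cumsum(np.random.default_rng(1).normal(0, 0.5, (100, 8, 2)), axis=0)
print(rigidity_score(frames))
```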

Evaluating the Robustness of Eye Tracking to Mental and Physical Countermeasures

Ryan Schuetzler, Jeffrey Proudfoot, and Jay Nunamaker (U of A)

Eye trackers have recently been evaluated for their effectiveness in rapid, non-contact assessment of credibility and intent. Gaze patterns (Derrick, Moffitt, and Nunamaker 2010), eye blinks (Leal and Vrij 2010), and pupil dilation (Dionisio, Granholm, Hillix, and Perrine 2001) have all been examined and found to be somewhat effective, achieving success rates of 75% or better in identifying deception. However, countermeasures have been shown to be effective against polygraph examinations; this project will evaluate whether eye-tracking measures are similarly vulnerable to mental and physical countermeasures.

Heterogeneous Face Recognition

Anil K. Jain (MSU)

Heterogeneous face recognition (HFR) involves matching two face images acquired in different modalities (e.g., a visible image vs. an NIR image). In most HFR scenarios, the gallery images are standard photographs (visible band) and the probe images are in a non-visible band (e.g., thermal images). This project focuses on heterogeneous face recognition where the probe images are from (i) near-infrared (NIR) and (ii) thermal-infrared sensors. Improving face recognition performance in these two scenarios has the following benefits: (i) the ability to match NIR face images to visible gallery images is crucial in night-time environments or environments with unfavorable illumination; (ii) the thermal face sensor, unlike the NIR sensor, is passive and does not illuminate a person's face, so in many applications thermal sensing is preferred over NIR sensing. The proposed research will have a profound impact on security by enabling the identification of criminals in adverse imaging conditions.

Generalized Additive Models for Biometric Fusion and Covariate Analysis

Mark Culp and Arun Ross (WVU)

This project concerns the task of assessing the potential impact of covariate factors such as image quality, age, gender, and race on fusion performance. Most existing fusion approaches, such as the sum rule and Bayesian methods, ignore covariate factors in model development. Consequently, covariate analysis is typically conducted after the application of the fusion rule, which is suboptimal for overall fusion performance. To this end, we propose generalized additive models that simultaneously account for covariate factors and their interactions, match scores, and quality indices in one unified framework. The proposed model: (a) allows for testing the effect of individual covariates on overall fusion performance (e.g., the effect of gender); (b) assesses the complexity of the fusion rule on scores given quality; (c) facilitates statistical interpretation; and (d) addresses a database's "degree of difficulty" via pairwise model comparisons.
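For illustration, the sketch below fits a logistic GAM that fuses two match scores with a quality covariate, including a score-quality interaction term. The choice of the pyGAM library and the synthetic data are assumptions for the sketch; the project does not prescribe an implementation.

```python
# Sketch: fusing two matcher scores with a quality covariate via a logistic GAM.
import numpy as np
from pygam import LogisticGAM, s, te

rng = np.random.default_rng(0)
n = 2000
# Columns: face score, iris score, image-quality index (all synthetic here).
X = rng.uniform(0, 1, size=(n, 3))
# Synthetic genuine/impostor labels whose odds depend on scores *and* quality.
logit = 4 * X[:, 0] + 3 * X[:, 1] + 2 * X[:, 0] * X[:, 2] - 4
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

# Smooth terms for each score plus a tensor term for the score-quality interaction.
gam = LogisticGAM(s(0) + s(1) + te(0, 2)).fit(X, y)
gam.summary()                  # per-term significance supports covariate-effect testing
fused = gam.predict_proba(X)   # fused score in [0, 1]
```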

Feasibility Study of an International Biometrics Data Portal

Michael Schuckers (St. Lawrence), Stephen Elliot (Purdue), Bojan Cukic (WVU) and Stephanie Schuckers (Clarkson)

Use of biometric data for identification and verification has grown rapidly. Research faces nontrivial problems in addressing scalability, individuality, variations due to age, ethnicity, environment, and operation, privacy, novel sensors, the impact of time between enrollment and future samples, fusion between modalities, etc. New datasets, orders of magnitude larger, are needed to support such research. Currently, most investigators collect data but may not be able to share it with other researchers, or to combine it with datasets from other researchers, due to the inherent privacy limitations imposed by collection protocols. The goal of this project is to begin developing requirements for a data portal that would enable researchers to share data while provably maintaining appropriate levels of privacy. We propose to explore the issues surrounding the creation of a biometric data portal that will allow biometrics researchers to search and analyze biometric signals/images/samples across collections taken at different locations across the globe.

Facial Metrology for Human Classification

Don Adjeroh, Thirimachos Bourlai and Arun Ross (WVU)

Facial metrology refers to the extraction of geometric measurements of the human face based on certain landmark points (e.g., eyes, lip corners, etc.). For example, the distance between the eyes or the length and width of the face can be viewed as features constituting facial metrology. However, the biometric literature does not adequately discuss (a) the statistics of facial metrology as a function of gender or race, or (b) the potential of automatically extracting landmark points from human faces for performing metrology. The goal of this work is to conduct a statistical analysis of facial anthropometry and understand its potential in human classification and, perhaps, identification. In addition, algorithms will be developed to automatically extract facial landmarks at variable illumination levels, multiple distances, and multiple spectral bands. The application of these algorithms can be expected to shed light on: (1) the effect of the above factors (distance, spectral bands, etc.) on the detection of facial landmarks; (2) the identification of landmarks that are important for human identification in terms of race and gender; and (3) a statistical understanding of the role of facial metrology in human classification/identification. Applications of the study include scenarios where a quick appraisal of an individual's identity is needed before he or she reaches a checkpoint, as well as facial forensics and cross-spectral face recognition.
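A minimal sketch of metrology feature extraction is given below; the landmark names, coordinates, and the inter-eye normalization are illustrative assumptions rather than the project's actual feature set.

```python
# Sketch: geometric metrology features from (hypothetical) detected landmarks.
import numpy as np

landmarks = {                      # (x, y) pixel coordinates from a detector
    "eye_outer_l": (120, 210), "eye_outer_r": (260, 212),
    "nose_tip":    (190, 290),
    "mouth_l":     (145, 350), "mouth_r":     (235, 352),
    "chin":        (190, 430),
}

def dist(a, b):
    return float(np.linalg.norm(np.subtract(landmarks[a], landmarks[b])))

inter_eye = dist("eye_outer_l", "eye_outer_r")
features = {
    # Ratios are scale-invariant, so they survive changes in standoff distance.
    "mouth_width / inter_eye":  dist("mouth_l", "mouth_r") / inter_eye,
    "nose_to_chin / inter_eye": dist("nose_tip", "chin") / inter_eye,
}
print(features)
```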

LivDet II Fingerprint Liveness Detection Competition 2011

Stephanie Schuckers (Clarkson)

Fingerprint recognition systems are vulnerable to artificial spoof fingerprint attacks, such as molds made of silicone, gelatin, or Play-Doh. Liveness detection, or anti-spoofing, has been proposed to defeat these kinds of spoof attacks. In 2009, the University of Cagliari and Clarkson University hosted the first competition to test software-based liveness detection algorithms. The competition was organized as follows: distribution of a dataset of spoof and live images for training, submission of a software algorithm which returned a liveness score, and evaluation of the submitted algorithms on a sequestered dataset. Four groups submitted algorithms (two universities, two companies); the best algorithm achieved a 2.7% error rate, and the results were presented at ICIAP 2009 and published in a paper. The proposed second competition, LivDet II, reflects growing interest in anti-spoofing. In addition to using a public dataset for training software algorithms, we propose to test submitted systems in which liveness detection is combined with an overall hardware system, since commercial entities that have incorporated liveness detection have optimized their algorithms for their individual systems. Analysis of performance will establish the state-of-the-art in the field.
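The evaluation protocol can be sketched as follows, assuming the LivDet convention of liveness scores in [0, 100] with a decision threshold of 50: performance is summarized by the rate of misclassified live samples (ferrlive) and misclassified spoofs (ferrfake). The scores and labels below are hypothetical.

```python
# Sketch: LivDet-style error rates from submitted liveness scores.
import numpy as np

def livdet_error_rates(scores, is_live, threshold=50):
    """scores: liveness scores in [0, 100]; is_live: boolean ground truth."""
    scores = np.asarray(scores, dtype=float)
    is_live = np.asarray(is_live, dtype=bool)
    ferrlive = np.mean(scores[is_live] < threshold)    # live rejected as spoof
    ferrfake = np.mean(scores[~is_live] >= threshold)  # spoof accepted as live
    return ferrlive, ferrfake

# Hypothetical sequestered-set results:
scores = [91, 73, 48, 12, 35, 66]
labels = [True, True, True, False, False, False]
print(livdet_error_rates(scores, labels))  # -> (0.333..., 0.333...)
```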

Post Mortem Ocular Biometrics Analysis

Reza Derakhshani (U Missouri, Kansas City) and Arun Ross (WVU)

To study the impact of post-mortem changes on match scores and other relevant ocular biometric identification metrics, we will study subjects before and after expiration by observing a temporal sequence of ocular biometric captures over a post-mortem time span, and compare them with similar live-capture controls in different spectra: visible (RGB) and near-infrared. Specific tasks include (a) medical analysis and data collection; and (b) biometric analysis.

A Standardized Framework for a Heterogeneous Sensor Network for Real‐Time Fusion & Decision Support

Aaron Elkins, Doug Derrick, Jeff Proudfoot, Judee Burgoon and Jay Nunamaker (U of A)

This study will begin the development and testing of a scalable sensor network framework suitable for fusion and integration in expert- and avatar-based decision support systems. The emphasis will be placed on a modular, agent-based architecture that promotes standardized messaging and software interfaces for interoperability of diverse sensors. This study represents the first and necessary step towards integrating disparate sensors into a network for real-time analysis and decision support.
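To make the standardized-messaging idea concrete, the sketch below defines a single JSON message envelope that heterogeneous sensor agents could share; all field names are illustrative assumptions rather than the project's actual schema.

```python
# Sketch: one message envelope for all sensors in an agent-based network.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class SensorMessage:
    sensor_id: str     # unique identifier of the reporting sensor agent
    modality: str      # e.g. "eye_tracker", "thermal_camera", "radar"
    timestamp: float   # epoch seconds, for cross-sensor alignment
    payload: dict      # modality-specific measurements
    confidence: float  # sensor's self-reported reliability in [0, 1]

    def to_wire(self) -> str:
        return json.dumps(asdict(self))  # same JSON envelope for every sensor

msg = SensorMessage("cam-03", "thermal_camera", time.time(),
                    {"face_temp_c": 34.2}, confidence=0.9)
print(msg.to_wire())
```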

Comparison of Methods for Identification & Tracking of Facial & Head Features Related to Deception & Hostile Intent

Judee Burgoon (U of A), Senya Polikovsky (U of Tsukuba), Dimitris Metaxas (Rutgers), Jeff Jenkins (U of A) and Fei Yang (Rutgers)

High-speed (250 fps) recordings of participants in a mock smuggling experiment conducted at the University of Tsukuba will be analyzed with methods and algorithms developed by Rutgers University and the University of Tsukuba. The automated methods from the two laboratories will be compared on tracking accuracy, and both will be validated against human behavioral observation of facial/head kinesics.

Establishing Deceptive Behavior Baselines for Eye‐Tracking Systems

Jay Nunamaker, Doug Derrick, Jeff Proudfoot and Nathan Twyman (U of A)

We have conducted a pilot experiment that used eye behavior to correctly classify 100% of individuals having concealed information about a fake improvised explosive device (IED). The pilot experiment displayed altered images of the device to participants, and those who had constructed the device scanned it differently than those who had not. For this new effort, we propose to examine three discrete questions. First, how persistent is the eye-behavior effect? In the pilot study, individuals were screened right after they constructed the bomb; we will examine the effect after 24+ hours to determine whether it persists. Second, we will evaluate whether the eye behavior differs when words are used instead of images. For example, do those who have built the IED view the word "bomb" in a series of words differently than those who have no knowledge of the IED? Third, we will examine whether the effect extends to places as well as objects; this will involve images of places that "guilty" participants have seen and innocent participants have not. We will design a mock screening environment similar in appearance to checkpoints found in airports, border crossings, etc.