2010 Projects

A Study of MWIR for Face Recognition & Liveness
Thirimachos Bourlai, Arun Ross and Lawrence Hornak (WVU)

Cross‐Age Face Recognition Based on a Facial Age Estimation Scheme
Guodong Guo, Arun Ross and Bojan Cukic (WVU)

Enhancement & Quality Assessment Schemes for Challenging DNA Sample Analysis
Jeremy Dawson, Arun Ross, Lawrence Hornak (WVU), Tina Moroose (WVU Forensic Science) and Stephanie Schuckers (Clarkson)

Optimizing the Design of Large Scale Biometric Security Systems
Bojan Cukic, T. Menzies (WVU) and Stephanie Schuckers (Clarkson)

Latent Fingerprint Enhancement
Anil K. Jain (MSU)

Dyadic Synchrony as a Measure of Trust & Veracity
Norah Dunbar (Oklahoma), Matthew Jensen, Judee Burgoon (U of A) and Dimitris Metaxas (Rutgers)

Improving Information Security through Authentication Technology
Jeffrey Jenkins, Grayson Ross, Alexandra Durcikova and Jay Nunamaker (U of A)

Temporal Alignment of Psychophysiological Behavioral Indicators
Kevin Moffitt, Zhu Zhang and Judee Burgoon (U of A)

Non‐cooperative Biometrics at a Distance
Jeremiah Remus and Stephanie Schuckers (Clarkson)

Iris Segmentation Quality Analysis: Prediction and Rectification
Bojan Cukic, Nathan Kalka and Arun Ross (WVU)

Impact of Age & Aging on Iris Recognition
Stephanie Schuckers (Clarkson), Jeremiah Remus, Nadya Sazonova, Lawrence Hornak and Arun Ross (WVU)

Multimodal Fusion Vulnerability to Non‐Zero Effort (Spoof) Imposters
Stephanie Schuckers (Clarkson), Arun Ross (WVU) and Bozhao Tan (Clarkson)

Detecting, Restoring & Matching Altered Fingerprints
Anil K. Jain (MSU) and Arun Ross (WVU)

SPLICE: Integrating Agent99, LIWC & Building an Accessible Platform for Future Tool Building
Kevin Moffitt (UA), Dr. Judee K. Burgoon (UA), and Jeff Jenkins (UA)

Identifying Hidden Patterns from Facial Expressions
Koren Elder (UA‐CMI), Nicholas Michael (Rutgers), Aaron Elkins (UA‐CMI), Judee Burgoon (UA‐CMI), Dimitri Metaxas (Rutgers) and Magnus Magnusson (U Iceland)

Animating the Automated Deception Analysis Machine (ADAM)
Doug Derrick, Koren Elder, Jeff Jenkins, and Judee Burgoon (UA‐CMI)

Automatic Deception Systems: To Believe or Not to Believe
Aaron Elkins, Nathan Twyman and Judee Burgoon (UA‐CMI)

 

Summaries:

 

A Study of MWIR for Face Recognition & Liveness

Thirimachos Bourlai, Arun Ross and Lawrence Hornak (WVU)

Mid-Wave Infrared (MWIR, 3-5μm) is of interest for biometric recognition because it has both reflective and emissive properties. Face recognition in MWIR did not yield promising results in the DARPA Human ID Project, due to limitations of the sensor technology available at that time (2001-2005). Given recent improvements in MWIR technology, namely higher resolution and thermal sensitivity, the proposed project focuses on answering the following questions: (1) Can we extract novel features (facial vein patterns or other subcutaneous information) in MWIR that can be exploited for face recognition? (2) Can these MWIR features be reliably used for liveness detection? (3) Can we match MWIR face images against visible-spectrum images? (4) Can system performance improve when fusing the MWIR and visible imaging modalities?

 

Cross‐Age Face Recognition Based on a Facial Age Estimation Scheme

Guodong Guo, Arun Ross and Bojan Cukic (WVU)

Facial aging can degrade the performance of face recognition. A typical approach to cross-age face recognition is to synthesize new faces at all possible ages. This approach is slow and inefficient, especially when working on a large-scale face database. Recently, human age estimation has become an active research topic. In this project, we investigate how age estimation can help cross-age face recognition. The basic idea is to estimate the age of a test face and to synthesize a face image only at the estimated age rather than generating face images at all possible ages. If this method is successful, it will make cross-age face recognition systems computationally efficient and practical.

 

Enhancement & Quality Assessment Schemes for Challenging DNA Sample Analysis

Jeremy Dawson, Arun Ross, Lawrence Hornak (WVU), Tina Moroose (WVU Forensic Science) and Stephanie Schuckers (Clarkson)

Current rapid DNA analysis systems are based on the miniaturization of standard processing steps and equipment and the use of commercially available reagent kits. Preliminary work indicated several challenges associated with DNA profiles extracted from degraded samples, low-copy-number DNA, and mixtures. In traditional DNA analysis these issues can often be overcome through human oversight and additional processing time. As rapid DNA systems move from the realm of forensic science into automated biometric screening, these challenges are compounded by system architectures and processes designed to further reduce device throughput times. Our project will advance signal processing methods that will enable the evaluation and extraction of DNA profile information from challenging samples. This approach offers a means of pushing beyond the barriers currently limiting rapid DNA systems, furthering the realization of molecular biometrics systems capable of fulfilling the requirements of rapid, tiered screening scenarios.

 

Optimizing the Design of Large Scale Biometric Security Systems

Bojan Cukic, T. Menzies (WVU) and Stephanie Schuckers (Clarkson)

We have recently developed a model-based analysis method that allows system designers and policy makers to understand the interplay between biometric match rates (corresponding to specific thresholds) and passenger throughput rates at US-VISIT-style international border crossings. In general, however, understanding the implications of such tradeoffs early and throughout the system development lifecycle is difficult. The goal of the proposed project is to further automate the analysis of large-scale biometric system designs. We will utilize our Layered Queuing Network model of the US-VISIT system and improve its fidelity to reflect the architecture deployed in the field. Since the model executes quickly (we receive estimates for expected traveler wait times, lengths of queues, throughputs for accessing all biometric databases and watch lists, etc., in less than a second), we can explore a large space of system engineering alternatives. Through thousands of controlled parameter changes, guided by a discrete optimization method, we systematically evaluate the cost-benefits of potential system improvements, as well as their relationship to threat levels and the risks of accepting impostors at ports of entry.
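The threshold/throughput tradeoff at the heart of this analysis can be illustrated with a toy queuing sketch. This is not the project's Layered Queuing Network model; the error model, rates, and two-stage (primary/secondary inspection) structure below are invented purely for illustration.

```python
import math

def mm1_wait(arrival_rate, service_rate):
    """Mean time in system for a simple M/M/1 queue (requires arrival < service)."""
    assert arrival_rate < service_rate, "queue is unstable"
    return 1.0 / (service_rate - arrival_rate)

def sweep_thresholds(arrivals_per_min=8.0, primary_rate=10.0, secondary_rate=2.0):
    """Sweep a hypothetical match threshold and report, for each setting, the
    impostor-acceptance risk and the expected traveler wait time."""
    results = []
    for threshold in [0.1 * k for k in range(1, 10)]:
        # Invented error model: stricter thresholds divert more genuine
        # travelers to slow secondary inspection but admit fewer impostors.
        false_reject = threshold ** 2             # fraction sent to secondary
        false_accept = math.exp(-6 * threshold)   # impostor acceptance risk
        secondary_arrivals = arrivals_per_min * false_reject
        wait_primary = mm1_wait(arrivals_per_min, primary_rate)
        # An overloaded secondary queue has unbounded wait in this toy model.
        wait_secondary = (mm1_wait(secondary_arrivals, secondary_rate)
                          if secondary_arrivals < secondary_rate else float("inf"))
        expected_wait = ((1 - false_reject) * wait_primary
                         + false_reject * (wait_primary + wait_secondary))
        results.append((threshold, false_accept, expected_wait))
    return results

for t, far, wait in sweep_thresholds():
    print(f"threshold={t:.1f}  impostor-accept~{far:.3f}  expected wait={wait:.2f} min")
```

Even this crude sketch exhibits the qualitative behavior the project explores at full fidelity: tightening the threshold lowers impostor risk monotonically while wait times eventually blow up as the secondary queue saturates.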

 

Latent Fingerprint Enhancement

Anil K. Jain (MSU)

An irreplaceable functionality of fingerprint recognition is its capability to link latent fingerprints found at crime scenes to suspects previously enrolled in a database of full (plain or rolled) fingerprints. Compared to full prints, which are captured in an attended mode, latents have smudgy and blurred ridge structures, cover only a small finger area, exhibit large nonlinear distortion, and contain background lines and characters, or even other prints. Due to the poor quality of latents, automatic feature extraction is a challenging problem. This proposal will design and implement techniques (a) for latent fingerprint enhancement with minimal manual markup by utilizing an orientation field model, (b) to suppress the effect of background noise, and (c) to combine fingerprint ridge enhancement with background noise removal to improve matching accuracy. This will help achieve the goal of a "lights-out" mode for latent matching.

 

Dyadic Synchrony as a Measure of Trust & Veracity

Norah Dunbar (Oklahoma), Matthew Jensen, Judee Burgoon (U of A) and Dimitris Metaxas (Rutgers)

Most deception detection studies have examined either the behaviors of the deceiver or the strategies of the interviewer, but few have examined dyadic variables that link the deceiver and the interviewer, such as synchrony, rapport, coordination, or reciprocity.

 

Improving Information Security through Authentication Technology

Jeffrey Jenkins, Grayson Ross, Alexandra Durcikova and Jay Nunamaker (U of A)

Organizations face immense pressure to maintain information security; breaches result in financial loss, damaged reputation, and legal sanctions. Most security breaches are the result of human negligence, a problem that can be alleviated through the use of identification technologies. This study will evaluate the impact of fingerprint scanners and RSA SecurID tokens on IS security, users' cognitive effort, and secure behavior in a mock corporate environment. This research will aid researchers in tailoring identification technologies to improve secure behavior in corporations.

 

Temporal Alignment of Psychophysiological Behavioral Indicators

Kevin Moffitt, Zhu Zhang and Judee Burgoon (U of A)

The UA Mock Crime study has yielded time-coded data from multiple sensors including blood pressure, kinesics, blinks, respiratory rate, and facial movements. Missing are time-coded transcripts from the interviews. This project proposes to align time-coded transcripts with our current dataset. Knowing what a person is saying when their heart rate rises, or when they raise their eyebrows, for example, will help us to interpret our current dataset and give us a more complete picture of what a deceiver is experiencing and thinking.

 

Non‐cooperative Biometrics at a Distance

Jeremiah Remus and Stephanie Schuckers (Clarkson)

Human identification at a distance is an area of growing need and importance. To enable biometric identification at a distance, the growing consensus is that a multimodal approach to measuring biometric information is needed. In addition to measuring traditional biometric information (face and iris), it may be necessary to consider other signatures that can be easily gathered, such as thermal signatures, gait, soft biometrics, ear, and speech, that may contain useful identifiers. We investigate the sensitivity of a suite of biometrics that would comprise a multimodal dataset to standoff, non-cooperative collection conditions. In early 2010, we completed a collection of face and iris video out to 25 feet with quality degradation controlled at the acquisition level (Quality in Face and Iris Research Ensemble—Q-FIRE). The data is currently used by NIST in IREX II: Iris Quality Calibration and Evaluation (IQCE). The goal of this project is (1) to expand this dataset to include unconstrained subject positioning for the same set of subjects, (2) to develop a better understanding of the primary factors that determine the quality of various standoff biometrics, and (3) to study fusion of multimodal standoff biometrics to increase classifier confidence.

 

Iris Segmentation Quality Analysis: Prediction and Rectification

Bojan Cukic, Nathan Kalka and Arun Ross (WVU)

Arguably the most important task in iris recognition systems involves localization of the iris, a process known as segmentation. Research has found that segmentation results are a dominant factor driving iris recognition matching performance. The ability to automatically discern whether iris segmentation failed prior to matching has many applications, including the ability to discard images with erroneous segmentation; more importantly, it provides an opportunity to rectify failed segmentation. This can be further utilized in multimodal fusion algorithms where quality information is employed to help ascertain match score confidence. In this project, we design a segmentation quality metric capable of predicting and rectifying erroneous iris segmentation. Our quality metric will provide salient information that we can leverage in selecting an appropriately robust iris segmentation algorithm. Alternatively, we can use this information to rectify segmentation in an online manner, depending on the degree to which segmentation failed. The designed metric will be able to operate independently of the segmentation algorithm being deployed.

 

Impact of Age & Aging on Iris Recognition

Stephanie Schuckers (Clarkson), Jeremiah Remus, Nadya Sazonova, Lawrence Hornak and Arun Ross (WVU)

There has been limited research assessing the impact on iris recognition performance of matching time-lapsed iris images ("aging") and of age-induced changes in the iris ("age"). More recent work by Baker et al. has shown evidence that the distribution of genuine match scores changes significantly as the time between samples increases (up to four years). However, their research was conducted on a small dataset of fewer than 30 individuals. While the ideal scenario would be to study a large group of individuals over a significant portion of their lifespan, the time and expense involved make such a study impractical. Here we study the impact of age and aging on iris systems in a three-pronged approach based on retrospective analysis of data, collection of new data, and establishing a relationship with the larger medical community, which regularly collects data on a much larger scale for research purposes. In addition, we devise encoding and matching algorithms to minimize the impact of aging and age on the performance of iris recognition.

 

Multimodal Fusion Vulnerability to Non‐Zero Effort (Spoof) Imposters

Stephanie Schuckers (Clarkson), Arun Ross (WVU) and Bozhao Tan (Clarkson)

Multimodal biometric systems have been suggested as a way to defeat spoof attacks. While it is intuitively assumed that a person must spoof all modalities in the system, no research has considered the case where only one or two modalities are spoofed. In our preliminary work, we consider the performance of fusion strategies when one presented sample is spoofed successfully while the remaining two are not spoofed at all, i.e., the attacker uses his or her own biometric samples. We repeat this for the case when two samples are successfully spoofed. We found that multimodal systems are vulnerable to such partial spoof attacks, and that the degree of vulnerability depends strongly on the fusion strategy and the selection of the operating point, which together balance matching performance against protection from spoofing. In this work we study standard fusion methodologies and their relative vulnerability to spoof attacks, and develop fusion methodologies that utilize liveness scores to minimize the threat of spoofing.
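The idea of folding liveness scores into fusion can be sketched with a simple liveness-weighted sum rule. This is an illustrative sketch, not the project's algorithm: the weighting rule, score ranges, and the reject-on-zero-liveness behavior are all assumptions.

```python
def liveness_weighted_fusion(match_scores, liveness_scores):
    """Fuse per-modality match scores in [0, 1], weighting each modality by its
    liveness score in [0, 1], so a suspicious (possibly spoofed) sample
    contributes less to the fused score. Returns a fused score in [0, 1]."""
    assert match_scores and len(match_scores) == len(liveness_scores)
    total_weight = sum(liveness_scores)
    if total_weight == 0:  # every sample judged non-live: reject outright
        return 0.0
    return sum(m * l for m, l in zip(match_scores, liveness_scores)) / total_weight

# A partial spoof scenario: the first modality matches well but its liveness
# detector is suspicious; the other two modalities are the attacker's own
# (non-matching) samples.
fused = liveness_weighted_fusion([0.95, 0.30, 0.25], [0.10, 0.90, 0.85])
plain = sum([0.95, 0.30, 0.25]) / 3  # unweighted sum rule for comparison
print(fused, plain)  # the weighted score falls well below the plain sum rule
```

The design choice the sketch highlights: an unweighted sum rule lets one strong spoofed score pull the fused score toward acceptance, while liveness weighting discounts exactly that contribution.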

 

Detecting, Restoring & Matching Altered Fingerprints

Anil K. Jain (MSU) and Arun Ross (WVU)

The success of fingerprint recognition systems in accurately identifying individuals has prompted some individuals to engage in extreme measures for the purpose of circumventing the system. For example, in a recent US‐VISIT Workshop, the problem of fingerprint alteration was highlighted. The primary purpose of fingerprint alteration is to evade identification using techniques ranging from abrading, cutting and burning fingers to performing plastic surgery. The use of altered fingerprints to mask one’s identity constitutes a serious “attack” against a border control biometric system since it defeats the very purpose for which the system is deployed in the first place, i.e., to identify individuals in a watch‐list. It should be noted that altered fingers are different from fake fingers. While fake fingers are typically used by individuals to adopt another person’s identity, altered fingers are used to mask one’s own identity. Here, we design and evaluate 1) automated methods to analyze a fingerprint image and detect regions that may have been altered by the subject, 2) image‐processing methods to reconstruct the altered regions in the fingerprints in order to enhance the overall fingerprint quality whilst generating an image that is biologically tenable and 3) matching methods that can successfully match altered fingerprints against their unaltered mates.

 

SPLICE: Integrating Agent99, LIWC & Building an Accessible Platform for Future Tool Building

Kevin Moffitt (UA), Dr. Judee K. Burgoon (UA), and Jeff Jenkins (UA)

As researchers continue to automate and extend the extraction of linguistic cues for deception detection and authorship identification to new domains, it will be important to choose a platform that is highly customizable, extensible, and easy to use. This project begins the development of an integrated system of automated deception detection tools called SPLICE (Structured Programming for Linguistic Cue Extraction) that meets those criteria. We will begin developing SPLICE by integrating Agent99 cues and LIWC into one common tool and interface. SPLICE will be built using the Python programming language, a flexible and relatively easy-to-learn language. The second part of this project makes SPLICE accessible as a Web Service using the REST architecture. The result will be a highly extensible and flexible tool to meet future researchers' needs.
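A minimal sketch, in Python, of the kind of linguistic cue extraction a tool like SPLICE would wrap; the cue set and function name here are illustrative assumptions, not the actual Agent99 or LIWC implementations.

```python
import re

def extract_cues(text):
    """Compute a few lexical cues commonly used in deception research."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    word_count = len(words)
    return {
        "word_count": word_count,
        # Type-token ratio: distinct words over total words.
        "lexical_diversity": len(set(words)) / word_count if word_count else 0.0,
        "avg_word_length": sum(len(w) for w in words) / word_count if word_count else 0.0,
        # First-person pronoun use, a cue studied in the deception literature.
        "first_person": sum(w in {"i", "me", "my", "mine", "myself"} for w in words),
    }

print(extract_cues("I went to the store. I bought milk."))
```

Packaging each cue as a plain function behind one dictionary-returning interface is what would make such a platform easy to extend: a new cue is just another entry.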

 

Identifying Hidden Patterns from Facial Expressions

Koren Elder (UA‐CMI), Nicholas Michael (Rutgers), Aaron Elkins (UA‐CMI), Judee Burgoon (UA‐CMI), Dimitris Metaxas (Rutgers) and Magnus Magnusson (U Iceland)

This study applies computer vision techniques to videotaped interviews to map facial expressions onto interviewee veracity. Machine learning models are trained to identify different combinations of facial movements and different emotion expressions (e.g. anger, contempt, fear) to determine which ones are associated over time with interviewee veracity. Alternative models are developed from Active Shape Models (ASMs), the Facial Affect Coding System (FACS) and Theme, a software program for uncovering hidden patterns in data.

 

Animating the Automated Deception Analysis Machine (ADAM)

Doug Derrick, Koren Elder, Jeff Jenkins, and Judee Burgoon (UA‐CMI)

This study integrates an embodied avatar with the Automated Deception Analysis Machine (ADAM) and runs a pilot experiment with sixty subjects to test the new agent and the accuracy of its recommendations. A participant fills out a pre-survey and then interacts with the system. The avatar asks the person questions and the person types in their responses. The system adapts its interaction depending on the individual differences of its users and the metrics collected from the responses. Each response is evaluated using the following three factors: 1) lexical features: diversity, word count, punctuation, average word length, etc.; 2) time: time to create the message, response latency, etc.; 3) edits: the number of times the backspace key is pressed, the number of times the delete key is pressed, how often Ctrl-X is used, and what was deleted. An inference engine determines the next question in the script. For example, if the response was judged to be vague (few words), the system may ask a more probing follow-up question on the same topic; if deception was suspected, the question may be rephrased in order to elicit additional data to measure.
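The adaptive questioning step described above can be sketched as a small decision function. This is a hedged illustration only: the thresholds, question wording, and metric names are invented, not ADAM's actual inference engine.

```python
def next_question(topic, response_metrics, deception_suspected):
    """Pick the system's next move from simple response metrics.

    response_metrics is assumed to hold 'word_count', 'response_latency_s',
    and 'edit_count' for the most recent typed response.
    """
    if deception_suspected:
        # Rephrase the same question to elicit additional measurable data.
        return f"Let me ask that differently: tell me again about {topic}."
    if response_metrics["word_count"] < 5:
        # Vague response (few words): probe deeper on the same topic.
        return f"Can you describe {topic} in more detail?"
    # Otherwise move on; a real script would draw the next topic from a plan.
    return "NEXT_TOPIC"

q = next_question("your whereabouts last night",
                  {"word_count": 3, "response_latency_s": 4.2, "edit_count": 7},
                  deception_suspected=False)
print(q)  # a probing follow-up, since the response was only three words
```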

 

Automatic Deception Systems: To Believe or Not to Believe

Aaron Elkins, Nathan Twyman and Judee Burgoon (UA‐CMI)

This study explores the interactions between humans and automated deception detection systems. Recent CITeR research has shown that deception detection accuracy does not necessarily improve when a person is given recommendations from a system. Automated deception detection systems are continuously increasing in sophistication and in their ability to generate reliable recommendations, but these systems are not useful if their recommendations are not incorporated into the human's decision. This study tests the proposition that human experts may feel threatened or defensive when the system's recommendation is contrary to their own judgment. This research investigates a method of encouraging a more objective appraisal of system recommendations and examines the relationship of more objective appraisals to perceived system credibility and overall accuracy.