Detecting and Extracting Macro‐Features in Iris Images
Arun Ross, Larry Hornak and Xin Li (WVU)
Matching and Retrieving of Face Images Based on Facial Marks: Phase 2
Anil K. Jain (MSU)
Models for Age Invariant Face Recognition
Anil K. Jain (MSU)
BioFuse: A Matlab™ Platform for Designing and Testing Biometric Fusion Algorithms
Arun Ross (WVU) and Anil K. Jain (MSU)
Large‐scale Evaluation of Quality Based Fusion Algorithms
Bojan Cukic, Nathan Kalka (WVU) and Anil K. Jain (MSU)
PRESS (Program for Rate Estimation and Statistical Summaries) version 2.0
Michael Schuckers (St. Lawrence University) and Daqing Hou (Clarkson University)
Economical, Unobtrusive Measurement of Postural Correlates of Deception
Christopher Lovelace, Reza Derakhshani, Gregory King (UMKC) and Judee Burgoon (U of A)
The Effect of Power and Modality on the Detection of Deception
Norah Dunbar, Matthew Jensen (Oklahoma) and Judee Burgoon (U of A)
Linguistic Dynamics in Criminal Interviews
Matthew Jensen, Norah Dunbar (Oklahoma), Judee Burgoon (U of A) and Stan Slowik
Observational Coding of Deceptive and Truthful Interviewees from Varied Cultural Orientations
Judee Burgoon (U of A), Norah Dunbar and Matthew Jensen (Oklahoma)
Application of Automated Linguistic Analysis to Deception Detection in 911 Homicide Calls
Mary Burns, Kevin Moffitt, Judee Burgoon, Jay Nunamaker (U of A) and Tracy Harpster (Moraine Police Department, Dayton, OH)
Using Connectionist Modeling to Automatically Detect Facial Expression Cues in Video
Koren Elder, Aaron Elkins, Judee Burgoon (U of A) and Nicholas Michael (Rutgers)
Collaborative Acquisition of Face Images Using a Camera Sensor Network
Vinod Kulathumani, Arun Ross and Bojan Cukic (WVU)
LivDet 2009 – Fingerprint Liveness Detection Competition 2009
Stephanie Schuckers and Bozhao Tan (Clarkson)
On the Super‐Resolution of Iris Images from Video Streams
Patrick Flynn (Notre Dame) and Arun Ross (WVU)
Unconstrained Face Recognition under Non‐Ideal Conditions
Arun Ross (WVU) and Anil K. Jain (MSU)
Phase 1 – Participation in the Multi‐Biometric Grand Challenge
Stephanie Schuckers (Clarkson), Natalia Schmid (WVU) and Besma Abidi (UTK)
Evaluating and Integrating Speech Recognition Software into Agent99 for Real‐Time Deception Detection
Kevin Moffitt, Sean Humphries, Jay Nunamaker, Judee Burgoon and Pickard Burns (U of A)
Handedness in Detecting Deception in Cultural Interviews
Matthew Jensen (Oklahoma), Thomas Meservy (UM) and Judee Burgoon (U of A)
Looks like Me: Cultural Avatars
Koren Elder, Mark Patton, Aaron Elkins, Carl and Judee Burgoon (U of A)
Summaries:
Detecting and Extracting Macro‐Features in Iris Images
Arun Ross, Larry Hornak and Xin Li (WVU)
The iris exhibits a very rich texture consisting of “pectinate ligaments adhering into a tangled mesh revealing striations, ciliary processes, crypts, rings, furrows, a corona, sometimes freckles, vasculature, and other features”. The randomness of the iris texture and its apparent stability render it a useful biometric. Most iris‐based systems use the global and local texture information of the iris to perform matching. The anatomical structure within the iris is seldom used in the matching process. This project pursues the path of inquiry first identified in the Phase II multispectral project to design methods to extract and utilize the macro‐features that are visible on the stroma of the iris. The goal is to design a system that can extract and match features across two images of the iris. The work will: (a) provide an alternate way to match iris images; (b) facilitate ways to visually compare two iris images thereby allowing forensic experts to determine the degree of similarity between two iris photographs; (c) potentially be used to locate and retrieve iris images possessing specific macro‐features from a large database; and (d) provide an understanding of the individuality of the iris.
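The abstract leaves the detection and matching algorithms open. Purely as an illustration of the idea, the Python sketch below finds dark blob-like regions (candidate crypts or freckles) in two pre-segmented, aligned iris images using OpenCV's generic blob detector and counts spatially consistent pairs; the file names, thresholds, and greedy matching rule are assumptions for the example, not the project's method.

```python
# Minimal sketch: blob-style macro-feature detection and naive spatial
# matching between two aligned, segmented iris images (all thresholds
# are illustrative assumptions, not the project's actual method).
import cv2
import numpy as np

def detect_macro_features(gray):
    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True
    params.minArea = 30           # ignore speckle noise (assumed value)
    params.maxArea = 2000
    detector = cv2.SimpleBlobDetector_create(params)
    return detector.detect(gray)  # keypoints carry location (.pt) and scale (.size)

def match_features(kps1, kps2, tol=10.0):
    """Greedily pair features whose centers lie within `tol` pixels."""
    matched, used = 0, set()
    for k1 in kps1:
        for j, k2 in enumerate(kps2):
            if j in used:
                continue
            if np.hypot(k1.pt[0] - k2.pt[0], k1.pt[1] - k2.pt[1]) < tol:
                matched += 1
                used.add(j)
                break
    return matched  # higher count -> stronger visual similarity

img1 = cv2.imread("iris_a.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("iris_b.png", cv2.IMREAD_GRAYSCALE)
print(match_features(detect_macro_features(img1), detect_macro_features(img2)))
```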
Matching and Retrieving of Face Images Based on Facial Marks: Phase 2
Anil K. Jain (MSU)
Facial marks have been studied as a means of supplementing the global shape and texture information used in commercial face recognition systems. The ability to automatically extract these marks or artifacts from facial images in large face databases will assist law enforcement agencies in rapidly locating human faces. In Phase 1, we developed a prototype system that (i) automatically extracted facial marks, (ii) applied a simple decision rule for matching facial marks, (iii) fused the mark-based matcher with a leading commercial face matcher, and (iv) showed improved matching performance on a small face database. In Phase 2 we extend that work by (i) incorporating a 3D face model to achieve pose invariance, (ii) enhancing the automatic mark extraction method so that it can be applied to low-resolution video frames, (iii) investigating various rules for fusing these distinguishing marks with a commercial face matcher, and (iv) demonstrating performance improvement on a large (10K) database of operational images.
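As a toy illustration of item (iii), the sketch below fuses a hypothetical mark-based similarity (e.g., a count of agreeing marks) with a commercial matcher's score via min-max normalization and a weighted sum; the weight and score ranges are invented for the example.

```python
# Minimal sketch of the fusion idea: combine a mark-based similarity with
# a commercial matcher's score. Weights and normalization ranges are
# illustrative assumptions, not the values used in the project.
def fuse_scores(commercial_score, mark_score, w=0.8,
                commercial_range=(0.0, 1.0), mark_range=(0, 20)):
    def minmax(s, lo, hi):
        return (s - lo) / (hi - lo)
    sc = minmax(commercial_score, *commercial_range)
    sm = minmax(mark_score, *mark_range)   # e.g., number of agreeing marks
    return w * sc + (1.0 - w) * sm         # weighted sum rule

# A pair is accepted if the fused score exceeds a threshold chosen on
# training data to meet a target false match rate.
print(fuse_scores(0.62, 7))
```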
Models for Age Invariant Face Recognition
Anil K. Jain (MSU)
Facial aging refers to the problem in face recognition where the time difference between the enrolled face image and the query image of the same person is large (typically, several years). It is one of the major sources of performance degradation in face recognition. An age-invariant face recognition system would be useful in many application domains such as locating missing children, screening, and multiple-enrollment detection. However, facial aging did not receive adequate attention until recently, and the proposed models to compensate for aging are still under development. Aging-related facial changes appear in a number of different ways: (i) wrinkles and speckles, (ii) weight loss and gain, and (iii) changes in the shape of face primitives (e.g., sagged eyes, cheeks, or mouth). These aging patterns can be learned by observing changes in facial appearance across different ages in a set of training subjects. This work will design and implement techniques to (a) model facial aging in the 3D domain using both shape and texture, (b) build separate facial aging models for different genders and ethnicities (Caucasian, African American, and Asian), and (c) use the aging model to compensate for aging and enhance the matching accuracy of a commercial face matcher.
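The aging models proposed here are 3D and far richer than anything shown below; purely to make the "learn aging patterns from training subjects" step concrete, this toy numpy sketch fits a linear per-year drift in a feature space and shifts a query to the enrolled age. The data and the linear model are illustrative assumptions only.

```python
# Toy sketch of aging compensation in a feature space (purely illustrative;
# the project models aging in 3D with shape and texture, which is far richer).
import numpy as np

rng = np.random.default_rng(0)
ages = rng.uniform(10, 60, size=200)                  # training subjects' ages
feats = np.outer(ages, rng.normal(size=16)) + rng.normal(size=(200, 16))

# Learn a linear aging trajectory: feature drift per year (least squares).
A = np.column_stack([ages, np.ones_like(ages)])
drift, intercept = np.linalg.lstsq(A, feats, rcond=None)[0]

def compensate(feature_vec, query_age, target_age):
    """Shift a query's features from its capture age to the enrolled age."""
    return feature_vec + (target_age - query_age) * drift

probe = feats[0]
print(np.linalg.norm(compensate(probe, ages[0], 35.0) - probe))
```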
BioFuse: A Matlab™ Platform for Designing and Testing Biometric Fusion Algorithms
Arun Ross (WVU) and Anil K. Jain (MSU)
Multibiometric systems consolidate evidence provided by multiple sources to establish the identity of an individual. The design and performance of a multibiometric system is dictated by several factors, including the number of biometric sources to be combined, the fusion architecture (e.g., serial versus parallel), the mode of operation (e.g., verification versus identification), the cost and response time of the system, and the fusion mechanism employed. Recent research in multibiometrics has resulted in the development of several algorithms for performing fusion at the data, feature, score, rank, and decision levels. Covariates such as data quality and soft biometrics have also been incorporated in the fusion framework, resulting in improved matching accuracy. This work seeks to build a software platform that provides its users with the ability to experiment with a large number of fusion methods and evaluate their relative performance on multiple datasets. Since most biometric researchers in academia use the Matlab™ platform to develop and test algorithms, the proposed software will be developed in that environment; however, the graphical user interface (GUI) offered by the tool will be accessible to the broader end-user biometrics community. The salient features of the proposed environment include (a) access to a wide gamut of fusion techniques and methods to address missing/incomplete data; (b) a platform to evaluate multiple competing fusion techniques as a function of covariates; (c) the capability to incorporate fusion modules developed by other researchers; and (d) the development of a wiki website to allow for the collaborative editing of the software.
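Although BioFuse itself targets Matlab™, the Python sketch below suggests the kind of pluggable score-level fusion core such a platform wraps: per-matcher min-max normalization, a dictionary of interchangeable fusion rules, and NaN-based handling of missing scores. The names and rules here are hypothetical, not BioFuse's API.

```python
# Minimal sketch of a pluggable score-level fusion core (hypothetical names).
import numpy as np

def minmax_normalize(scores):
    s = np.asarray(scores, dtype=float)
    lo, hi = np.nanmin(s), np.nanmax(s)
    return (s - lo) / (hi - lo) if hi > lo else np.zeros_like(s)

FUSION_RULES = {                      # each rule ignores missing (NaN) scores
    "sum": lambda m: np.nansum(m, axis=1),
    "max": lambda m: np.nanmax(m, axis=1),
    "min": lambda m: np.nanmin(m, axis=1),
}

def fuse(score_matrix, rule="sum"):
    """score_matrix: rows = probe-gallery pairs, columns = matchers."""
    normalized = np.column_stack(
        [minmax_normalize(score_matrix[:, j]) for j in range(score_matrix.shape[1])]
    )
    return FUSION_RULES[rule](normalized)

scores = np.array([[0.9, 310.0], [0.4, np.nan], [0.7, 150.0]])  # matcher outputs
print(fuse(scores, "sum"))
```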
Large‐scale Evaluation of Quality Based Fusion Algorithms
Bojan Cukic, Nathan Kalka (WVU) and Anil K. Jain (MSU)
Performance improvements attributed to fusion are significant, and major biometric installations have deployed or plan to deploy multimodal fusion to improve identification accuracy. The deployed fusion algorithms mostly operate at the matching score level and do not always incorporate biometric quality estimates. State-of-the-art multimodal fusion schemes adaptively incorporate quality estimates to further improve performance. Nevertheless, due to the lack of an adequate volume of training data, inconsistencies in the acquisition of training and testing data, and highly conflicting unimodal evidence, these systems do not necessarily achieve "optimality". The goal of this project is to perform a large-scale evaluation of quality-based fusion algorithms for face, iris, and fingerprint. We have access to large fingerprint and iris databases for analysis (over 8,000 subjects in each modality); this data comes from collections known to have inconsistent quality. We also intend to acquire as many face images as possible from known collections to chimerically augment the fingerprint and iris data. We will utilize commercial matchers and publicly available quality estimation algorithms. We will evaluate a quality-based likelihood ratio fusion algorithm, a Bayesian belief network fusion algorithm, and an SVM likelihood-ratio algorithm, as well as well-known score-level fusion approaches that do not incorporate quality (e.g., the sum rule and max rule). For quality estimation we plan to use the WVU and MSU algorithms and our implementation of Daugman's algorithm for iris, the NFIQ and BAH algorithms for fingerprint, and FaceIt and BAH for face. Empirical results are expected to provide statistically significant evaluations that can guide future research and deployment decisions in multi-biometric fusion.
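To make "quality-based likelihood ratio fusion" concrete, here is a minimal sketch in which each modality's genuine and impostor score distributions are modeled (Gaussians stand in for the density estimators a real implementation would fit) and each modality's log-likelihood ratio is down-weighted by its quality estimate. All parameters are illustrative.

```python
# Sketch of quality-weighted likelihood-ratio fusion (Gaussian densities are
# stand-ins for the trained density estimates a real implementation would use).
import numpy as np
from scipy.stats import norm

# Per-modality score models fit on training data (illustrative parameters).
GEN = {"face": (0.75, 0.10), "iris": (0.80, 0.08)}   # genuine mean, std
IMP = {"face": (0.40, 0.12), "iris": (0.30, 0.10)}   # impostor mean, std

def fused_llr(scores, qualities):
    """Sum per-modality log-likelihood ratios, weighted by quality in [0, 1]."""
    llr = 0.0
    for mod, s in scores.items():
        g = norm.logpdf(s, *GEN[mod])
        i = norm.logpdf(s, *IMP[mod])
        llr += qualities[mod] * (g - i)   # low quality shrinks the evidence
    return llr

# Accept when the fused log-likelihood ratio exceeds a trained threshold.
print(fused_llr({"face": 0.68, "iris": 0.71}, {"face": 0.9, "iris": 0.5}))
```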
PRESS (Program for Rate Estimation and Statistical Summaries) version 2.0
Michael Schuckers (St. Lawrence University) and Daqing Hou (Clarkson University)
Several years ago, with CITeR funding, we developed the PRESS tool, now at version 1.1. This tool has assisted many organizations, including the TSA, NBSP, Authenti-Corp, MITRE, and NIST, in assessing and evaluating tests of biometric identification. Several developments in statistical methods for biometrics have occurred since that time. We will add these new methods and improve on the existing methodology used in PRESS 1.1. These improvements include new statistical methods for FTE, FTA, and MTT, improved methods for FMR, FNMR, and ROCs, and an improved graphical interface.
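As one example of the rate-estimation machinery such a tool provides, the sketch below computes a naive bootstrap confidence interval for an error rate such as FNMR. This is not PRESS's actual procedure; in particular, statistical methods for biometrics must account for within-subject correlation, which a naive bootstrap ignores.

```python
# Illustrative bootstrap confidence interval for an error rate such as FNMR
# (a simplified stand-in for the estimators a tool like PRESS implements).
import numpy as np

def bootstrap_ci(decisions, n_boot=2000, alpha=0.05, seed=1):
    """decisions: 1 = false non-match, 0 = correct match."""
    rng = np.random.default_rng(seed)
    d = np.asarray(decisions)
    rates = [rng.choice(d, size=d.size, replace=True).mean() for _ in range(n_boot)]
    return np.quantile(rates, [alpha / 2, 1 - alpha / 2])

decisions = np.random.default_rng(0).binomial(1, 0.02, size=1000)
print(bootstrap_ci(decisions))   # approximate 95% interval around the FNMR
```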
Economical, Unobtrusive Measurement of Postural Correlates of Deception
Christopher Lovelace, Reza Derakhshani, Gregory King (UMKC) and Judee Burgoon (U of A)
This project will make a novel adaptation of force platform technology to the measurement of postural shifts that accompany deception. This straightforward, inexpensive, and concealable ground force platform technology, when coupled with modern non-linear, data-driven signal classification methods, has the potential to provide efficient, reliable, and unobtrusive identification of deception in a security screening environment.
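A hypothetical instance of such a data-driven classifier: summarize a center-of-pressure trajectory from the force platform into a few sway features and train a nonlinear (RBF kernel) SVM. The features, data, and labels below are toy stand-ins, not the project's measurement protocol.

```python
# Sketch of the signal-classification idea on force-platform data.
import numpy as np
from sklearn.svm import SVC

def cop_features(cop_xy):
    """cop_xy: (T, 2) center-of-pressure trajectory in platform coordinates."""
    vel = np.diff(cop_xy, axis=0)
    return np.array([
        cop_xy.std(axis=0).mean(),           # sway magnitude
        np.linalg.norm(vel, axis=1).mean(),  # mean sway velocity
        np.abs(vel).max(),                   # largest single postural shift
    ])

rng = np.random.default_rng(0)
X = np.array([cop_features(rng.normal(scale=s, size=(500, 2)))
              for s in rng.uniform(0.5, 2.0, 60)])
y = rng.integers(0, 2, 60)                   # 1 = deceptive (toy labels)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))
```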
The Effect of Power and Modality on the Detection of Deception
Norah Dunbar, Matthew Jensen (Oklahoma) and Judee Burgoon (U of A)
Many field situations, such as rapid screening at portals or eliciting information from witnesses or suspects, involve a power differential between the interviewer and the subject. Using Computer-Mediated Communication (CMC) for initial screening interviews in these settings reduces the burden on human, physically present interviewers.
Linguistic Dynamics in Criminal Interviews
Matthew Jensen, Norah Dunbar (Oklahoma), Judee Burgoon (U of A) and Stan Slowik
The proposed research will examine the dynamics of deceptive language and content in high‐stakes interviews. We expect that linguistic and context‐independent content from high‐jeopardy interviews will discriminate truthful or apparently truthful responses from ones that indicate deception and that deceptive strategies such as hedging, ambiguity, and equivocation will vary across the course of an interview. This work will explore whether particular phases of an interview (early, late) are more diagnostic than others regarding an interviewee’s truthfulness.
Observational Coding of Deceptive and Truthful Interviewees from Varied Cultural Orientations
Judee Burgoon (U of A), Norah Dunbar and Matthew Jensen (Oklahoma)
It is more difficult for examiners to detect deception by individuals from disparate cultures: if deceiver and receiver differ even in their definitions of what constitutes deception, they may communicate in ways that undermine detection accuracy. We will examine this issue at a global, impressionistic level, similar to the unaided general impressions examiners form when conducting screenings and pretest interviews.
Application of Automated Linguistic Analysis to Deception Detection in 911 Homicide Calls
Mary Burns, Kevin Moffitt, Judee Burgoon, Jay Nunamaker (U of A) and Tracy Harpster (Moraine Police Department, Dayton, OH)
This work will analyze 911 calls by 'guilty' and 'innocent' callers as a proof of concept for the detection of deception and guilt. We will apply our automated linguistic cue analysis and automated transcription tools to transcripts and/or audio tapes of 911 calls to determine the guilt or innocence of the caller. Analyzing 911 statements, as opposed to person-of-interest statements or interviews conducted by law enforcement officers, has several advantages: (1) 911 statements represent the initial contact between a caller and an emergency response team, including law enforcement, leaving callers little chance to rehearse a false story; (2) because 911 operators are not perceived by callers as law enforcement, callers may exhibit less controlled behavior and more cues of deception; (3) due to the temporal immediacy of the crime to the 911 call, the caller may be under more acute stress and may unintentionally 'leak' more cues; and (4) because 911 operators do not interrogate, the statements are objective.
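A toy version of automated linguistic cue extraction from a transcript is sketched below; the cue lexicons are minimal stubs for the example, not the project's validated instruments.

```python
# Toy example of automated linguistic cue extraction from a 911 transcript
# (the cue lexicons are illustrative stubs, not the project's instruments).
HEDGES = {"maybe", "perhaps", "kind", "sort", "guess", "think"}
FIRST_PERSON = {"i", "me", "my", "mine", "we", "our"}

def linguistic_cues(transcript):
    words = transcript.lower().split()
    n = max(len(words), 1)
    return {
        "word_count": len(words),
        "hedge_rate": sum(w.strip(".,?") in HEDGES for w in words) / n,
        "first_person_rate": sum(w.strip(".,?") in FIRST_PERSON for w in words) / n,
    }

print(linguistic_cues("I think maybe she just fell, I guess. Please hurry."))
```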
Using Connectionist Modeling to Automatically Detect Facial Expression Cues in Video
Koren Elder, Aaron Elkins, Judee Burgoon (U of A) and Nicholas Michael (Rutgers)
This study will build a connectionist model, using Facial Metrics and the Facial Action Coding System (FACS), to automatically identify facial expressions in videos. The connectionist network will be trained to identify different emotional expressions (e.g., surprise, anger, happiness, suspicion, and neutral). The model will then be used to map the facial expressions of interviewees captured on video. In the future, these mappings can be used to determine which expressions are reliable indicators of truth or deception during interviews or interrogations.
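A minimal stand-in for the proposed connectionist model is sketched below: a small feedforward network mapping facial action unit intensity vectors to expression labels. The data here is synthetic; a real system would first extract AU intensities from video frames.

```python
# Sketch of the connectionist model: a feedforward network from FACS
# action-unit intensities to expression labels (toy data, assumed sizes).
import numpy as np
from sklearn.neural_network import MLPClassifier

N_ACTION_UNITS = 20
EXPRESSIONS = ["neutral", "surprise", "anger", "happiness"]

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(400, N_ACTION_UNITS))   # AU intensities per frame
y = rng.integers(0, len(EXPRESSIONS), size=400)     # toy labels

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
frame = rng.uniform(0, 5, size=(1, N_ACTION_UNITS))
print(EXPRESSIONS[model.predict(frame)[0]])
```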
Collaborative Acquisition of Face Images Using a Camera Sensor Network
Vinod Kulathumani, Arun Ross and Bojan Cukic (WVU)
A network of image sensors combined with biometric systems can form the building block for a variety of surveillance applications, such as airport security, protection of critical infrastructure, and restricted access to guarded assets. In this project, we focus on the collaborative acquisition of biometric data for face recognition using a network of image sensors. One scenario for such a system is distributed in-network detection of an event of interest with simultaneous face recognition in a dynamic scene. As a basic step towards building such a system, we focus on the following problem statement: given a set of n cameras deployed to monitor an area, (1) determine the optimal positioning of the cameras to maximize the biometric information obtained when a single person enters the area, (2) design a distributed algorithm that coordinates the cameras to capture partial views of the face that maximize biometric content, and (3) design a distributed algorithm to acquire partial snapshots and construct the full facial image using mosaicing techniques.
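As a toy illustration of sub-problem (1), the sketch below greedily picks camera angles on the perimeter of a monitored region so as to minimize the largest angular gap, ensuring no face orientation is far from some camera. The heuristic is illustrative only, not the project's distributed algorithm.

```python
# Toy camera-placement heuristic: choose n perimeter angles that minimize
# the largest angular gap between adjacent cameras (illustrative only).
import numpy as np

def greedy_placement(candidate_angles, n):
    chosen = [candidate_angles[0]]
    for _ in range(n - 1):
        def worst_gap(angles):
            a = np.sort(np.asarray(angles))
            gaps = np.diff(np.concatenate([a, [a[0] + 2 * np.pi]]))
            return gaps.max()
        best = min((c for c in candidate_angles if c not in chosen),
                   key=lambda c: worst_gap(chosen + [c]))
        chosen.append(best)
    return sorted(chosen)

candidates = np.linspace(0, 2 * np.pi, 16, endpoint=False)
print(greedy_placement(list(candidates), 4))   # roughly evenly spread angles
```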
LivDet 2009 – Fingerprint Liveness Detection Competition 2009
Stephanie Schuckers and Bozhao Tan (Clarkson)
Fingerprint recognition systems are vulnerable to spoof attacks using artificial fingers, such as molds made of silicone, gelatin, or Play-Doh. "Liveness detection" has been proposed to defeat these kinds of spoof attacks. We propose to host the first fingerprint liveness detection competition (LivDet 2009) at ICIAP 2009. The competition will be hosted in collaboration with the University of Cagliari (Gian Luca Marcialis, Fabio Roli, Pietro Coli), who are also active researchers in liveness detection. The goal of this competition is to compare different methodologies for software-based fingerprint liveness detection using a common experimental protocol and a large liveness dataset. The ambition of the competition is to become a reference event for academic and industrial research. The competition is open to all academic and industrial institutions that have a solution to the software-based fingerprint vitality detection problem. Each participant is invited to submit its algorithm as a Win32 console application. Performance will be evaluated on a very large dataset of "fake" and "live" fingerprint images captured with three different optical scanners. The performance ranking will be compiled and published on the competition website, and the best algorithm will receive the "Best Fingerprint Liveness Detection Algorithm Award" at ICIAP 2009.
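A competition of this kind is scored on two error rates: the fraction of live fingers rejected as fake, and the fraction of spoofs accepted as live, at a given threshold on a submitted algorithm's liveness score. The sketch below computes both; the scores and threshold are toy values.

```python
# Illustrative computation of the two liveness error rates from a submitted
# algorithm's scores (higher score = more live; values here are toy data).
import numpy as np

def liveness_rates(live_scores, fake_scores, threshold=0.5):
    live = np.asarray(live_scores)
    fake = np.asarray(fake_scores)
    ferrlive = (live < threshold).mean()   # live fingers rejected as fake
    ferrfake = (fake >= threshold).mean()  # spoofs accepted as live
    return ferrlive, ferrfake

print(liveness_rates([0.9, 0.8, 0.4, 0.95], [0.2, 0.6, 0.1, 0.3]))
```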
On the Super‐Resolution of Iris Images from Video Streams
Patrick Flynn (Notre Dame) and Arun Ross (WVU)
Current trends in iris recognition deployment include (a) the use of video sensors to acquire sequences of images, and (b) the need to exploit images acquired under non-ideal circumstances. Currently available iris matchers perform poorly on iris images that are low in resolution. Since non-ideal circumstances may preclude repositioning the camera to improve resolution, resolution improvement through multi-frame integration is a topic of interest. We propose to combine our expertise in image processing, iris imaging, and video analysis to integrate multiple video frames of an iris into a super-resolution still image suitable for matching. We will explore two scenarios for super-resolution: in the first, a single camera acquires a video stream of a live iris (single-view system); in the second, two cameras, offset by a pre-determined inter-axial distance, acquire multiple video streams of a live iris (multi-view system).
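One simple multi-frame integration baseline in this spirit is shift-and-add super-resolution: upsample each frame, register it to a reference via phase correlation, and average. The OpenCV sketch below is a baseline illustration, not the project's algorithm; a real iris pipeline would also need segmentation, occlusion masking, and quality weighting.

```python
# Minimal shift-and-add super-resolution sketch for grayscale video frames.
import cv2
import numpy as np

def super_resolve(frames, scale=2):
    up = [cv2.resize(f.astype(np.float32), None, fx=scale, fy=scale,
                     interpolation=cv2.INTER_CUBIC) for f in frames]
    ref = up[0]
    acc = ref.copy()
    for f in up[1:]:
        (dx, dy), _ = cv2.phaseCorrelate(ref, f)      # sub-pixel shift estimate
        M = np.float32([[1, 0, -dx], [0, 1, -dy]])    # undo the shift
        acc += cv2.warpAffine(f, M, (f.shape[1], f.shape[0]))
    return np.clip(acc / len(up), 0, 255).astype(np.uint8)

# Usage: sr = super_resolve([frame0, frame1, frame2])  # grayscale uint8 frames
```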
Unconstrained Face Recognition under Non‐Ideal Conditions
Arun Ross (WVU) and Anil K. Jain (MSU)
The matching performance of face recognition systems has significantly improved over the past decade as assessed by FRVT/FRGC evaluations. However, the fundamental problem of matching face images obtained using different cameras and/or subjected to severe photometric (e.g., illumination) and geometric (e.g., compression) transformations continues to pose a challenge. For example, it is difficult to match a high-resolution face image obtained under controlled illumination against a low-resolution face video hosted on YouTube or a geometrically resized face image posted on the web. In this work, the problem of matching face images whose intrinsic image characteristics are substantially different will be investigated. The goal is to design algorithms that can handle the problem of matching disparate face images of the same individual.
Phase 1 – Participation in the Multi‐Biometric Grand Challenge
Stephanie Schuckers (Clarkson), Natalia Schmid (WVU) and Besma Abidi (UTK)
Fusion of face and iris data acquired at a distance could be extremely beneficial for biometric recognition in settings such as airports and ports of entry. The goal of the Multi-Biometric Grand Challenge (MBGC) is to provide a series of recognition challenges for face and iris based on still and video imagery. Our previous work includes face and iris recognition using advanced adaptive algorithmic approaches that account for non-ideal conditions through preprocessing, segmentation, modeling, and the utilization of global iris quality. We propose to participate in the MBGC, which has three stages: (1) the Challenge 1 data was made available in May 2008, and results were presented at a workshop in December 2008; (2) the Challenge 2 dataset and results follow in Spring 2009; and (3) the final stage is the Multi-Biometric Evaluation in Summer 2009. Our approach will be to fuse biometric information from both face and iris extracted over multiple frames. Quality information will be a critical component for selecting individual frames and weighting information at the feature/pixel level. Fusion will be considered at the match score level and at the feature level, where PDE-texton maps or other features can be used to jointly encode face and iris and obtain a robust representation.
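As a small illustration of quality-driven frame selection and weighting at the score level, the sketch below keeps the highest-quality frames of a video and fuses their match scores by a quality-weighted mean; all numbers are placeholders for real matcher and quality-estimator outputs.

```python
# Sketch of quality-driven frame selection and score fusion across a video.
import numpy as np

def fuse_video_scores(frame_scores, frame_qualities, top_k=5):
    """Keep the top_k highest-quality frames; quality-weighted mean score."""
    s = np.asarray(frame_scores, dtype=float)
    q = np.asarray(frame_qualities, dtype=float)
    keep = np.argsort(q)[-top_k:]
    return np.average(s[keep], weights=q[keep])

scores = [0.55, 0.70, 0.40, 0.65, 0.72, 0.30, 0.68]   # per-frame match scores
quality = [0.9, 0.95, 0.20, 0.80, 0.85, 0.10, 0.75]   # per-frame quality
print(fuse_video_scores(scores, quality))
```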
Evaluating and Integrating Speech Recognition Software into Agent99 for Real‐Time Deception Detection
Kevin Moffitt, Sean Humphries, Jay Nunamaker, Judee Burgoon and Pickard Burns (U of A)
The successes of automated post-processing of text for linguistic credibility assessment are well documented (Zhou et al., 2004; Moffitt et al., 2008). Yet real-time processing of interviews and dialogue using these tools has yet to take place. For real-time processing to occur, words must be transcribed to text as they are spoken; this is accomplished by speech recognition (SR) software. SR software has a history of being inaccurate and difficult to use; however, some SR software vendors now claim 95% accuracy, and the medical industry has embraced SR software to cut the time spent preparing patient reports (Alapetite et al., 2008). Even so, integrating SR into the linguistic credibility process has remained untried, and the reliability of SR software as a tool for gathering interview data and dialogue has yet to be evaluated. In this project, we will build a real-time speech evaluation system that integrates SR with Agent99, leveraging software previously developed for linguistic credibility assessment.
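One hypothetical shape for such a pipeline is sketched below: audio chunks flow through a recognizer into a cue analyzer. `stub_recognizer` and `Agent99Client` are stand-ins invented for the example; the project would wire in a commercial SR engine and the existing Agent99 services.

```python
# Hypothetical glue code: speech recognition feeding linguistic cue analysis.
def stub_recognizer(audio_chunk):
    """Stand-in for a commercial SR engine; here chunks are already text."""
    return audio_chunk

class Agent99Client:                        # stand-in for Agent99's interface
    def analyze(self, text):
        words = text.lower().split()
        return {"words": len(words), "hedges": words.count("maybe")}

def run_pipeline(audio_chunks, recognize, analyzer):
    for chunk in audio_chunks:
        text = recognize(chunk)             # real-time transcription step
        yield analyzer.analyze(text)        # credibility cues per utterance

for cues in run_pipeline(["well maybe it was him"], stub_recognizer, Agent99Client()):
    print(cues)
```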
Handedness in Detecting Deception in Cultural Interviews
Matthew Jensen (Oklahoma), Thomas Meservy (UM) and Judee Burgoon (U of A)
Kinesic analysis has been successfully used to discriminate truth from deception in numerous experimental settings. A curious finding that has repeatedly surfaced is the predictive value of cues related specifically to the left hand. Two alternative explanations will be addressed in the proposed research. One is that the differences stem from brain lateralization (each hemisphere has functional specializations), such that the left hand performs different discursive functions than the right. The other is that the effects are due to handedness, and the observed left-hand effects pertain only to those with right-hand dominance. We will use data collected on subject handedness to (a) replicate the hand-related effects and (b) determine whether they are linked specifically to hand dominance, question type, or both.
Looks like Me: Cultural Avatars
Koren Elder, Mark Patton, Aaron Elkins, Carl and Judee Burgoon (U of A)
This study will conduct a lab experiment to empirically test how the gender and ethnicity of avatars can be manipulated to elicit cues that are reliable indicators of truth or deception. The credibility of the avatar and the reactions of the subject will be investigated to determine if there are significant differences based on avatar and subject gender and ethnicity interactions and which avatars best expose deception without creating confounding arousal unrelated to deception. With the increase in use of avatars for kiosks, it is important to understand how the embodiment of an online agent can impact interpersonal communications.