Impact of Cosmetics on the Matching Performance and Security of Face Recognition
Antitza Dantcheva, Arun Ross, Guodong Guo (WVU)
Utilizing Low‐Cost, Portable Depth Cameras for Face Recognition
Guodong Guo, Arun Ross, Bojan Cukic (WVU)
Latent Fingerprint Quality Assessment
Anil K. Jain (MSU)
Automatic Segmentation of Latent Fingerprints
Anil K. Jain (MSU)
A Pre-Processing Methodology for Handling Passport Facial Photos
T. Bourlai (WVU)
LivDet III: Liveness Detection Competition 2013
Stephanie Schuckers, David Yambay (Clarkson)
Experimental Analysis of Automated Latent Fingerprint Systems
Stephanie Schuckers, Megan Rolewicz (Clarkson)
Speaker Recognition Techniques for the NIST 2012 Speaker Recognition Evaluation
Jeremiah Remus (Clarkson)
Resilience of Deception Detection Sensors
Nathan Twyman, Ryan Schuetzer, Jeffrey Gainer Proudfoot, Aaron Elkins, Judee Burgoon (UA)
Testing the Use of Non‐Invasive Sensors to Conduct the Concealed Information Test: Comparing the Accuracy of Oculometric, Vocalic, and Electrodermal Cues in Detecting Familiarity with Persons of Interest
Jeffrey Gainer Proudfoot, Judee Burgoon, Nathan Twyman, Aaron Elkins, Jay F. Nunamaker (UA)
Integrating Physiological Measurements with Avatars for Effective Deception Detection
Thirimachos Bourlai, Arun Ross (WVU)
Stratified vs. Convenience Sampling for the Statistical Design of Biometric Collections
Mark Culp, Thirimachos Bourlai, Bojan Cukic (WVU)
Fingerprint Recognition Using Commercial Digital Cameras
Arun Ross, Jeremy Dawson, Thirimachos Bourlai (WVU)
Modeling IrisCodes: New Approaches to Iris Individuality and Classifications
Arun Ross and Mark Culp (WVU)
Simultaneous Recognition of Humans and their Actions
Guodong Guo, Arun Ross, Bojan Cukic (WVU)
Generating a 3D Face Texture Model from Independent 2D Face Images
Arun Ross (WVU) and Anil Jain (MSU)
Understanding the Science Behind Biometrics: A Systematic Review
Arun Ross and Bojan Cukic (WVU)
Image Enhancement for Iris Recognition from Incomplete and Corrupted Measurements
Aaron Luttman and Stephanie Schuckers (Clarkson)
Stand-Off Speaker Recognition: Effects of Recording Distance on Audio Quality and System Performance
Jeremiah Remus and Stephanie Schuckers (Clarkson)
Identifying Behavioral Indicators of Cognitive Load Associated with Deception and Interview Questions
Judee Burgoon and Jeffrey Gainer Proudfoot (UA)
Validating the SPLICE Implementation of Automated Textual Analysis
Kevin Moffitt, Justin Giboney, Judee Burgoon, Emma Ehrhardt, Jay Nunamaker (UA)
Summaries:
Impact of Cosmetics on the Matching Performance and Security of Face Recognition
Antitza Dantcheva, Arun Ross, Guodong Guo (WVU)
Motivated by the need for deploying highly reliable face recognition systems in both commercial and security applications, we seek to initiate research that studies the impact of cosmetic alterations on face recognition. At the same time, we aim to develop novel algorithmic solutions that allow for robust face recognition in the presence of such cosmetic alterations, which, when skillfully applied, can conceal a person's identity. Recent work has focused on plastic surgery [1, 2], and specifically on how it can impact the reliability of facial recognition. However, such surgical alterations are generally costly and permanent. Cosmetic alterations, on the other hand, tend to be simple, cost-efficient, non-permanent, and socially acceptable; at the same time, they have the potential to radically change appearance. Specifically, such alterations can (a) alter the perceived facial shape by accentuating contouring, (b) alter the perceived nose shape and size by contouring techniques, (c) enhance or reduce the perceived size of the mouth, (d) alter the appearance and contrast of the mouth by adding color, (e) alter the perceived form and color of eyebrows, (f) alter the perceived shape, size and contrast of the eyes, (g) conceal dark circles underneath the eyes, and (h) alter the perceived skin quality and color. In addition to the aforementioned effects, make-up can also be used to successfully camouflage wrinkles, birthmarks, moles, scars and tattoos. All of the above suggests that cosmetic make-up techniques can greatly impact automated face recognition methods. In this work, we will investigate the impact of cosmetics on commercial face recognition systems. Furthermore, we will develop methods to (a) detect the presence of cosmetics on a human face image, and (b) perform face recognition in the presence of cosmetic alterations.
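As one illustrative possibility (not the method proposed above), a cosmetics-presence detector could summarize the color statistics of a face crop and feed them to a binary classifier; the HSV color-moment features, SVM, and labels in the sketch below are assumptions for illustration only.

```python
# Minimal sketch of a cosmetics-presence detector: HSV color moments of a
# face crop fed to an SVM. Features, classifier, and labels are illustrative.
import cv2
import numpy as np
from sklearn.svm import SVC

def color_moment_features(face_bgr):
    """Mean, standard deviation, and skew per HSV channel (9 features)."""
    hsv = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2HSV).astype(np.float64)
    feats = []
    for ch in cv2.split(hsv):
        mu, sigma = ch.mean(), ch.std()
        skew = np.cbrt(((ch - mu) ** 3).mean())
        feats.extend([mu, sigma, skew])
    return np.array(feats)

def train_makeup_detector(face_crops, labels):
    """face_crops: list of BGR face images; labels: 1 = makeup, 0 = none."""
    X = np.stack([color_moment_features(f) for f in face_crops])
    return SVC(kernel="rbf", gamma="scale").fit(X, labels)
```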
Utilizing Low‐Cost, Portable Depth Cameras for Face Recognition
Guodong Guo, Arun Ross, Bojan Cukic (WVU)
Human identification via faces is important in security and law enforcement applications. With advancements in sensor technologies, a number of new sensors are available for face image acquisition. In this project, we will explore the recently developed depth cameras for face recognition. Unlike the expensive, time-consuming laser range scanners, or the fragile stereo vision systems that suffer from the inability to match homogeneous, non-texture regions, depth cameras are low-cost, real-time, and portable. There are two main categories of depth cameras: one based on the time-of-flight principle and the other based on light coding. While the development of these cameras is still ongoing, a few commercial products are available [1]. Our goal is to determine if these cameras can be successfully used for face recognition. Furthermore, we will study the correlation between visible light face images and depth images. In essence, we wish to address the question: Is it possible to utilize depth cameras for face authentication without re-enrolling all the subjects?
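Independently of the sensor category chosen, the depth maps such cameras produce can be back-projected into 3D point clouds with the standard pinhole model, which is one way the depth and visible-light modalities could later be related. A minimal sketch, assuming known calibration intrinsics (fx, fy, cx, cy):

```python
# Back-project a depth map into a 3D point cloud (pinhole model).
# Not tied to any specific camera SDK; intrinsics come from calibration.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """depth: HxW array of metric depths; returns Nx3 array of (X, Y, Z)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth reading
```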
Latent Fingerprint Quality Assessment
Anil K. Jain (MSU)
Latent fingerprints found at crime scenes have a long history as forensic evidence used to identify suspects and convict them in courts of law. The latent fingerprint identification procedure commonly follows the ACE-V protocol: Analysis, Comparison, Evaluation, and Verification. Because latents are typically of poor quality, latent examiners must, during the analysis phase, assess each latent to determine whether sufficient ridge information is present in the image and mark all the available features. Towards achieving a "lights-out" identification mode for latents, we propose an automatic latent quality assessment algorithm that divides latent fingerprint images into several categories based on their quality level. This will help determine the degree of human intervention needed for each category in latent identification. The objectives of this research include (i) developing an algorithm to automatically estimate the quality of input latent fingerprint images, (ii) demonstrating the correlation between latent quality and matching performance, and (iii) determining the degree of human intervention needed to handle latents according to their quality level.
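One conventional local-quality cue, sketched below as an illustration rather than the proposed algorithm, is the coherence of the gradient field in each image block: clear ridge flow yields high coherence, while smudged or noisy regions yield low coherence. The block size and category thresholds are assumptions; the good/bad/ugly category names follow the usage established for the NIST SD27 latent database.

```python
# Blockwise gradient coherence as a crude latent-quality score, then binning.
import numpy as np
from scipy import ndimage

def block_coherence(img, block=16):
    gx = ndimage.sobel(img.astype(np.float64), axis=1)
    gy = ndimage.sobel(img.astype(np.float64), axis=0)
    gxx, gyy, gxy = gx * gx, gy * gy, gx * gy
    h, w = img.shape
    scores = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            a = gxx[i:i + block, j:j + block].sum()
            b = gyy[i:i + block, j:j + block].sum()
            c = gxy[i:i + block, j:j + block].sum()
            denom = a + b
            coh = np.sqrt((a - b) ** 2 + 4 * c * c) / denom if denom > 0 else 0.0
            scores.append(coh)
    return np.mean(scores)

def quality_category(img):
    s = block_coherence(img)
    # Illustrative thresholds; category names follow NIST SD27 usage.
    return "good" if s > 0.5 else "bad" if s > 0.2 else "ugly"
```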
Automatic Segmentation of Latent Fingerprints
Anil K. Jain (MSU)
Latent fingerprints (latents) are one of the most important sources of forensic evidence in criminal prosecution. Latents are typically partial prints with small area, poor quality, and complex background noise. As a result, manual intervention is needed to localize or segment latent prints. There is therefore an urgent need to develop an automatic latent segmentation method as an initial step towards "lights-out" latent identification. The objectives of this research are to (i) develop an algorithm to automatically segment latent fingerprints, (ii) demonstrate an improvement in matching accuracy as a result of this segmentation, and (iii) report confidence levels for the segmentation results to indicate whether manual intervention is necessary.
A Pre-Processing Methodology for Handling Passport Facial Photos
T. Bourlai (WVU)
In many security applications, matching facial images that are severely degraded remains a challenge. Typical sources of image degradation include low illumination conditions, image compression, out-of-focus acquisition, etc. Another type of degradation that has received very little attention in the face recognition literature is the presence of security watermarks on documents (e.g., passports). Although preliminary work in the area has mitigated certain challenges, such as the removal of noise present on documents [1, 2], the image restoration step requires more attention in order to recover information lost from face images during de-noising. In this work, we examine the effects of a pre-processing methodology that mitigates the effects of security watermarks on passport facial photos in order to improve image quality and face recognition (FR) performance overall. The types of images that will be investigated are face images from passport photos. The proposed project will focus on answering the following questions: (1) How do original passport face photos affect recognition performance? (2) Which pre-processing algorithms affect recognition performance the most? (3) Under what conditions is FR feasible at different levels of pre-processing using our novel algorithm?
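As a hedged illustration of one possible pre-processing step (not necessarily the methodology under study), a periodic watermark pattern introduces strong off-center peaks in the image spectrum, which can be attenuated with notch filters; the peak-picking parameters below are assumptions.

```python
# Suppress a periodic overlay by notching the strongest off-center peaks
# in the 2D spectrum. num_peaks, notch_radius, and guard are illustrative.
import numpy as np

def suppress_periodic_pattern(gray, num_peaks=4, notch_radius=3, guard=10):
    f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))
    mag = np.abs(f)
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - cy, xx - cx)
    search = mag * (dist > guard)  # ignore the DC neighbourhood
    for _ in range(num_peaks):
        py, px = np.unravel_index(np.argmax(search), search.shape)
        notch = np.hypot(yy - py, xx - px) <= notch_radius
        f[notch] = 0
        search[notch] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))
```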
LivDet III: Liveness Detection Competition 2013
Stephanie Schuckers, David Yambay (Clarkson)
Fingerprint recognition systems are vulnerable to artificial spoof fingerprint attacks, such as molds made of silicone, gelatin, or Play-Doh. Suspicious presentation detection systems, such as liveness detection, have been proposed to defeat these kinds of spoof attacks. In 2011, the University of Cagliari and Clarkson University hosted the second competition to test software-based and hardware-based liveness detection algorithms. The competition was organized in two parts. Part I (algorithm-based) included the distribution of a dataset of spoof and live images for training, with evaluation of submitted algorithms on a sequestered database. Part II (system-based) included the submission of hardware/software system devices that return a liveness score, evaluated using spoof samples and live subjects. Results were presented at BCC 2011; four groups submitted (two universities, two companies). We propose to host LivDet III, which will again be composed of Part I (algorithm-based, with training/testing datasets) and Part II (system-based, allowing submission of hardware/software systems). Both parts will cover both fingerprint and iris systems. In addition, we propose to lead an international collaboration by allowing organizations to provide datasets of spoof and live images for Part I and by working with multiple international testing organizations to provide a testing setup for submitted hardware systems in Part II. Analysis of performance will establish the state of the art in the field as these technologies begin to emerge.
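For reference, LivDet-style evaluations report two error rates: FerrLive (live samples misclassified as spoof) and FerrFake (spoof samples misclassified as live). A minimal sketch, assuming the 0-100 liveness-score convention with a threshold of 50 used in past editions:

```python
# Compute LivDet-style error rates from per-sample liveness scores.
import numpy as np

def livdet_error_rates(scores, is_live, threshold=50.0):
    """scores: liveness scores (assumed 0-100 scale); is_live: bool labels."""
    scores = np.asarray(scores, dtype=float)
    is_live = np.asarray(is_live, dtype=bool)
    ferrlive = np.mean(scores[is_live] < threshold)    # live called spoof
    ferrfake = np.mean(scores[~is_live] >= threshold)  # spoof called live
    return ferrlive, ferrfake
```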
Experimental Analysis of Automated Latent Fingerprint Systems
Stephanie Schuckers, Megan Rolewicz (Clarkson)
Latent fingerprints can be a key piece of evidence for isolating suspects in a crime and/or building a case against an individual. Latent fingerprint examination is typically a labor-intensive process performed by an expert extensively trained in fingerprint examination. Recently, automated systems that provide on-site latent imaging and latent fingerprint matching against a reference database have become available, yet few studies have considered the efficacy of these mobile latent fingerprint systems. The Stockton Police Department has started using a mobile biometric device to capture latent fingerprints at crime scenes. In a pilot study, they collected 144 (out of 200) latent fingerprints using both the automated device and manual lab methods. With the device there were 28 confirmed hits, as opposed to 37 hits using the lab method; however, the biometric device confirmed its 28 hits in approximately 2-5 minutes, as opposed to the 20-40 hours, or even days, required in a laboratory setting. The purpose of this study is to analyze the factors that contribute to the difference in performance between the automated mobile system and manual examination, through a complementary field and laboratory study.
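Because each latent in such a study is processed by both workflows, one natural analysis (a sketch, not necessarily the planned one) is an exact McNemar test on the discordant pairs; the counts in the example are placeholders, since the pilot study reports only the totals of 28 and 37 hits.

```python
# Exact McNemar test on paired hit/miss outcomes (device vs. lab workflow).
from scipy.stats import binomtest

def mcnemar_exact(device_only_hits, lab_only_hits):
    """Under H0, the discordant pairs split 50/50 between the two methods."""
    n = device_only_hits + lab_only_hits
    return binomtest(device_only_hits, n, p=0.5).pvalue

# Placeholder discordant counts for illustration (not from the pilot data):
p_value = mcnemar_exact(device_only_hits=2, lab_only_hits=11)
```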
Speaker Recognition Techniques for the NIST 2012 Speaker Recognition Evaluation
Jeremiah Remus (Clarkson)
The NIST 2012 Speaker Recognition Evaluation (SRE) is the latest in an ongoing series of speaker recognition evaluations that seek to support the development and advancement of speaker recognition technology. The NIST SRE is well-established and draws participants from many prestigious institutions active in speech research. Entry in the NIST 2012 SRE provides an opportunity for CITeR to participate in the exchange of state-of-the-art ideas and techniques for speaker recognition.
Resilience of Deception Detection Sensors
Nathan Twyman, Ryan Schuetzer, Jeffrey Gainer Proudfoot, Aaron Elkins, Judee Burgoon (UA)
The primary objective of this study is to examine the limitations of certain psychophysiological and behavioral indicators of veracity in a rapid‐screening environment. Rapid screening examinees can use countermeasures, or methods designed to “beat” the test. Anecdotal evidence suggests that using multiple heterogeneous sensors will significantly decrease countermeasure effectiveness; we aim to test this hypothesis directly in a rapid screening context.
Testing the Use of Non‐Invasive Sensors to Conduct the Concealed Information Test: Comparing the Accuracy of Oculometric, Vocalic, and Electrodermal Cues in Detecting Familiarity with Persons of Interest
Jeffrey Gainer Proudfoot, Judee Burgoon, Nathan Twyman, Aaron Elkins, Jay F. Nunamaker (UA)
Oculometric technology (e.g., eye tracking) continues to be a valuable tool in a variety of research disciplines. Extant research suggests that the duration of eye-gaze fixations can be used to identify familiarity (in a face-recognition context). Schwedes and Wentura (2011) found that subjects fixated longer on familiar faces than on unfamiliar faces when presented with six images simultaneously. The purpose of this research is to use an adapted methodology to compare the classification accuracy of an oculometric-based CIT to the standardized CIT approach (using a polygraph device to measure electrodermal responses (EDRs)). An experiment will be conducted in the context of a security screening checkpoint and will thus be more representative of a high-stakes environment. This research will provide valuable insight into the reliability and feasibility of deploying eye tracking systems in security environments to identify individuals possessing knowledge of criminals, terrorists, and others with malicious intent.
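A minimal sketch of the oculometric cue itself: per trial, compare dwell time on the face of interest against the mean dwell time on the foil faces, following the six-image layout described above. The array layout and the paired t-test are assumptions for illustration.

```python
# Test whether dwell time on the target face exceeds that on foil faces.
import numpy as np
from scipy import stats

def familiarity_effect(dwell):
    """dwell: trials x 6 array of dwell times; column 0 = face of interest,
    columns 1-5 = foil faces (assumed layout)."""
    target = dwell[:, 0]
    foils = dwell[:, 1:].mean(axis=1)
    return stats.ttest_rel(target, foils)  # paired t-test across trials
```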
Integrating Physiological Measurements with Avatars for Effective Deception Detection
Thirimachos Bourlai, Arun Ross (WVU)
Recent research in deception detection has focused on the use of Avatars, where an animated computer-generated talking head (an embodied agent) interviews a subject by asking a series of questions. The Avatar then records the ensuing verbal responses in an audio file for future analysis. The use of an Avatar can potentially mitigate issues related to biases (e.g., cultural, economic, or personality-related) encountered when a human agent interviews a subject. In this work, we will extend the capabilities of an existing Avatar system by designing a method to capture the physiological state of the subject (via EDA, PPG and thermal sensors) during the course of the interview. The goal of this exercise is to synchronize the recorded audio responses to the Avatar's generated questions with physiological measurements (e.g., facial thermal imaging recordings) of the subject, thereby facilitating a broader range of analysis leading to effective deception detection. For example, the Avatar system could be trained to repeat a question if the physiological measurements suggest an anomaly during the course of the subject's verbal response. Indeed, by integrating physiological measurements with verbal responses, the Avatar can automatically learn to dynamically customize the series of questions, thereby enhancing the capability for automated deception detection.
Stratified vs. Convenience Sampling for the Statistical Design of Biometric Collections
Mark Culp, Thirimachos Bourlai, Bojan Cukic (WVU)
One of the critical steps underlying progress in biometric research is the collection of data, i.e., single or multiple modalities such as face, iris, fingerprints, etc. Collection usually follows the examination of operational needs leading to the design of scenarios, and the IRB approval process. There is no theory that would help those collecting biometric data determine how many subjects need to be recruited. Typically, an arbitrary number that meets the project's financial constraints is agreed upon. The recruitment of subjects (e.g., the data collected in [1, 3]) is based on local advertising and the willingness of individuals to participate. In statistics, this approach to recruitment is called convenience sampling. It is known that convenience sampling leads to selection bias, which hinders statistical analysis and may distort the results and conclusions of the study. Hence, the accuracy of, and risks associated with, recognition performance estimated in ensuing biometric studies can be compromised. Stratified random sampling (SRS) is a technique designed to reduce selection bias, thus lowering the study's risks and improving the validity of its conclusions. In addition, when SRS is performed properly, the experimenter can estimate the necessary sample size prior to sampling, which is practically useful and may reduce the cost of collection. The proposed project addresses the following questions: (1) How can a biometrics researcher use existing "large" data sets to generate stratified samples? (2) When SRS is used in biometric studies, what practical benefits result from minimizing selection bias? (3) Can we offer a cost-effective strategy for using SRS in future studies? To answer these questions, based on the analysis of anonymized participant data from recent large-scale collections at WVU, we will compare two samples (stratified and convenience) and apply the lessons learned to the upcoming IR-thermal face recognition (FR) data collections funded by CITeR [2].
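To make the contrast concrete, a sketch of the two designs applied to an existing "large" dataset is shown below; the stratum columns are assumed for illustration, and a real design would fix the strata from the operational scenario.

```python
# Convenience vs. proportionally allocated stratified sampling on a dataset.
import pandas as pd

def convenience_sample(df, n):
    return df.head(n)  # whoever signed up first, in arrival order

def stratified_sample(df, frac, seed=0):
    strata = ["sex", "age_group", "ethnicity"]  # assumed stratum columns
    return (df.groupby(strata, group_keys=False)
              .sample(frac=frac, random_state=seed))  # proportional allocation
```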
Fingerprint Recognition Using Commercial Digital Cameras
Arun Ross, Jeremy Dawson, Thirimachos Bourlai (WVU)
Traditionally, livescan fingerprint images have been acquired using contact methods, where the fingertip of the subject explicitly touches a platen. In recent years, however, contactless methods for fingerprint acquisition have been developed based on the principles of 3D optical imaging (e.g., GE and FlashScan3D), spatial phase imaging (e.g., PhotonX), and ultrasound imaging (e.g., Optel). The main drawback of these methods is that they require the finger to be very close to the sensor for successful acquisition. Recent advancements in Commercial Off The Shelf (COTS) imaging hardware make it possible to observe fingerprints in high-resolution hand images captured at distances of up to 2 meters. In this work, we will investigate the possibility of performing fingerprint recognition at a distance of up to 2 meters based on images acquired using COTS SLR and/or cell-phone cameras. Further, we will develop image processing and feature extraction algorithms for (a) matching fingerprint images acquired using digital cameras against each other; and (b) matching fingerprints acquired using digital cameras against livescan fingerprints obtained from traditional sensors (e.g., Crossmatch Guardian).
Modeling IrisCodes: New Approaches to Iris Individuality and Classifications
Arun Ross and Mark Culp (WVU)
Iris recognition systems aim to extract texture information from iris images. To facilitate this, a Gabor filter is convolved with a normalized rectangular iris image and the ensuing phasor response is quantized into a string of '0's and '1's. This string is incorporated into a binary matrix referred to as an IrisCode. Surprisingly, despite its widespread use in operational systems, very few studies have attempted to understand and directly model IrisCodes. In this work, we approach the problem by computing Generating Functions for IrisCodes. The main idea is to generate the underlying IrisCode of '0's and '1's by controlling a few parameters. The parameters will guide how we would expect each IrisCode to be generated, which would in turn give us deeper insight into the distribution of the underlying IrisCodes. Such an approach will provide significant benefits to the iris recognition community: (a) synthetic IrisCodes can be generated for predicting the performance of large-scale biometric systems such as UIDAI; (b) novel methods can be designed for classifying and clustering IrisCodes based on these Generating Functions; (c) models for understanding the individuality of the iris can be developed; (d) application-specific IrisCodes can be designed based on template size requirements; and (e) Generating Functions can provide new insights into iris-based cryptosystems.
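For concreteness, the encoding step described above can be sketched as follows: convolve a normalized iris strip with a complex Gabor kernel and retain only the quadrant of the resulting phasor, i.e., two bits per location. The kernel parameters are illustrative, not operational values.

```python
# Phase-quadrant quantization of a Gabor response into an IrisCode-style
# bit array. Kernel size, wavelength, and sigma are illustrative choices.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=15, wavelength=8.0, sigma=4.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(1j * 2 * np.pi * x / wavelength)

def iris_code(norm_iris):
    """norm_iris: normalized (unwrapped) iris image; returns 2xHxW bit array."""
    resp = convolve2d(norm_iris.astype(np.float64), gabor_kernel(), mode="same")
    return np.stack([resp.real >= 0, resp.imag >= 0])  # phase quadrant: 2 bits
```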
Simultaneous Recognition of Humans and their Actions
Guodong Guo, Arun Ross, Bojan Cukic (WVU)
Human recognition is important in many law enforcement applications. Significant progress has been made in human identification and verification in the past two decades; however, human identification is still a challenging problem, especially when operating in an unconstrained environment with non-cooperative users. In many security and surveillance scenarios, individuals are observed to be performing various actions, rather than standing still. Thus, identifying humans in action is a typical scenario in non-cooperative biometrics. In this work, we will design algorithms that can not only identify individuals but also determine their corresponding actions in a video. An advantage of this research is that the biometric system can process queries involving both identity and action (e.g., retrieving all videos in a database containing the waving action of a particular individual), rather than only the identity-based queries supported by traditional systems.
Generating a 3D Face Texture Model from Independent 2D Face Images
Arun Ross (WVU) and Anil Jain (MSU)
In several law enforcement and military biometric applications, multiple 2D face images of a subject are acquired during enrollment and/or authentication (e.g., mug shot face images of a subject in a booking station). Typically, in these scenarios, both the frontal‐ and side‐profile images of the face are obtained. However, since the frontal and side‐profile images are independently acquired, the “registration” or “alignment” between them may not be known. In this project, we will design methods for registering and combining the frontal and side profile face images of a subject in order to generate a composite 3D shape and texture of the subject’s face. The 3D model will encompass both the geometrical structure and visual appearance of the face. While previous studies in the literature have explored the use of stereovision techniques or expensive 3D face scanners for generating 3D or 2.5D face models, the proposed work will further advance the state‐of‐the‐art by designing the following: (a) methods for generating 3D textures from multiple independent 2D face images; (b) GUI for visualizing the 3D texture by allowing the operator to articulate the model; (c) face aging models that exploit both 3D and 2D information; and (d) efficient algorithms for matching 2D images against 3D texture models in real‐time. This research will result in novel face recognition algorithms that are invariant to pose and possibly expression changes.
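One building block for the registration step (a sketch under the assumption that corresponding 2D landmarks are available in the two views) is a least-squares similarity transform between landmark sets:

```python
# Estimate the similarity transform aligning landmarks across two 2D views.
import numpy as np
from skimage.transform import SimilarityTransform

def align_views(src_landmarks, dst_landmarks):
    """src/dst: Nx2 arrays of corresponding (x, y) landmarks in two views."""
    tform = SimilarityTransform()
    if not tform.estimate(np.asarray(src_landmarks), np.asarray(dst_landmarks)):
        raise ValueError("degenerate landmark configuration")
    return tform  # tform(src) maps source landmarks into the destination view
```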
Understanding the Science Behind Biometrics: A Systematic Review
Arun Ross and Bojan Cukic (WVU)
The past two decades have seen a substantial increase in biometrics activity, accompanied by the deployment of several biometric systems in diverse applications ranging from laptop access to border control. The increased use of biometric solutions in DoD military applications, and the inclusion of biometric evidence in military and criminal courts, necessitate a careful examination of the scientific basis for biometric recognition. In particular, there is an urgent need to systematically review the scientific literature to determine whether some of the common assumptions made about biometric traits, with respect to criteria such as universality, uniqueness, permanence, measurability, performance, acceptability and circumvention, are borne out in the academic literature. Thus, the purpose of this study is to (a) survey published academic papers that address the basic science behind various biometric modalities; (b) identify gaps in existing research and their implications for operational system risks; and (c) provide recommendations for further research and deployment.
Image Enhancement for Iris Recognition from Incomplete and Corrupted Measurements
Aaron Luttman and Stephanie Schuckers (Clarkson)
Human identification when the subject is unaware of being analyzed is a problem of growing importance. Capturing and analyzing iris data for recognition in such scenarios brings a new set of technical problems that are not present in controlled imaging environments. When iris images are captured in an uncooperative environment, the image data is often incomplete, in the sense that parts of the iris may be occluded or simply outside the image frame, as well as corrupted by standard noise and specular highlighting, often across the entire eye. In order to perform iris recognition, both problems must be ameliorated simultaneously. Given a sequence of incomplete images, we will synthesize a composite image using image mosaicing techniques. The mosaiced image may still be incomplete, and the missing data will be filled in via variational inpainting, based on the Navier-Stokes equations and very recent work on inpainting with Bayesian estimation. We propose to integrate state-of-the-art image mosaicing with customized, PDE-based image inpainting into a unified system for successful personal identification from composite iris images generated from sequences of partial irises.
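OpenCV ships a Navier-Stokes-based inpainting routine, so the fill-in stage can be sketched compactly; the mosaicing step and the construction of the missing-data mask are assumed to have been done already.

```python
# Navier-Stokes inpainting of missing regions in a mosaiced iris image.
import cv2

def fill_missing_iris(mosaic_gray, missing_mask, radius=3):
    """mosaic_gray: uint8 image; missing_mask: uint8, nonzero where absent."""
    return cv2.inpaint(mosaic_gray, missing_mask, radius, cv2.INPAINT_NS)
```

The proposed work goes beyond this baseline (customized PDE formulations and Bayesian estimation), but the call above illustrates the basic fill-in operation.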
Stand-Off Speaker Recognition: Effects of Recording Distance on Audio Quality and System Performance
Jeremiah Remus and Stephanie Schuckers (Clarkson)
There has been significant success in the development of techniques to use speech as a biometric, and there is great potential for fusion with other biometrics (e.g., iris, face, physiological) currently under investigation within CITeR. With increasing interest in the collection of biometrics at a distance, it would be beneficial to have a clearer understanding of the sensitivity of speaker recognition systems to the degradation of audio quality when recording at a distance. It is reasonable to expect that distance from the recording device will degrade the signal-to-noise ratio; however, most investigations of audio quality and its effect on speaker identification performance have focused on channel quality (e.g., telephone lines or mobile handsets). While there have been significant efforts within the speaker recognition research community to develop methods for handling session-to-session speaker variability or variations introduced by different microphones, it is unclear how well these solutions can address the problem of speech recorded at a distance. Therefore, we propose to investigate whether current feature decomposition techniques, used to manage inter-session and cross-channel variability, are capable of reducing the variability that results from stand-off recording of speech. We will also assess the ability of published audio quality standards (e.g., ANSI, ITU), as well as subjective assessments, to describe the condition of an audio recording and its suitability for use in a speaker recognition system.
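One simple, assumption-laden way to quantify the degradation being studied is to estimate each recording's SNR by treating its quietest frames as the noise floor and its loudest as speech; the frame length and percentile cut-offs below are illustrative, not a published standard.

```python
# Crude frame-energy SNR estimate for a single-channel recording.
import numpy as np

def estimate_snr_db(signal, frame_len=512):
    """signal: 1-D float array of audio samples."""
    n = (len(signal) // frame_len) * frame_len
    energy = (signal[:n].reshape(-1, frame_len) ** 2).mean(axis=1)
    noise = np.percentile(energy, 10)   # quietest frames ~ noise floor
    speech = np.percentile(energy, 90)  # loudest frames ~ speech + noise
    return 10 * np.log10(speech / max(noise, 1e-12))
```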
Identifying Behavioral Indicators of Cognitive Load Associated with Deception and Interview Questions
Judee Burgoon and Jeffrey Gainer Proudfoot (UA)
One theorized basis for indicators of deception is cognitive load. Deceivers are proposed to experience increased cognitive difficulty while manufacturing lies. However, it is possible that some questions are actually more cognitively taxing for truth tellers than for deceivers, and may therefore lead to false alarms and false negatives when detecting deceit from behavioral indicators. The current study will conduct behavioral analyses on video-recorded interviews from three cheating experiments involving unsanctioned deception with high stakes (potential honor code violations). Results will provide more detailed indicators of the cognitive taxation associated with deception and with specific questions.
Validating the SPLICE Implementation of Automated Textual Analysis
Kevin Moffitt, Justin Giboney, Judee Burgoon, Emma Ehrhardt, Jay Nunamaker (UA)
Structured Programming for Linguistic Cue Extraction (SPLICE) is an easy-to-use, web-based research tool for automated linguistic analysis. SPLICE's major function is to perform automated textual analysis of ordinary text files and return values for linguistic cues such as the total number of words, dominance, negativity, average word length, and average sentence length. However, many of these cues, including dominance and submissiveness, still lack theory-driven validation, and even more cues have not been subjected to reliability tests. Given the potential SPLICE has as a tool for researchers and end users who want to expand their measures beyond the psychosocial dictionaries found in software like LIWC, we propose to significantly strengthen SPLICE by subjecting its output to validation and reliability tests. In addition, we will provide SPLICE users with the capability of uploading custom psychosocial dictionaries. The end result of this project will be a theory-driven, reliable tool for the analysis of texts that supports custom dictionaries.
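A toy re-implementation of the surface-level cues named above (word count, average word length, average sentence length) can serve as an independent reference when checking SPLICE's output for reliability; the tokenization rules here are deliberately simple assumptions.

```python
# Compute simple surface-level linguistic cues from raw text.
import re

def surface_cues(text):
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "word_count": len(words),
        "avg_word_length": sum(map(len, words)) / max(len(words), 1),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
    }
```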