2014 Projects

LivDet 2015: Liveness Detection Competition 2015
Stephanie Schuckers, David Yambay, Joe Skufca

Credibility Assessment On The Go: Evaluating a Tablet‐Based Concealed Information Test
Jim Marquardson, Jeffrey Proudfoot, Mark Grimes, Justin Giboney

Methods for producing cancellable and secure templates for Face Recognition
Venu Govindaraju, Sergey Tulyakov (UB)

Face Liveness Detection
Anil K. Jain

Heterogeneous Face Recognition: Still‐to‐Video Matching in Real‐World Scenarios
Guodong Guo (WVU)

Security Analysis of Fingerprint Matching Systems
Venu Govindaraju, Atri Rudra (UB)

An interviewing chat bot for enhancing voluntary information disclosure
Ryan Schuetzler, Mark Grimes, Justin Giboney, Joey Buckman, Jim Marquardson, Judee Burgoon

Validating the Representativeness of Samples from Sequestered Biometric Data Sets
Bojan Cukic, Mark Culp (WVU), Michael Schuckers (St. Lawrence)

Biometric Aging in Children
Dan Rissacher, Ph.D.; Stephanie Schuckers, Ph.D.; Laura Holsopple; Patty Rissacher, M.D.

Improving User Perceptions of Identification / Authentication Technologies: Empowering Users with Control to Reduce Privacy Concerns
David Wilson, Jeffrey Proudfoot, Ryan Schuetzler, Bradley Dorn, Joe Valacich

Cross‐Device and Cross‐Distance Matching Face Recognition using Cell Phones with Enhanced Camera Capabilities
Thirimachos Bourlai

Multimodal Biometrics for Long‐term Active User Authentication
Venu Govindaraju/Ifeoma Nwogu

Improving Writer Identification/Verification for Mobile and Touch Environments by Modeling Writer Styles
Venu Govindaraju/Srirangaraj Setlur/Sergey Tulyakov

Deception Detection Using Computer Expression Recognition
Judee Burgoon, Steven J. Pentland, Nathan Twyman

Multi‐channel physiological simulator: Extension
Edward Sazonov, Timothy Haskew and Stephanie Schuckers

Covert Detection of Strategic and Nonstrategic Deception Cues with a Bogus Concealed Information Test
Jeffrey Gainer Proudfoot (UA), Judee K. Burgoon (UA), Aaron Elkins (Imperial College London / UA), Nathan Twyman (UA)

Fingerprint Identification: A Longitudinal Study
Anil Jain (MSU)

Fusing Biometric and Biographic Information in Identification Systems
Arun Ross (MSU), Anil K. Jain (MSU), Don Adjeroh (WVU), Bojan Cukic (WVU)

Touch DNA: Fusing Latent Fingerprint with DNA for Suspect Identification
Arun Ross (MSU), Anil Jain (MSU), Jeremy Dawson (WVU), Tina Moroose (WVU)

Hardware Accelerator Approach Towards Efficient Biometrics Cryptosystems
Chen Liu and Stephanie Schuckers

 

Summaries:

 

LivDet 2015: Liveness Detection Competition 2015

Stephanie Schuckers, David Yambay, Joe Skufca

Fingerprint recognition systems are vulnerable to artificial spoof fingerprint attacks, such as molds made of silicone, gelatin, or Play-Doh. Presentation attack detection systems, such as liveness detection, have been proposed to defeat these kinds of spoof attacks. In 2013, the University of Cagliari and Clarkson University hosted the third competition to test software-based and hardware-based fingerprint liveness detection algorithms. This competition was organized in two parts. Part I (Algorithm-based) included distribution of a dataset of spoof and live images for training, with evaluation of submitted algorithms on a sequestered database; Part II (System-based) included submission of a hardware/software system that returned a liveness score and was evaluated against spoof samples and live subjects. Results were presented at ICB 2013. Numerous groups submitted to this competition, reflecting increasing interest from the community. LivDet 2013 also included the inaugural iris liveness detection competition, LivDet 2013-Iris. Iris systems have shown a weakness to attacks in which a client obscures their natural iris pattern with a patterned contact lens or uses a printed iris image. Clarkson University partnered with Warsaw University of Technology and the University of Notre Dame to create this initial iris liveness detection competition; LivDet 2013-Iris included only Part I (Algorithms). The dataset of spoof and live iris images from this competition has been made available to researchers. We propose to host LivDet 2015. LivDet 2015 will be composed of Part I (Algorithm-based), with training/testing datasets, and Part II (System-based), which allows submission of hardware/software systems. Both parts will cover both fingerprint and iris systems. Analysis of performance will establish the state of the art in the field as technologies begin to emerge. As an additional component, we will develop an experimental plan for a LivDet-Voice competition.
Speaker recognition systems are known to be vulnerable to playback attacks, voice conversion, speech synthesis, and mimicking attacks. Such spoofs are non-trivial to develop and execute. To support inclusion of a voice component in future LivDet competitions, we will use a small portion of project funding to develop audio-spoof methodologies and competition protocols.

 

Credibility Assessment On The Go: Evaluating a Tablet‐Based Concealed Information Test

Jim Marquardson, Jeffrey Proudfoot, Mark Grimes, Justin Giboney

People frequently deceive others by concealing information. The Concealed Information Test (CIT) is a method of conducting credibility assessment interviews. Interviewees are presented with stimuli while physiological and behavioral variations are measured. Innocent people react randomly to the stimuli while persons concealing information have distinct physiological responses when presented with crime‐relevant information. Physiological cues are often measured using the same sensors as the polygraph. We propose building a tablet‐based CIT application. The application would automate interviewing and use the tablet’s microphone, accelerometer, and gyroscope to distinguish levels of arousal that indicate deception.

 

Methods for producing cancellable and secure templates for Face Recognition

Venu Govindaraju, Sergey Tulyakov (UB)

We seek to extend our work on the generation and matching of privacy-preserving fingerprint templates to other biometric modalities such as faces. Unlike fingerprints, faces exhibit a high degree of variation arising from pose, illumination, age, facial hair, etc., and exact matching of features analogous to fingerprint minutiae points is not feasible. As such, face recognition has largely been approached using holistic methods such as principal component analysis and linear discriminant analysis, followed by approximate matching of the projected feature vectors. In such a scenario it is difficult to use highly secure one-way hash functions to protect the templates, since matching accuracy would deteriorate severely in the absence of exact matches. Several algorithms have been proposed for the protection of face templates, but they fall short: they either do not provide the security that hash functions offer, leak information about the distribution of the true templates, or require additional user-specific information. We believe that by using localized features from sub-regions of the face we can achieve exact matching between regions of the template and thus provide the high security of one-way hashes without revealing any information about the distribution of the original user data. We propose detecting meaningful key points in the face region and designing and extracting features suitable both for hashing (high security) and for high matching accuracy. We will perform experiments comparing several localized features and quantization techniques, along with a theoretical analysis of the security provided.
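A minimal sketch of the quantize-then-hash idea for localized features (the feature values, quantization step, salt, and acceptance threshold below are all hypothetical): features falling in the same quantization bin hash identically, so region-wise matching can be exact even though raw measurements vary slightly.

```python
import hashlib

def protect_local_feature(feature, step=0.25, salt=b"region-key"):
    """Quantize a local feature vector, then apply a one-way hash.

    Features that land in the same quantization bin hash identically,
    so the stored template supports exact matching without revealing
    the original feature values. `step` and `salt` are illustrative.
    """
    bins = tuple(round(x / step) for x in feature)
    payload = salt + repr(bins).encode()
    return hashlib.sha256(payload).hexdigest()

def match_templates(enrolled, probe, threshold=0.6):
    """Count how many regional hashes agree; accept above a threshold."""
    hits = sum(1 for e, p in zip(enrolled, probe) if e == p)
    return hits / len(enrolled) >= threshold

# Enrollment: hash each local (sub-region) feature vector.
enrolled = [protect_local_feature(f)
            for f in [(0.10, 0.92), (0.51, 0.33), (0.75, 0.80)]]
# Probe: small intra-class variation stays within the quantization bins.
probe = [protect_local_feature(f)
         for f in [(0.12, 0.90), (0.49, 0.30), (0.74, 0.81)]]
print(match_templates(enrolled, probe))  # True: all three regions agree
```

Because the hash is one-way, the stored template reveals nothing beyond bin equality, and re-issuing with a different salt yields a new, cancellable template.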

 

Face Liveness Detection

Anil K. Jain

With the growing popularity of face recognition, particularly in unattended (e.g., access control) and consumer applications (e.g., mobile phone unlocking), there is an urgent need to prevent spoof attacks, such as printed-photo or replayed-video attacks. The face unlocking application that ships with Android phones can be easily circumvented by these simple attacks; more sophisticated attacks involve 3D face masks. While several face anti-spoofing methods have been published, they are not robust: as an example, face liveness methods trained on the Idiap database perform poorly on the CASIA database. The objective of this research is to leverage multiple face spoof databases to design a (i) robust, (ii) accurate, and (iii) real-time face spoof detector.

 

Heterogeneous Face Recognition: Still‐to‐Video Matching in Real‐World Scenarios

Guodong Guo (WVU)

Heterogeneous face recognition (HFR) is an emerging topic in biometrics, for both academic research and real-world applications. The study of HFR is motivated by advances in sensor technology that make it possible to acquire face images from diverse imaging sensors, such as near-infrared (NIR), thermal infrared (IR), and three-dimensional (3D) depth cameras, and also by the demands of real applications; e.g., the recent IARPA Janus program requires HFR in its Phase III. In addition to typical HFR, some other emerging face recognition problems can be categorized as HFR as well: for instance, matching between makeup and non-makeup faces [3,4], between faces from digital photos and video frames, or between faces of high and low resolution [2]. Current commercial biometric systems cannot deal with HFR well [1]. In this project, we will review the methods developed for various HFR problems, and then focus on a specific HFR problem, still-to-video (S2V) face recognition in real-world scenarios (containing a variety of variations in, e.g., quality, resolution, and pose). This problem has not yet been well studied [5].

 

Security Analysis of Fingerprint Matching Systems

Venu Govindaraju, Atri Rudra (UB)

With the increasing popularity of fingerprints to secure data, there is a greater need to secure fingerprint templates. Fingerprint template security has been studied, and there are many systems claiming various levels of security. These systems are analyzed using a wide variety of techniques with varying rigor and accuracy. Since the techniques used are so different, it is difficult to compare the security of any two schemes, even if the schemes themselves are similar. This project will work toward solving these issues and lead to an accurate way of measuring and comparing the actual template security of such systems. We will start by analyzing existing systems with template security, developing rigorous proofs of their security where these are lacking. We will then generalize these proofs into methods for proving the security of other systems. Our goal is a generic way to analyze arbitrary systems based on a list of properties a system should have; for each property, we will provide a thorough analysis of its impact on template security. Given these properties and analyses, we will develop a template scheme that can be used to build systems with proven security. Throughout, we will keep a strong focus on the balance between theory and practice, as both are necessary for a fingerprint matching scheme that claims template security.

 

An interviewing chat bot for enhancing voluntary information disclosure

Ryan Schuetzler, Mark Grimes, Justin Giboney, Joey Buckman, Jim Marquardson, Judee Burgoon

As automated agents become more pervasive in information gathering (e.g., in a medical office or during a fraud investigation), it is important to understand how they compare to human or survey-based methods of information solicitation. This research will investigate the design of a chat bot for soliciting sensitive information from individuals. We will create a chat bot that conducts a dynamic interview, using content analysis to create individually customized follow-up questions in order to get people to disclose information that they initially withhold.

 

Validating the Representativeness of Samples from Sequestered Biometric Data Sets

Bojan Cukic, Mark Culp (WVU), Michael Schuckers (St. Lawrence)

As the application of biometrics in identity management systems grows in scale, accurate performance prediction requires adequate population/data samples. Such samples can be assembled through careful data collection or through informed selection from existing biometric data sets. The proposed research concentrates on the second approach: validating the representativeness of a given test sample for performance prediction on a large, but possibly sequestered, identity management data set. Such a test sample could be created from the sequestered data set or come from outside sources. Somewhat surprisingly, the representativeness of biometric samples for performance prediction has not been directly addressed in research. Because biometric technologies deal with human subjects, collections are relatively expensive and rely upon convenience sampling [1]. But recent literature clearly indicates that the performance of biometric algorithms can be biased by the gender, age, and even ethnic-group composition of the sample [2]. We exploited this observation in prior research with the goal of minimizing sample size for performance prediction of face recognition through stratified sampling [3]. A proactive method for performance prediction from random sampling, using the variability-control technique generally known as reliability assessment charts, allows biometric performance assessment [4], but it does not guide the selection of adequate test samples of human subjects. Regardless of the approach, if bias is present in the test sample, statistical projections of system performance will likely be misleading and lead to inaccurate conclusions. For this project, we have been offered access to a large biometric repository of identity management data, Data Set A, in three modalities: face, fingerprints, and iris.
We will work together with the custodians of the sequestered US Government identity management data set, Data Set B. For each modality, starting with face, we plan to identify the biometric quality and population factors that drive the distribution of match scores in Data Sets A and B. We will achieve this using a minimal sample size from the sequestered Data Set B. We will develop a statistical definition of biometric data set “representativeness” with respect to the ability to predict performance within specified error bounds.
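The stratified-sampling idea from our prior work [3] can be sketched in a few lines (the demographic attribute and proportions here are hypothetical): the test sample is drawn so that each demographic stratum appears in the same proportion as in the full data set.

```python
import random

def stratified_sample(records, key, n, seed=0):
    """Draw n records so each stratum keeps its population proportion.

    `records` is a list of dicts; `key` names the demographic attribute
    (e.g., gender or age group) used to form strata.
    """
    rng = random.Random(seed)
    strata = {}
    for r in records:
        strata.setdefault(r[key], []).append(r)
    sample = []
    for value, members in strata.items():
        # Per-stratum quota proportional to the stratum's population share.
        quota = round(n * len(members) / len(records))
        sample.extend(rng.sample(members, quota))
    return sample

# Hypothetical population: 70% group "A", 30% group "B".
population = [{"id": i, "group": "A" if i < 700 else "B"} for i in range(1000)]
sample = stratified_sample(population, "group", n=100)
print(sum(1 for r in sample if r["group"] == "A"))  # 70 of the 100 drawn
```

In general the per-stratum rounding can make the total differ from n by one or two; a production sampler would reconcile the quotas.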

 

Biometric Aging in Children

Dan Rissacher, Ph.D.; Stephanie Schuckers, Ph.D.; Laura Holsopple; Patty Rissacher, M.D.

In this project we will collect biometrics (fingerprint, footprint, iris, face, hand vein, voice) from children ages 0-18 over multiple visits. The data will be analyzed toward multiple goals: 1) the earliest age at which each modality is viable; 2) the variability of each modality with age; 3) developing models to account for age variations; and 4) developing age measurement metrics. The search for age-determination metrics will include data already obtained with a modality (e.g., iris features, vein measurements) and data that could easily be collected simultaneously (e.g., pupil or eyeball size).

 

Improving User Perceptions of Identification / Authentication Technologies: Empowering Users with Control to Reduce Privacy Concerns

David Wilson, Jeffrey Proudfoot, Ryan Schuetzler, Bradley Dorn, Joe Valacich

New technologies designed to improve identification and authentication accuracies are continuously developed and adopted for use by government agencies conducting security operations. Interactions with these systems are often mandatory, raising privacy concerns about the data that is collected. These technologies are often designed and implemented with little emphasis on how user perceptions of these technologies may influence their performance. We propose that a new area of research should be emphasized, namely, how to reduce the anxiety/concern of individuals regarding the disclosed information. This project will examine one such strategy: measuring the effect of restoring individuals’ perceived control over the disclosed information. The psychological stress literature (Averill, 1973) indicates that two key forms of control—behavioral and cognitive—predict one’s perceived control, and these will be evaluated for their applicability and utility in the experimental design.

 

Cross‐Device and Cross‐Distance Matching Face Recognition using Cell Phones with Enhanced Camera Capabilities

Thirimachos Bourlai

Standard face recognition (FR) systems compare new facial images, or probes, with gallery pictures to establish identity. They typically perform well with good-quality, visible-band cameras when lighting is good and subjects are cooperative and close to the camera. However, many law enforcement and military applications deal with mixed FR scenarios that involve matching probe face images, captured by different portable devices (cell phones, tablets, etc.) and at variable distances, against good-quality face images (e.g., mug shots) acquired using high-definition camera sensors (e.g., DSLR cameras). Although most portable devices operate in the visible band, the problem of cross-scenario matching, e.g., matching face images captured by different camera sensors and devices under different conditions (indoors, outdoors, at variable distances), is an open area for research. This is also known as the heterogeneous FR problem, and a potential solution would enable interoperability by adding a device-independent matching component. While there are baseline studies reported in the literature where face images captured from different devices are matched against good-quality visible face images [1-6], to our knowledge there is no reported study where all face datasets are simultaneously collected (i) using various portable devices that have the capability (sensors) to acquire mid-range (>10 meters) face images (e.g., the Samsung S4 Zoom with 10x optical zoom, the Nokia 1020 with a 41 MP sensor, etc.) and (ii) at different standoff distances. In this work, we will investigate the benefits of cross-device and cross-distance mobile-based face recognition. Our proposed work will investigate the following questions: (1) Can we efficiently match mid-range outdoor (>10 m) face images captured by cell phone/tablet devices to their good-quality, indoor, visible counterparts? (2) Can we repeat (1) when standoff distances vary (e.g., 10 meters outdoors vs. 2 meters indoors)? (3) What is the maximum operational distance of the aforementioned cell devices with long-range camera capabilities; in other words, what is the maximum standoff distance at which FR rank-1 scores are still acceptable (e.g., >98% on a target of 100 people)?

 

Multimodal Biometrics for Long‐term Active User Authentication

Venu Govindaraju/Ifeoma Nwogu

Computer systems are extremely vulnerable to “masquerading attacks,” in which an unauthorized human or software agent impersonates a user on a computer system or network. Standard methods to authenticate a computer/network user typically occur once, at initial log-in. These involve user proxies, especially passwords and smart cards such as common access cards (CACs) and service ID cards, all of which suffer from a variety of vulnerabilities. Actively and continually authenticating the user allows intruders to be identified before they hijack the session of an authorized individual who may have momentarily stepped away from his or her console. We therefore propose to investigate multimodal biometric-driven authentication processes. The biometric methods we will investigate include combining face recognition, recognition based on the keystroke dynamics and mouse movements of the user at the terminal, and lastly the intrinsic language-usage attributes of the user.

 

Improving Writer Identification/Verification for Mobile and Touch Environments by Modeling Writer Styles

Venu Govindaraju/Srirangaraj Setlur/Sergey Tulyakov

With the increasing use of touch‐based handwritten text input for smartphones, tablets and other touch screen surfaces, online writer identification and recognition are becoming critical needs for a variety of applications. We believe writing style represents a shared component of individual handwriting. Thus a person’s handwriting can be a priori conceptualized as an individual‐specific combination (determined by a person’s physiology ‐ genetic factors) of a shared pool of writing styles (often determined culturally ‐ memetic factors). We explicitly model this theoretical framework using a Latent Dirichlet Allocation based approach for the task of writer identification [3, 4]. By using LDA we can efficiently model a large superset of writers by using a significantly smaller subset of writing styles. As LDA is a generative model, writers who are not in the original corpus may also be identified from the existing learned distribution of writing styles. The LDA‐based model overcomes the limitations of scalability and extensibility of other approaches [1, 2] which is critical in the online domain. We propose to adapt and test this model on handwritten input on mobile platforms.
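The modeling idea above can be sketched as follows. This is a toy illustration on entirely synthetic data, using scikit-learn's LatentDirichletAllocation as a stand-in for the proposed pipeline: each handwriting sample is reduced to a bag-of-codewords histogram (counts of quantized stroke/shape features), and LDA recovers per-sample proportions over a small shared pool of styles.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Synthetic corpus: 5 writers, 20 samples each, 50-word codebook.
n_writers, samples_per_writer, n_codewords = 5, 20, 50
styles = rng.dirichlet(np.ones(n_codewords) * 0.1, size=3)  # 3 shared styles
mix = rng.dirichlet(np.ones(3), size=n_writers)             # per-writer blend
counts = np.vstack([
    rng.multinomial(200, mix[w] @ styles)   # one sample = 200 codeword draws
    for w in range(n_writers) for _ in range(samples_per_writer)
])

# Model the shared pool of writing styles as LDA "topics"; each sample is
# then represented by its mixture over styles rather than raw codewords.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
theta = lda.fit_transform(counts)  # per-sample style proportions
print(theta.shape)                 # (100, 3); each row sums to 1
```

Identification would then compare a new sample's style mixture against known writers' mixtures, which is what lets the model generalize to writers outside the training corpus.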

 

Deception Detection Using Computer Expression Recognition

Judee Burgoon, Steven J. Pentland, Nathan Twyman

Facial movements and expressions can be a measure of emotion and cognitive strain, and a possible sign of deception in screening and interviewing scenarios. Most deception research utilizing facial expressions has compared groups of deceptive/guilty subjects against innocent/truthful ones. The proposed project will conduct within-subjects analyses, looking for deviations from each subject's own truthful baseline when responding deceptively. As a further innovation, the investigation will test whether facial movements function as telltale indicators during a concealed information test. If so, they can be added to the arsenal of nonverbal cues of criminal knowledge and malintent.

 

Multi‐channel physiological simulator: Extension

Edward Sazonov, Timothy Haskew and Stephanie Schuckers

The goal of the original project was to create a physiological simulator for testing polygraph equipment. The simulator is capable of simultaneous, time-synchronous playback of recorded physiological signals such as respiration (2 independent channels), electrodermal activity (1 channel), and cardiovascular activity as observed by a blood pressure cuff (1 channel). The goal of the current extension is to build 3 additional simulators and test them for repeatability of results.

 

Covert Detection of Strategic and Nonstrategic Deception Cues with a Bogus Concealed Information Test

Jeffrey Gainer Proudfoot (UA), Judee K. Burgoon (UA), Aaron Elkins (Imperial College London / UA), Nathan Twyman (UA)

Interpersonal Deception Theory (IDT) contends that “Compared with truth tellers, deceivers (a) engage in greater strategic activity designed to manage information, behavior, and image and (b) display more nonstrategic arousal cues, negative and dampened affect, noninvolvement, and performance decrements.” A recent Concealed Information Test (CIT) study using visual stimuli found that those concealing information fixated on the center of the screen and exhibited longer response latencies than the control group of truth tellers throughout the test, regardless of whether target or nontarget stimuli were presented. This unexpected finding suggests that a new interviewing technique (adapted from the CIT) may be feasible for identifying strategic and nonstrategic indicators of deception. Because this adapted technique can use bogus target items, it may be more feasible for adoption and use in the field.

 

Fingerprint Identification: A Longitudinal Study

Anil Jain (MSU)

Fingerprint identification is based on two fundamental premises: (i) persistence and (ii) uniqueness of the ridge pattern. Although a number of statistical models have been proposed to demonstrate fingerprint uniqueness (individuality), the persistence of fingerprints has been generally accepted based on anecdotal evidence. In this study, our objective is to (i) formally study the impact of elapsed time (time span) between two fingerprint impressions on genuine match scores, (ii) model the fingerprint longitudinal data with multilevel statistical models, (iii) identify additional predictive variables of genuine match scores (e.g., subject’s age, gender, race, changes in image acquisition method (inked or livescan), etc.), and (iv) quantify the impact of these factors on genuine match scores. The null hypothesis we want to test can be stated as follows: Fingerprint identification accuracy does not depend on the time span between the query and reference print.
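The null hypothesis can be framed as testing whether the regression coefficient on time span is zero. A minimal sketch on simulated data (all numbers below are hypothetical; the proposal's multilevel models would add per-subject random effects on top of this fixed-effect fit):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate genuine match scores for 200 comparison pairs: a hypothetical
# baseline score that decays slightly with elapsed time (in years),
# plus measurement noise.
time_span = rng.uniform(0, 10, size=200)
score = 80.0 - 0.6 * time_span + rng.normal(0, 2.0, size=200)

# Ordinary least squares: score = b0 + b1 * time_span.
# Under the null hypothesis, b1 = 0; the estimate here recovers the
# simulated decay of about -0.6 per year.
X = np.column_stack([np.ones_like(time_span), time_span])
b0, b1 = np.linalg.lstsq(X, score, rcond=None)[0]
print(round(b1, 2))
```

A longitudinal analysis would additionally model within-subject correlation, since each subject contributes impressions at several time points.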

 

Fusing Biometric and Biographic Information in Identification Systems

Arun Ross (MSU), Anil K. Jain (MSU), Don Adjeroh (WVU), Bojan Cukic (WVU)

Multibiometric systems consolidate evidence provided by multiple sources to establish the identity of an individual [1]. Typically, these sources correspond to biometric traits such as face, iris, fingerprint, and palmprint. We propose to investigate the problem of combining non-biometric, biographic data (such as name, age, gender, ethnicity, nationality, etc.) with biometric information in order to render better decisions about the identity of individuals. Such a research effort is warranted due to the potential use of mixed data (i.e., biometric and non-biometric) in large-scale identity management systems such as US-VISIT (now OBIM), TWIC, and E-VERIFY. Further, social media sites such as Facebook, LinkedIn, and Twitter carry both biometric (viz., face images) and biographic details of an individual. However, the role of biographic data in establishing identity in large-scale systems has hitherto not been studied. This project will undertake a systematic study to establish the utility of combining biographic data with biometric information to establish identity.
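One simple form such a combination could take is score-level fusion. In this illustrative sketch (the weighting and the use of edit distance for name similarity are our own assumptions, not the project's method), a borderline biometric match score is reinforced by a near-identical name:

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[-1] + 1,          # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def name_similarity(a, b):
    """Normalize edit distance into a [0, 1] similarity score."""
    a, b = a.lower(), b.lower()
    longest = max(len(a), len(b), 1)
    return 1.0 - edit_distance(a, b) / longest

def fused_score(biometric_score, name_a, name_b, w_bio=0.8):
    """Weighted sum of a biometric match score (already in [0, 1])
    and a biographic (name) similarity. The weight is illustrative."""
    return w_bio * biometric_score + (1 - w_bio) * name_similarity(name_a, name_b)

# A borderline face-match score of 0.55 rises once the names agree closely.
print(round(fused_score(0.55, "Jon Smith", "John Smith"), 3))  # 0.62
```

In practice the weights would be learned, and categorical attributes (gender, nationality) would contribute their own comparison scores to the fusion.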

 

Touch DNA: Fusing Latent Fingerprint with DNA for Suspect Identification

Arun Ross (MSU), Anil Jain (MSU), Jeremy Dawson (WVU), Tina Moroose (WVU)

Touch DNA refers to the genetic sample left behind by a suspect at a crime scene. Typically, this may correspond to a few skin cells adhering to a surface touched by the suspect; for example, a criminal may touch an object, leaving behind a latent fingerprint as well as skin cells or sweat associated with that fingerprint. Processing and analyzing the DNA extracted from these cells in order to establish human identity is challenging and prone to error due to (a) the limited number of cells that are available (and, consequently, low-yielding DNA), and (b) the inevitable sample contamination that occurs due to sample exposure and the extraction and storage processes. In this project, we seek to develop methods for suspect identification that combine touch DNA with latent fingerprints obtained from a crime scene. In this scenario, both the DNA and the fingerprint are expected to be of low quality, i.e., they are inherently corrupted and therefore less reliable when used independently for suspect identification. The goals of this work are to: (a) develop an understanding of how low-copy DNA processing methods impact signal quality; (b) develop advanced signal processing and matching routines for extracting identifiable data from these two potentially corrupted modalities (i.e., fingerprint and DNA); (c) implement a fusion framework that combines these two modalities in order to identify a suspect; and (d) execute a performance evaluation framework for gauging the efficacy of the proposed signal processing and fusion schemes.

 

Hardware Accelerator Approach Towards Efficient Biometrics Cryptosystems

Chen Liu and Stephanie Schuckers

Biometric cryptosystems combine biometrics and cryptography at a level that allows biometric matching to effectively take place in the cryptographic domain. This is particularly attractive in cloud-computing scenarios where strong emphasis is placed on the integrity of the biometric data (templates). Even though biometric cryptosystems can provide an enhanced level of security, the need to access and process the data, ideally in real time for large datasets, poses a great challenge. We propose to design specialized hardware accelerators to meet the computational demands associated with designing and implementing such systems. The accelerators will be designed as customized IP (intellectual property) cores, developed using a hardware description language and implemented on an FPGA (Field Programmable Gate Array) platform. We will validate our proposed approach by comparing performance, power consumption, and energy efficiency against traditional approaches without hardware accelerators. This research will facilitate efforts to make biometric cryptosystems more widely deployed in security identification for both industry and government, targeting future cloud-computing platforms.
