2015 Projects

Longitudinal Study on Face Recognition
Anil K. Jain (MSU)

Automated Face Image Quality Assessment Using a Learning-based Approach
Guodong Guo (WVU)

Partial Face Matching Across the Infrared Band
Thirimachos Bourlai (WVU), Xin Li (WVU), Natalia Schmid (WVU), and Stephanie Schuckers (Clarkson)

Noncooperative Human Identification by Face and Clothing
Gianfranco Doretto (WVU)

A Cloud-based Biometric Service Model for Iris and Ocular Recognition Using a Smartphone
Arun Ross (MSU), Adjeroh (WVU), and Matt Valenti (WVU)

Key Frame Analysis for Face in Video Recognition
Chen Liu, Dan Rissacher (Clarkson)

Biometric Aging in Children
Daniel Rissacher, Stephanie Schuckers, Laura Holsopple (Clarkson) and Patty Rissacher (Canton-Potsdam Hospital)

Improving User Perceptions of Identification/Authentication Technologies: Empowering Users with Control to Reduce Privacy Concerns
David Wilson, Jeffrey Proudfoot, Ryan Schuetzler, Bradley Dorn, Joe Valacich (UA)

Probabilistic Modeling of Facial Recognition and Tracking Coverage in Smart Spaces Using Actuated Camera Networks
Karthik Dantu, Srirangaraj Setlur, Venu Govindaraju (UB)

Assessing the Match Performance of Non-Ideal Operational Facial Images Using 3D Image Data
Jeremy Dawson (WVU)

Video-based Face Recognition with Quality Measures
Guodong Guo

A Forensic Approach to Iris Analysis
Arun Ross (MSU), Matthew Valenti (WVU)

Interoperability of Fingerprint Spoof Detectors Across Different Sensors and Materials
Arun Ross (MSU), Stephanie Schuckers (CU)

Validating the Representativeness of Samples from Sequestered Biometrics Data Sets (Phase II)
Mark Culp, Kenneth Ryan, Jeremy Dawson (WVU), Bojan Cukic (UNCC)

Context-aware Anomaly Detection in Internet of Biometric Things (IoBT)
Kantarci, M. Erol-Kantarci, Stephanie Schuckers (CU)

Development and Validation of a Standard Test of Deception Detection Accuracy
Judee Burgoon (UA), J. Hall (Northeastern U), Jim Marquardson (UA/MTU)

The Effect of Perceived Behavior Surveillance on Voluntary Disclosure
Bradley Dorn, L. Spitzley (UA)

mBrain Password: Developing the EEG-based User Authentication System on Mobile Devices
Xu, Srirangaraj Setlur (UB)

Secure deep learning algorithms for biometric matching
Jesse Hartloff, Rohit Pandey (UB)

Summaries:

Longitudinal Study on Face Recognition

Anil K. Jain (MSU)

The purpose of this study is to consolidate longitudinal face images of children from multiple universities in order to study facial recognition under aging in children. The American Association of Orthodontists Foundation (AAOF) supported the collection of longitudinal craniofacial growth records (x-rays and photographs) at 9 institutions, and several other institutions have run similar projects. These images were captured at 6-month intervals up to 18 years of age. The primary effort of this project is to travel to each of these institutions and digitize the photographs in a consistent and controlled manner.

Automated Face Image Quality Assessment Using a Learning-based Approach

Guodong Guo (WVU)

Face recognition (FR) is an extremely challenging problem in biometrics research, especially in real-world applications. The performance of face recognition systems is tightly coupled with the quality of face images, which can vary significantly across imaging sensors, compression techniques, video frames, and image acquisition conditions and times. In terms of quality, face images may contain many more variations than fingerprint or iris images; however, face image quality assessment is not yet well studied [3]. There is a real need for a workable technique that can characterize and assess face image quality automatically, quickly, and robustly. Quality measures may aid the development of face recognition algorithms, and the selected good-quality face images can be used to improve recognition performance. Traditional approaches to face image quality measurement either use certain facial properties, e.g., resolution, pose, or illumination parameters, to quantify quality [1], or compare against “reference” images and measure quality by the discrepancies [2]. Both kinds of approaches are inflexible and lack applicability [3] to the variety of face images encountered in practice. In this project, we will develop a new technique called “learning to rank,” which can take advantage of machine learning techniques [3] and human perception capabilities [4]. This project focuses on face images, but the developed methods could be extended or adapted to other biometric modalities, e.g., fingerprint and iris.
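
The "learning to rank" idea can be sketched with a standard pairwise reduction: ranking is turned into binary classification on feature differences (a RankSVM-style construction). The sketch below is purely illustrative, using synthetic features and scikit-learn's LinearSVC as a stand-in; the actual method in [3, 4] may differ.

```python
# Illustrative "learning to rank" for image quality: train a linear
# ranker on pairwise preferences (image A has better quality than B)
# by classifying the sign of feature-difference vectors.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy features: each face image is summarized by a feature vector
# (stand-ins for, e.g., sharpness, contrast, pose deviation). The
# "true" quality is a hidden linear function we try to recover.
true_w = np.array([1.0, 0.5, -2.0])
X = rng.normal(size=(200, 3))
quality = X @ true_w

# Build pairwise training data: difference vectors labeled by which
# image of the pair has the higher quality.
i, j = rng.integers(0, 200, size=(2, 500))
keep = quality[i] != quality[j]
diffs = X[i[keep]] - X[j[keep]]
labels = (quality[i[keep]] > quality[j[keep]]).astype(int)

ranker = LinearSVC(C=1.0).fit(diffs, labels)

# The learned weight vector induces a quality score for any new image.
scores = X @ ranker.coef_.ravel()
agreement = np.mean((scores[i] > scores[j]) == (quality[i] > quality[j]))
print(round(agreement, 2))
```

The learned scoring function can then rank candidate face images by predicted quality, which is the property the project needs for selecting good-quality images.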

 

Partial Face Matching Across the Infrared Band

Thirimachos Bourlai (WVU), Xin Li (WVU), Natalia Schmid (WVU), and Stephanie Schuckers (Clarkson)

Recent interest in heterogeneous biometric recognition is motivated by the ability of active infrared (IR) cameras to “see” at night, through fog and rain, and under other challenging conditions. Matching partial heterogeneous face images against a gallery of visible images is a special case of heterogeneous biometric recognition. It is a challenging open research problem that is also well justified by many practical face recognition cases, e.g., where face images of non-cooperative subjects are captured in difficult environments with variable distances and illumination conditions. Our team has recently developed reliable solutions for heterogeneous matching of near-frontal (within 20 degrees) face images [3, 4]. As we demonstrated in [5], the same algorithms can be applied to successfully match partial heterogeneous face regions. In this proposal, we will focus on further exploring the possibility of matching parts of faces captured in different bands and conditions. Three main tasks are stated in the “Experimental Plan” section below.

Noncooperative Human Identification by Face and Clothing

Gianfranco Doretto (WVU)

Searching for a person in a large video archive, possibly composed of videos acquired by a network of surveillance cameras, is a fundamental task because it allows tracing when and where a person was present in the scene. For instance, it can save hours of manual video inspection when finding and tracing the presence of criminals, like those in the Boston Marathon bombings. In such a scenario, quite often the human identification task can only rely on biometric traits acquired in unconstrained conditions from noncooperative subjects. More specifically, this means that both gallery and probe images of someone’s trait may be heavily corrupted by noise or other nuisance factors, such as the pose of the individual, illumination, and occlusions. In such hostile conditions, human identification is typically attempted via face recognition (if enough image resolution is available). Although face recognition has improved significantly, even with corrupted probe and gallery images, there are still plenty of unfriendly scenarios where it lacks robustness. Hence, there is a critical need to reduce those cases in order to increase the effectiveness of human identification for searching unconstrained video archives. The main goal of this project is to improve human identification by fusing the face modality with a pseudo-modality such as the clothing appearance of individuals. Using multiple biometric modalities is an effective approach for increasing identification robustness. Besides face, other viable options for the considered scenario include gait and clothing appearance. The extraction and characterization of human gait is not practical unless specific conditions are met, which are typically too restrictive. Clothing appearance is not a biometric trait; however, its effectiveness in matching the identity of people who have not changed their clothes between sightings has been demonstrated [1], which is why it was chosen.
The objectives of this research include (i) developing a human identification approach jointly based on the face modality and the pseudo-modality of clothing appearance; (ii) improving our current single-modality identification approaches based on face and on clothing appearance; and (iii) demonstrating the performance of the joint face-clothing approach on our surveillance video archive.

 

A Cloud-based Biometric Service Model for Iris and Ocular Recognition Using a Smartphone

Arun Ross (MSU), Adjeroh (WVU), and Matt Valenti (WVU)

In this project, we will design methods to perform iris/ocular recognition in a smartphone environment using cloud-based services. Since smartphone cameras typically capture RGB (i.e., color) images of the iris, rather than near-infrared (NIR) images, the task of iris recognition is challenging, especially for dark-colored irides. Therefore, the proposed method will combine iris with periocular information to improve recognition accuracy. However, the processing of iris and periocular images can be computationally intensive. For example, iris segmentation, a critical step in iris recognition, can be a computational bottleneck. In a resource-limited environment, such computationally intensive procedures can be outsourced to the cloud. Since algorithms for iris and periocular recognition are constantly evolving and being improved, the solutions produced by developers need to be made available to users quickly, with the developers themselves receiving financial incentives for innovation. Thus, in this project we will test a deployment model whereby developers upload their algorithms to the cloud and receive credit for their use. Depending upon the input iris/periocular image, a particular algorithm will be dynamically selected and credit rendered to the corresponding developer. In this context, the project will explore interfaces for developers to upload their algorithmic solutions to the cloud. This biometrics-as-a-service approach will also make the resulting system independent of any specific smartphone or iris/ocular-processing algorithm.
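
The iris-plus-periocular combination described above is often realized as score-level fusion. The sketch below is a minimal illustration (not the project's actual fusion rule): two normalized matcher scores are combined by a weighted sum, with the weights chosen here as arbitrary placeholders.

```python
# Illustrative score-level fusion of iris and periocular matchers:
# each matcher returns a normalized similarity score, and a weighted
# sum combines them. Weights here are placeholder assumptions.
def fuse_scores(iris_score: float, ocular_score: float,
                w_iris: float = 0.6, w_ocular: float = 0.4) -> float:
    """Weighted-sum fusion of two similarity scores in [0, 1]."""
    if not (0.0 <= iris_score <= 1.0 and 0.0 <= ocular_score <= 1.0):
        raise ValueError("scores must be normalized to [0, 1]")
    return w_iris * iris_score + w_ocular * ocular_score

# A dark-colored iris imaged in RGB may yield a weak iris score that
# the periocular score compensates for.
fused = fuse_scores(0.45, 0.80)
print(round(fused, 2))  # 0.59
```

In a deployed system the weights would be tuned on validation data (or replaced by a learned fusion rule), and the per-algorithm credit accounting would happen in the cloud service around calls like this.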

Key Frame Analysis for Face in Video Recognition

Chen Liu, Dan Rissacher (Clarkson)

Face-in-video recognition has recently gained great attention due to the demand arising from video surveillance and other applications. Performing real-time face tracking on live streaming video or on a large video repository already poses a great computational challenge; adding face recognition on top of this makes the situation even more demanding. To address these challenges, we propose to construct a face-in-video recognition research prototype. At the first stage, we will utilize our GPU-accelerated face detection and quality analysis to extract key frames. At the second stage, we will utilize the extracted key frames for face recognition and matching. We anticipate this project will pave the way toward designing innovative face-in-video recognition systems that can be referenced by both industry and government agencies for such applications.
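
The first-stage idea (score frames, keep the best) can be sketched on the CPU; the project itself uses GPU-accelerated detection and a richer quality analysis. Here the quality proxy is a standard sharpness measure, the variance of the Laplacian, and the frames are synthetic arrays.

```python
# Minimal key-frame selection sketch: score each frame by a cheap
# quality proxy (variance of a 4-neighbor Laplacian, a common
# sharpness measure) and keep the top-k frames as key-frame candidates.
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Variance of a 4-neighbor Laplacian of a grayscale frame."""
    lap = (-4.0 * frame[1:-1, 1:-1]
           + frame[:-2, 1:-1] + frame[2:, 1:-1]
           + frame[1:-1, :-2] + frame[1:-1, 2:])
    return float(lap.var())

def select_key_frames(frames, k=2):
    """Return the (sorted) indices of the k sharpest frames."""
    scores = [sharpness(f) for f in frames]
    return sorted(np.argsort(scores)[-k:].tolist())

rng = np.random.default_rng(1)
# A smooth gradient image has near-zero Laplacian variance; a frame
# with high-frequency detail scores much higher.
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
noisy1 = rng.normal(size=(64, 64))
noisy2 = rng.normal(size=(64, 64))
frames = [smooth, noisy1, smooth, noisy2]
print(select_key_frames(frames, k=2))  # [1, 3]
```

In the prototype, the per-frame score would come from face detection confidence plus quality analysis rather than raw sharpness, but the select-then-match structure is the same.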

Biometric Aging in Children

Daniel Rissacher, Stephanie Schuckers, Laura Holsopple (Clarkson) and Patty Rissacher (Canton-Potsdam Hospital)

In this project we will collect biometrics (fingerprint, footprint, iris, face, hand vein, voice) from children aged 0-18 years over multiple visits. The data will be analyzed toward multiple goals: 1) the earliest age at which each modality is viable; 2) the variability of each modality with age; 3) models that account for age variations; and 4) age measurement metrics. The search for age-determination metrics will include data already obtained with a modality (e.g., iris features, vein measurements) and data that could easily be collected simultaneously (e.g., pupil or eyeball size).

Improving User Perceptions of Identification/Authentication Technologies: Empowering Users with Control to Reduce Privacy Concerns

David Wilson, Jeffrey Proudfoot, Ryan Schuetzler, Bradley Dorn, Joe Valacich (UA)

New technologies designed to improve identification and authentication accuracy are continuously developed and adopted for use by government agencies conducting security operations. Interactions with these systems are often mandatory, raising privacy concerns about the data that are collected. These technologies are often designed and implemented with little emphasis on how user perceptions of them may influence their performance. We propose that a new area of research should be emphasized, namely, how to reduce the anxiety and concern of individuals regarding the disclosed information. This project examines one such strategy: measuring the effect of restoring individuals’ perceived control over the disclosed information. The approach was pilot-tested during the last 6-month CITeR period, and this proposal is designed as an expansion of that work with a broader sample to generate more generalizable findings.

 

Facial Rigidity During Deception

Judee Burgoon, Steven J. Pentland (UA), Nathan W. Twyman, (Missouri University of Science and Technology)

A previous CITeR project (Deception Detection Using Computer Expression Recognition) demonstrated a lack of emotional facial intensity in deceivers during CIT questions. This lack of emotional intensity suggests a type of facial rigidity during deception. The proposed project will further investigate this phenomenon and determine whether facial rigidity is a reliable cue to deception. As a further innovation, the investigation will test whether facial rigidity behaviors are more accurate cues to deception than emotional indicators. If so, facial rigidity could be a more feasible approach to deception detection in emotionally charged environments such as airports and border crossings.

Probabilistic Modeling of Facial Recognition and Tracking Coverage in Smart Spaces Using Actuated Camera Networks

Karthik Dantu, Srirangaraj Setlur, Venu Govindaraju (UB)

We intend to build privacy-preserving facial recognition and tracking algorithms for smart spaces using a coordinated network of pan-tilt-zoom (PTZ) cameras. A key problem in achieving good tracking is the cameras’ ability to cover the space at all times. To achieve this objective, we propose to build a probabilistic model of facial recognition in the field of view of each camera and use it to build coverage models of the smart space. While there is a wealth of facial recognition algorithms, and some study of camera coverage, there has been little work combining them to cover smart spaces. We will demonstrate a coverage model that allows us to reason about coverage in smart spaces specifically for the recognition and tracking of individuals while maintaining their privacy.

Assessing the Match Performance of Non-Ideal Operational Facial Images Using 3D Image Data

Jeremy Dawson (WVU)

Many operational face recognition scenarios involve matching non-ideal probe images, captured at a variety of pose angles and potentially with part of the face occluded, against all images (e.g., mug shots) in a gallery database. The uncontrolled nature of the non-idealities present in operational data makes it challenging to assess the impact of pose angle on face recognition performance. WVU has recently collected a large number of 3D face images using the 3dMDface system manufactured by 3dMD. These systems are capable of capturing high-resolution 3D facial imagery and are considered the standard reference systems for 3D anthropometric analysis. The 3dMDvultus software that accompanies the system allows 2D stills to be extracted from the 3D images at a wide variety of roll, pitch, and yaw angles. For this work, the ability to generate controlled pose-angle variation for a single subject within existing datasets will be used to assess the match performance of non-ideal facial images in commercial and custom face matchers.
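
The underlying geometry of extracting a 2D still at a chosen pose can be sketched in a few lines (3dMDvultus works with full textured meshes; this toy version rotates a bare 3D landmark cloud and projects orthographically, which is an illustrative simplification).

```python
# Toy pose-rendering sketch: rotate 3D landmarks by roll/pitch/yaw
# and drop the depth axis to obtain a 2D still at that pose angle.
import numpy as np

def rotation(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """Combined rotation matrix (Rz @ Rx @ Ry), angles in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # roll
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw
    return Rz @ Rx @ Ry

def project(points3d: np.ndarray, roll=0.0, pitch=0.0, yaw=0.0):
    """Rotate Nx3 points and project orthographically to 2D."""
    rotated = points3d @ rotation(roll, pitch, yaw).T
    return rotated[:, :2]

# Example: a forward-pointing landmark viewed at 30 degrees of yaw
# shifts horizontally by sin(30 degrees) = 0.5.
nose = np.array([[0.0, 0.0, 1.0]])
print(np.round(project(nose, yaw=np.radians(30)), 3))
```

Sweeping roll/pitch/yaw over a grid yields the controlled pose-angle variation for a single subject that the match-performance study relies on.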

Video-based Face Recognition with Quality Measures

Guodong Guo

Video-based face recognition (VFR) has recently become an active research area in academia, government agencies, and industry, especially after the Boston Marathon bombings in April 2013. Compared to traditional face biometrics, which mainly focuses on still image-based face recognition (SFR), video-based face recognition raises special and important issues that need to be addressed. For example, video frames may contain faces at low resolution, of low quality, and at long distances (to the surveillance cameras), while mugshot face photos (still images) are usually captured at high resolution, with high quality, and at short distances (to the camera sensors). As a result, some classical methods for still image-based face recognition may not work well for video-based FR. On the other hand, there are usually a number of face images in videos for each subject, spanning different resolutions, qualities, camera distances, head poses, illumination conditions, and motion blurs. This is very distinct from mugshot photos, where only a small number of face photos are available, often captured in controlled environments. A key observation is that NOT all of these frames are necessary or useful for video-based FR. Using all face images, including images of poor quality, can degrade FR performance [1]. This raises the question: how can appropriate frames be found in face videos effectively for better recognition performance? A promising approach is to develop efficient quality measures, in a broad sense, in order to find good frames of high quality while removing frames of poor quality. In this project, we propose to develop a workable method for improved face recognition in videos based on effective quality and appropriateness measures.
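
One simple way to act on the observation that poor frames hurt performance is quality-weighted score fusion. The sketch below is a hypothetical illustration (the function, threshold, and numbers are assumptions, not the project's method): frames below a quality threshold are dropped, and the rest contribute to the final match score in proportion to their quality.

```python
# Hypothetical fusion step: weight per-frame match scores by frame
# quality, ignoring frames below a quality threshold, instead of
# averaging over all frames (which can degrade performance).
import numpy as np

def fuse_frame_scores(scores, qualities, q_min=0.3):
    """Quality-weighted mean of match scores over frames with q >= q_min."""
    scores = np.asarray(scores, dtype=float)
    qualities = np.asarray(qualities, dtype=float)
    keep = qualities >= q_min
    if not keep.any():
        return float(scores.mean())  # fall back to a plain average
    w = qualities[keep] / qualities[keep].sum()
    return float((w * scores[keep]).sum())

# Two sharp frames agree with the gallery; one blurry frame does not,
# and the blurry frame's low quality keeps it out of the final score.
print(round(fuse_frame_scores([0.9, 0.85, 0.2], [0.8, 0.9, 0.1]), 3))
```

The same structure accommodates richer "appropriateness" measures: anything that scores a frame can replace the quality values here.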

A Forensic Approach to Iris Analysis 

Arun Ross (MSU), Matthew Valenti (WVU)

What else does your iris image reveal? This project will adopt an image-forensic framework to assess the various types of additional information that can be gleaned from an iris image: (a) information pertaining to the individual: e.g., age, gender, ethnicity, and other “soft biometric” information; (b) information pertaining to the ocular region: e.g., pupil dilation level, presence/absence of a contact lens; (c) information pertaining to the iris anatomy: e.g., distribution of crypts, Wolfflin nodules, pigmentation spots; (d) information pertaining to the environment: e.g., sensor used, illumination employed, indoor/outdoor setting; and (e) information pertaining to health: e.g., stromal atrophy. The goal is to extract as much information as possible from an iris image to better characterize (a) the subject (e.g., “Is this iris from a male Asian?”), (b) the iris texture (e.g., “Does this iris have an unusual number of Wolfflin nodules?”), and (c) the acquisition environment (e.g., “What type of illumination could have been used to acquire this iris image?”). This would result in a comprehensive description of the iris that can enhance the forensic utility of the trait, thereby facilitating its use in a court of law. Such a description can also benefit existing iris recognition systems, for instance, by suggesting new ways to store and encrypt iris codes and to guide the search process in large iris databases.

Interoperability of Fingerprint Spoof Detectors Across Different Sensors and Materials

Arun Ross (MSU), Stephanie Schuckers (CU)

Most software-based fingerprint spoof detectors are learning-based and are therefore influenced by the materials and sensors used during the training stage. The performance of such detectors has been shown to degrade significantly when they encounter spoof images made from previously unseen materials or acquired using a different sensor than the one used during training. In this work, we will design methods to improve the generalization ability and interoperability of spoof detection algorithms across different materials and sensors. We will approach this problem in two ways. In the first approach, we will use Bayesian networks to model the influence of spoof materials and sensors on a fingerprint image. The Bayesian network will then be used to “factor out” sensor-specific and material-specific characteristics, thereby improving the generalization ability of the spoof detector. In the second approach, we will design a one-class classifier that uses training data to model a live fingerprint, so that the ensuing classifier can detect spoofs made of any material and imaged by any sensor. To evaluate the efficacy of the proposed approaches, a significantly larger spoof dataset will be collected using a variety of materials (>15 materials) and sensors. The purpose will be to determine whether there is a subset of spoof materials which, when used in the training stage, results in a spoof detector that is more “robust” when tested on previously unseen materials.
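
The second approach can be sketched with an off-the-shelf one-class SVM. Everything below is illustrative: the features are synthetic stand-ins for real fingerprint texture descriptors, and the separation between live and spoof clusters is exaggerated for clarity.

```python
# One-class sketch: train only on live-fingerprint features, so spoofs
# from any (unseen) material or sensor fall outside the learned live
# distribution. Features here are synthetic placeholders.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)

# "Live" training features cluster tightly; spoofs drift away.
live_train = rng.normal(loc=0.0, scale=0.5, size=(300, 8))
live_test = rng.normal(loc=0.0, scale=0.5, size=(50, 8))
spoof_test = rng.normal(loc=3.0, scale=0.5, size=(50, 8))  # unseen material

detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(live_train)

live_acc = (detector.predict(live_test) == 1).mean()     # 1 = "live"
spoof_acc = (detector.predict(spoof_test) == -1).mean()  # -1 = "spoof"
print(live_acc, spoof_acc)
```

The appeal of this design is exactly what the paragraph states: no spoof material ever appears in training, so there is nothing sensor- or material-specific to overfit to.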

Validating the Representativeness of Samples from Sequestered Biometrics Data Sets (Phase II)

Mark Culp, Kenneth Ryan, Jeremy Dawson (WVU), Bojan Cukic (UNCC)

As the application of biometrics in identity management systems grows in scale, accurate performance prediction requires the use of adequate population and data samples. Such samples can be assembled through careful data collection or through informed selection from existing biometric data sets. The proposed research will concentrate on the second approach: validating the representativeness of a given test sample for performance prediction on a large, but possibly sequestered, identity management data set. Such a test sample could be created from the sequestered data set or come from outside sources. The representativeness of biometric samples for performance prediction has, somewhat surprisingly, not been directly addressed in research. Because biometric technologies deal with human subjects, collections are relatively expensive and rely upon convenience sampling [1]. But recent literature clearly indicates that the performance of biometric algorithms can be biased by gender, age, and even ethnic-group participation in the sample [2]. We have exploited this observation in prior research with the goal of minimizing the sample size needed for performance prediction of face recognition through stratified sampling [3]. A proactive method for performance prediction from random sampling, using the variability control technique generally known as reliability assessment charts, allows biometric performance assessment [4], but it does not guide the selection of adequate test samples (human subjects). Regardless of the approach, if bias is present in the test sample, the statistical projections of system performance will likely be misleading and lead to inaccurate conclusions. For this project, we have been offered access to a large biometric repository of identity management data, Data Set A, in three modalities: face, fingerprints, and iris.
We will work together with the custodians of the sequestered US Government identity management data set, Data Set B. For each modality, starting with face, we plan to identify the biometric quality and population factors that drive the distribution of match scores in Data Sets A and B. We will achieve this using a minimal sample size from the sequestered Data Set B. We will develop a statistical definition of biometric data set “representativeness” with respect to the ability to predict performance within specified error bounds.
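
One simple way to operationalize "representativeness" of match-score distributions, offered purely as an illustration and not as the project's statistical definition, is a two-sample Kolmogorov-Smirnov test between a candidate test sample and the full data set. The score distributions below are synthetic Beta-distributed proxies.

```python
# Illustrative representativeness check: compare the match-score
# distribution of a candidate sample against the full data set with a
# two-sample Kolmogorov-Smirnov test. Distributions are synthetic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)

full_scores = rng.beta(8, 2, size=10_000)   # genuine-score proxy
fair_sample = rng.beta(8, 2, size=500)      # drawn from same population
biased_sample = rng.beta(5, 2, size=500)    # e.g., a demographic skew

print(ks_2samp(full_scores, fair_sample).pvalue > 0.05)    # likely True
print(ks_2samp(full_scores, biased_sample).pvalue < 0.05)  # True
```

A rejected test flags the kind of sampling bias the paragraph warns about, though a full definition would also have to account for the quality and population factors driving the score distribution.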

Context-aware Anomaly Detection in Internet of Biometric Things (IoBT)

Kantarci, M. Erol-Kantarci, Stephanie Schuckers (CU)

This project couples biometric and context-aware authentication techniques to protect mobile applications from unauthorized access by malicious users. Connected user devices, and the mobile applications that run on them, are prone to security vulnerabilities as a result of unauthorized access. This project strengthens existing password- and fingerprint-based authentication by incorporating knowledge-based spatiotemporal abstraction to obtain context such as location, cellular data use, WiFi data use, and the number, timestamps, and durations of received/placed calls and SMS messages. The developed anomaly detection software will enable fraud identification using context awareness and biometrics on mobile platforms.
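
A context-anomaly detector of this kind can be sketched with a standard anomaly-detection model. The features below (distance from usual location, data usage, call counts) and all numbers are illustrative assumptions, not the project's actual feature set.

```python
# Hedged sketch: model "normal" usage context with an IsolationForest
# and flag departures as anomalies. Features are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)

# Normal context: user near usual location, modest data use, few calls.
normal = np.column_stack([
    rng.normal(0, 1, 800),    # distance from usual location (km)
    rng.normal(50, 10, 800),  # daily cellular data (MB)
    rng.normal(5, 2, 800),    # calls per day
])
model = IsolationForest(random_state=0).fit(normal)

usual_day = np.array([[0.5, 48.0, 4.0]])
stolen_phone = np.array([[120.0, 900.0, 40.0]])  # far away, heavy use

print(model.predict(usual_day), model.predict(stolen_phone))  # [1] [-1]
```

In the proposed system the anomaly score would be combined with the password/fingerprint decision rather than used alone, so a contextual anomaly triggers stronger authentication instead of an outright lockout.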

Development and Validation of a Standard Test of Deception Detection Accuracy

Judee Burgoon (UA), J. Hall (Northeastern U), Jim Marquardson (UA/MTU)

Research on deception detection has shown that judges, even professionals, are generally poor at detecting deception. Whereas a wide range of standardized tests is available to measure judgment accuracy in skills such as emotion recognition, emotional intelligence, and nonverbal sensitivity, no standardized measure is available to quantify deception detection accuracy. What is needed is a standard test that can be used to compare research findings and to certify the proficiency of, for example, law enforcement and security personnel after training. Such a test could also be used for personnel selection. This research will apply psychometric methods to developing and validating a robust, multimodal deception detection test.

 

The Effect of Perceived Behavior Surveillance on Voluntary Disclosure

Bradley Dorn, L. Spitzley (UA)

New screening technologies designed to investigate both the content and the associated behavioral cues communicated during a screening are becoming more common. As individuals become more aware of the existence of these technologies and the types of analyses they use, how will this affect the ways individuals interact with these systems? We propose to research the effect that soft-biometric surveillance salience has on individuals’ willingness to voluntarily disclose information.

 

mBrain Password: Developing the EEG-based User Authentication System on Mobile Devices

Xu, Srirangaraj Setlur (UB)

Our team is involved in an ongoing NSF-supported (SaTC) project, Brain Password, investigating a secure and trustworthy user authentication approach based on non-volitional brain behaviors. Currently, the Brain Password system is built, tested, and evaluated on PC systems. In this project, we propose to port the Brain Password system to mobile platforms. The new technical efforts include: 1) implementing the EEG user authentication system on the mobile platform; 2) effectively removing the EEG artifacts introduced by mobility; and 3) user evaluation.

Secure deep learning algorithms for biometric matching

Jesse Hartloff, Rohit Pandey (UB)

One of the major obstacles in biometric template protection is the inability to accurately estimate the distribution of the biometric features. Many security proofs require these features to be uniformly distributed and uncorrelated, which is far from the truth. For this project, we propose to address this issue by applying deep learning to transform the features into a uniform code that is then hashed and stored on a server. In addition to being able to store secure templates, we have seen an increase in accuracy compared to other template-security schemes for facial recognition. In this project, we will test this method of secure deep learning on face and fingerprint biometrics.
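
The transform-then-hash pipeline can be sketched end to end. This is a hedged illustration: a fixed random projection stands in for the trained deep network, and binarization by sign stands in for the learned uniform code, so only the overall structure (raw template never stored, only a one-way hash) matches the proposal.

```python
# Sketch of template protection by transform-then-hash: map a feature
# vector to a stable binary code (here via a random projection, a
# stand-in for the trained network), then store only a SHA-256 digest.
import hashlib
import numpy as np

rng = np.random.default_rng(0)
projection = rng.normal(size=(128, 32))  # placeholder for the network

def protected_template(features: np.ndarray) -> str:
    """Binarize projected features and return a one-way hash."""
    code = (features @ projection > 0).astype(np.uint8)
    return hashlib.sha256(code.tobytes()).hexdigest()

enroll = rng.normal(size=128)
probe = enroll + rng.normal(scale=1e-6, size=128)  # tiny sensor noise
impostor = rng.normal(size=128)

print(protected_template(enroll) == protected_template(probe))     # True
print(protected_template(enroll) == protected_template(impostor))  # False
```

The hard part, which the deep learning component addresses, is making the binary code stable under realistic (much larger) intra-subject variation while keeping it uniform and uncorrelated enough for the security proofs.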