2016 Projects

Analysis of Vocal and Eye Behavior Countermeasures for Automated Border Control Applications
Aaron Elkins (UA/SDSU), Dmitry Gorodnichy (CBSA), Bradley Walls (UA), Jeffery Proudfoot (Bentley), Nathan Twyman (MST), & Judee Burgoon (UA).

Temporal Analysis of Non-Contact Biometric Modalities to Detect Document Fraud
Jay Nunamaker, Brad Walls (UA)

Microfluidic 3D Capillary Network Test Phantom for Subdermal Vascular Imaging
Kwang Oh (UB)

Accelerated Rapid Face in Video Recognition
Chen Liu, Stephanie Schuckers (CU)

Semantic Face Index
Don Adjeroh, Gianfranco Doretto, Jeremy Dawson (WVU)

Understandable Face Image Quality Assessment
Guodong Guo (WVU)

Cross Audio-to-Visual Speaker Identification in the Wild Using Deep Learning
Jeremy Dawson, Nasser Nasrabadi (WVU)

Nonlinear Mapping using Deep Learning for Thermal-to-visible Night-Time Face Recognition
Thirimachos Bourlai (WVU), Nasser Nasrabadi (WVU), Lawrence Hornak (UGA)

Validation of Biometric Identification of Dairy Cows based on Udder Vein Images
Stephanie Schuckers, Sean Banerjee (CU)

Biometric-Enabled Interview-Assisting Traveller Screening (IATS) Technology for True Identity and Intention Recognition in Automated Border Control (ABC)
Burgoon (UA), Nunamaker (UA), Gorodnichy (CBSA), Erickson (UA)

Data Mining of Social Media Sites to Create Customized Diagnostic Questions for Deception Detection
B. Walls, J. Nunamaker (UA)

Evolutionary Identification and Tracking in Public Spaces
Devansh Arpit, Karthik Dantu, Srirangaraj Setlur, Venu Govindaraju

Incorporating Biological Models in Iris Anti-Spoofing Schemes
T. Bourlai, A. Clark, A. Ross, S. Schuckers

Large Scale Face Recognition
Guodong Guo (WVU)

Longitudinal Collection of Child Face Photographs
S. Schuckers

Metagenomic Data Analytics for Human Identification
Jeremy M. Dawson, Donald Adjeroh

Recognizing Faces from Low Quality Photos
Guodong Guo (WVU), Xin Li

UAS (Unmanned Aerial System) Person Identification for Tracking, Following or Autonomous Enforcement of Exclusionary Safety Zones Around Humans
Dan Rissacher (CU), Nils Napp, Srirangaraj Setlur, Venu Govindaraju (UB)

Use of body-worn video cameras for facial recognition used by law enforcement and military
N. M. Nasrabadi (WVU), G. Doretto


Analysis of Vocal and Eye Behavior Countermeasures for Automated Border Control Applications
Aaron Elkins (UA/SDSU), Dmitry Gorodnichy (CBSA), Bradley Walls (UA), Jeffery Proudfoot (Bentley), Nathan Twyman (MST), & Judee Burgoon (UA).
This project is a follow-on to the current IATS4ABC CITeR project, conducting secondary analysis of the vocal and eye-tracking data gathered during the testing of AVATAR at CBSA. It will include statistical analysis of the distributions of the measures, their degree of homogeneity, and whether anomalies can discriminate between innocent and guilty passengers.
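
As a sketch of the kind of secondary analysis planned, the following Python snippet compares the distribution of a single vocal or ocular measure between innocent and guilty groups; the data and the specific tests are illustrative assumptions, not the project's finalized protocol.

```python
# Hypothetical secondary analysis of one AVATAR measure (e.g., pitch
# variance): summarize the two groups' distributions and quantify how well
# the measure separates them. All names and data are placeholders.
import numpy as np
from scipy import stats

def discriminability(innocent: np.ndarray, guilty: np.ndarray) -> dict:
    """Distribution summaries and separability for a single measure."""
    # Mann-Whitney U: distribution-free test that the groups differ.
    u_stat, p_value = stats.mannwhitneyu(guilty, innocent,
                                         alternative="two-sided")
    return {
        "innocent_mean": float(np.mean(innocent)),
        "guilty_mean": float(np.mean(guilty)),
        # Levene's test probes homogeneity of variance across groups.
        "levene_p": float(stats.levene(innocent, guilty).pvalue),
        "mannwhitney_p": float(p_value),
        # U / (n1 * n2) is an AUC: P(guilty score > innocent score).
        "auc": float(u_stat / (len(guilty) * len(innocent))),
    }

# Synthetic stand-in data: 200 innocent and 50 guilty passengers.
rng = np.random.default_rng(0)
print(discriminability(rng.normal(0.0, 1.0, 200), rng.normal(0.4, 1.2, 50)))
```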


Temporal Analysis of Non-Contact Biometric Modalities to Detect Document Fraud
Jay Nunamaker, Brad Walls (UA)
Our society is in the midst of a fraudulent document crisis. According to a January 2016 Politico article, Europe's trade in fraudulent (i.e., forged and stolen) passports is so out of control that the U.S. has given five European Union (EU) countries an ultimatum to act or risk losing visa-free travel rights. This is not a new problem; in fact, the same Politico article presents Interpol data showing that lost and stolen travel documents numbered 15 to 16 million in 2010, with the problem growing to more than 50 million in 2015. This project will evaluate non-contact biometric modalities, coupled with an automated interviewing system, that could be used in numerous border crossing scenarios to assist with the detection of fraudulent travel documentation.


Microfluidic 3D Capillary Network Test Phantom for Subdermal Vascular Imaging
Kwang Oh (UB)
Leveraging the expertise of the Sensors and MicroActuators Learning Lab (SMALL) at the University at Buffalo in both microfluidics and test phantoms, we will create a physiologically accurate model of the human finger. It will be acoustically, electrically, and optically equivalent to a human finger. This finger test phantom will include dermatoglyphic features, such as ridge-valley structures, as well as digital arteries, bone, fat, muscle, and a fully functioning three-dimensional (3D) capillary network. Subdermal vascular networks can be used as another biometric modality, providing a high level of security. A controlled test phantom (i.e., with known blood flow, heart rate, bone structure, fat and muscle thickness, and capillary design) will allow for advanced sensor/algorithm testing, validation, and calibration.


Accelerated Rapid Face in Video Recognition
Chen Liu, Stephanie Schuckers (CU)
Face-in-video recognition has recently gained great attention due to applications arising from video surveillance and other needs. Performing real-time face tracking on a live surveillance video stream while simultaneously performing face recognition poses a great computational challenge. We propose to attack this problem from both the algorithm-design and hardware-acceleration sides. We propose an innovative key-frame extraction algorithm based on improved quality analysis of facial components in the video stream; the face(s) extracted from key frames will then be used for recognition and matching. We will focus on the quality analysis of faces in each frame, a critical step toward improving recognition speed, since good quality analysis reduces the amount of data the face recognition engine must process. We will employ a Graphics Processing Unit (GPU) to accelerate both the key-frame extraction and face recognition algorithms by extracting thread-level and data-level parallelism, in order to meet real-time requirements on mobile platforms and achieve faster-than-real-time processing on server platforms. We anticipate this project will pave the way toward innovative face-in-video recognition systems that can be referenced by both industry and government agencies.
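
A minimal sketch of the quality-driven key-frame idea follows; OpenCV's stock Haar detector and a Laplacian-variance sharpness score stand in for the proposed facial-component quality analysis, which is the actual research contribution.

```python
# Quality-driven key-frame extraction: keep the best frame per window,
# scored by the sharpness of its largest detected face.
import cv2

_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_sharpness(frame) -> float:
    """Sharpness of the largest detected face; 0 if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = _CASCADE.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return 0.0
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face box
    # Variance of the Laplacian is a common focus/quality proxy.
    return cv2.Laplacian(gray[y:y + h, x:x + w], cv2.CV_64F).var()

def key_frames(video_path: str, window: int = 30):
    """Select one best-quality frame from each fixed-size window."""
    cap, best, selected, i = cv2.VideoCapture(video_path), None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        score = face_sharpness(frame)
        if best is None or score > best[0]:
            best = (score, frame)
        i += 1
        if i % window == 0:
            if best[0] > 0:                # only windows containing a face
                selected.append(best[1])   # hand off to recognition engine
            best = None
    cap.release()
    return selected
```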


Semantic Face Index
Don Adjeroh, Gianfranco Doretto, Jeremy Dawson (WVU)
With the increasing use of cheap cameras (e.g., those in cell phones), coupled with the ease of distribution, there has been an exponential increase in the size of available face databases. A key challenge is the ability to search these large databases to identify a face of interest. This is significantly compounded in the case of faces in the wild, where the query face, the database faces, or both may have been captured in a non-constrained, non-ideal environment, with no control over image quality, illumination conditions, pose, partial occlusion, etc. We propose a human-readable indexing scheme to reduce these problems and thus support rapid access to very large-scale face databases. The index is based on descriptors that are understandable to humans, which makes the approach easy to use and suitable for law enforcement applications.
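
To make the indexing idea concrete, here is a hypothetical sketch in which each enrolled face carries human-readable attribute terms and queries intersect posting lists before any expensive face matching runs; the attribute extractor itself, the core of the proposal, is mocked here.

```python
# Illustrative human-readable face index: semantic attribute terms map to
# face IDs, so a textual query narrows a large gallery to a short list.
from collections import defaultdict

class SemanticFaceIndex:
    def __init__(self):
        self._postings = defaultdict(set)   # attribute term -> face IDs

    def add(self, face_id: str, attributes: set[str]):
        for term in attributes:
            self._postings[term].add(face_id)

    def query(self, required: set[str]) -> set[str]:
        """Candidate faces carrying every requested semantic descriptor."""
        ids = [self._postings[t] for t in required]
        return set.intersection(*ids) if ids else set()

index = SemanticFaceIndex()
index.add("face_001", {"male", "beard", "glasses", "round face"})
index.add("face_002", {"female", "blond hair"})
# A law-enforcement style query returns a small, human-checkable candidate set.
print(index.query({"beard", "glasses"}))    # -> {'face_001'}
```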


Understandable Face Image Quality Assessment
Guodong Guo (WVU)
Face recognition (FR) performance is affected significantly by face image quality, especially in real-world applications. Face image quality varies significantly because of different imaging sensors, compression techniques, video frames, and/or image acquisition conditions. It is very challenging to assess face image quality automatically, quickly, and precisely in real-world images [1][2]. Recent studies [2][3] have shown that a learning-based paradigm can do better than traditional heuristic methods; however, all of these approaches output only a single quality score, e.g., 90, for an input face image. A single-value quality score conveys little information to human assessors. Further, many issues have not yet been addressed: What does a quality score mean? How should a quality score be interpreted with respect to imaging conditions? Why does a face image have a quality score of 50 rather than 60? How well do quality scores characterize real face image quality? Can more useful cues (e.g., levels of detail) be acquired to develop a complete representation for face image quality assessment? In this project, we propose a new paradigm, called understandable face image quality assessment, to address these issues. We believe the new paradigm can yield a better solution for quality assessment. The objective of this project is to explore this paradigm and develop a new approach for face image quality assessment with rich information, making quality measures understandable, believable, and more accurate.
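
The following sketch illustrates the intended style of output: a per-factor quality report rather than a single opaque number. The factors, normalizations, and thresholds are illustrative placeholders, not the project's validated measures.

```python
# An "understandable" quality report: each factor is interpretable on its
# own, and the overall score is derived from them rather than being opaque.
import cv2
import numpy as np

def face_quality_report(face_bgr) -> dict:
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    report = {
        # Focus: variance of the Laplacian (low = blurry).
        "sharpness": float(cv2.Laplacian(gray, cv2.CV_64F).var()),
        # Exposure: mean brightness on a 0-255 scale.
        "brightness": float(gray.mean()),
        # Contrast: standard deviation of intensities.
        "contrast": float(gray.std()),
        # Resolution: interocular distance would be better; size is a proxy.
        "height_px": int(gray.shape[0]),
    }
    # A single score can still be derived, but now it is explainable.
    report["overall"] = float(np.mean([
        min(report["sharpness"] / 100.0, 1.0),
        1.0 - abs(report["brightness"] - 128.0) / 128.0,
        min(report["contrast"] / 64.0, 1.0),
    ]))
    return report
```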


Cross Audio-to-Visual Speaker Identification in the Wild Using Deep Learning
Jeremy Dawson, Nasser Nasrabadi (WVU)
Speaker recognition technology has achieved significant performance in some real-world applications. However, performance still degrades greatly in noisy environments. One approach to improving speaker recognition/identification is to combine video and audio sources, linking the visual features of lip motion with vocal features; the two modalities are correlated and convey complementary information. In this project we are not interested in a baseline improvement in speaker recognition; instead, we are interested in identifying an individual face from a coupled video/audio clip of several individuals, based on data collected in an unrestricted environment (in the wild). For this effort, we propose to use the visual lip-motion features of a face in a video clip, together with the co-recorded audio signal features from several speakers, to identify the individual who uttered the audio recorded along with the video. To solve this problem, we propose an auto-associative deep neural network architecture, a data-driven model that does not explicitly model phonemes or visemes (the visual equivalent of phonemes). A speech-to-video auto-associative deep network will be trained to reconstruct the visual lip features given only speech features as input. The visual lip feature vector generated by our deep network for an input test speech signal will then be compared with a gallery of individual visual lip features for speaker identification. The proposed speech-to-video deep network will be trained on our current WVU voice and video dataset, using the corresponding audio and video features from individuals as inputs. For the audio signal we will use Mel-frequency cepstral coefficients (MFCCs), and for video we will extract static and temporal visual features of lip motion.
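
A minimal PyTorch sketch of this speech-to-video mapping is shown below, assuming fixed-length MFCC and lip feature vectors; the dimensions, architecture, and training loop are illustrative assumptions rather than the project's final design.

```python
# Speech-to-lip auto-associative mapping: reconstruct visual lip features
# from MFCC speech features, then identify the speaker by comparing the
# reconstruction against a gallery of per-speaker lip features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeechToLipNet(nn.Module):
    def __init__(self, mfcc_dim=39, lip_dim=64):   # dims are assumptions
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(mfcc_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, lip_dim),
        )

    def forward(self, mfcc):
        return self.net(mfcc)

def train_step(model, optimizer, mfcc_batch, lip_batch):
    optimizer.zero_grad()
    loss = F.mse_loss(model(mfcc_batch), lip_batch)  # reconstruct lip features
    loss.backward()
    optimizer.step()
    return loss.item()

def identify(model, probe_mfcc, gallery_lips):
    """Return the gallery index whose lip features best match the probe."""
    with torch.no_grad():
        predicted = model(probe_mfcc)
        sims = F.cosine_similarity(predicted.unsqueeze(0), gallery_lips)
    return int(sims.argmax())

model = SpeechToLipNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
train_step(model, opt, torch.randn(32, 39), torch.randn(32, 64))  # dummy batch
print(identify(model, torch.randn(39), torch.randn(5, 64)))       # 5 speakers
```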


Nonlinear Mapping using Deep Learning for Thermal-to-visible Night-Time Face Recognition
Thirimachos Bourlai (WVU), Nasser Nasrabadi (WVU), Lawrence Hornak (UGA)
Infrared (IR) thermal cameras are important for night-time surveillance and security applications. They are especially useful in night-time scenarios where the subject is far from the camera. The motivation behind thermal face recognition (FR) is the need for enhanced intelligence-gathering capabilities in darkness, where active illumination is impractical and surveillance with visible cameras is not feasible. However, the acquired thermal face images must be identified using images from existing visible face databases; cross-spectral face matching between the thermal and visible spectra is therefore a much-desired capability. In cross-modal face recognition, identifying a thermal probe image against a visible face database is especially difficult because of the wide modality gap between thermal and visible physical phenomenology. In this project we address the cross-spectral (thermal vs. visible) and cross-distance (50 m, 100 m, and 150 m vs. 1 m standoff) face matching problem for night-time FR applications. Previous research [1]-[2] has mainly concentrated on extracting hand-crafted features (i.e., SIFT, SURF, HOG, LBP, wavelets, Gabor jets, kernel functions) by assuming that the two modalities share the same extracted features; however, the relationship between the two modalities is highly non-linear. In this project we investigate non-linear mapping techniques based on deep neural network (DNN) learning procedures to bridge the modality gap between the visible and thermal spectra while preserving the subject's identity information. The nonlinear coupled DNN features will then be used by an FR classifier.
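
As an illustration of the coupled-DNN idea, the sketch below trains two small embedding networks, one per spectrum, with a contrastive-style loss that pulls genuine thermal/visible pairs of the same subject together; the architecture and loss are assumptions for exposition only, not the project's final design.

```python
# Coupled embedding networks bridging the thermal/visible modality gap:
# same-subject cross-spectral pairs are pulled together in a shared space.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_embedder(out_dim=128):
    # A small CNN over single-channel face chips; each modality gets one.
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, out_dim),
    )

visible_net, thermal_net = make_embedder(), make_embedder()

def coupling_loss(visible_batch, thermal_batch):
    """Contrastive-style loss: same-subject pairs close, others apart."""
    v = F.normalize(visible_net(visible_batch), dim=1)
    t = F.normalize(thermal_net(thermal_batch), dim=1)
    sims = v @ t.T                      # pairwise cosine similarities
    targets = torch.arange(len(v))      # batch is aligned by subject
    return F.cross_entropy(sims / 0.1, targets)

loss = coupling_loss(torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64))
loss.backward()                         # gradients flow into both networks
```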


Validation of Biometric Identification of Dairy Cows based on Udder Vein Images
Stephanie Schuckers, Sean Banerjee (CU)
The purpose of this proposal is to develop a biometric recognition system that identifies cows from a near-infrared (NIR) image of the vein pattern on the udder. Food safety is of critical concern: milk from dairy cows that are infected and/or on antibiotics must be separated from the rest of the herd's milk, which requires recognizing a specific cow. Currently, methods such as RFID tags and ankle bands are used to identify cows. However, ankle bands can be dislodged, and while RFID tags are read when cows enter the milking area, the cows sometimes get out of order when entering the stalls. Confirming identity once the cow is in the stall would be useful, but at that point the RFID tag is far from the back of the cow, where milking takes place. A biometric collected from the back of the cow could confirm its identity at milking time; iris patterns, which have been studied in cows, are likewise too far away. This study will examine the vein pattern of the udder for its biometric recognition properties. There has been extensive study of, and there are commercial products based on, recognition of the vein patterns of the hand, finger, retina, and sclera of the eye; this body of knowledge will inform the development of a system for udder vein recognition in cows. The focus of this project is to assess the uniqueness and permanence of cow udder veins to validate their potential usefulness for recognizing cows.
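
For concreteness, a hypothetical matching pipeline of the sort the validation study would exercise might look as follows, using CLAHE contrast enhancement and ORB keypoint matching as stand-ins for a purpose-built vein matcher.

```python
# NIR vein-pattern matching sketch: enhance subdermal contrast, extract
# local keypoints, and count ratio-test-passing matches between a probe
# and an enrolled template. Parameters are illustrative.
import cv2

def vein_keypoints(nir_gray):
    """Enhance NIR vein contrast, then extract keypoint descriptors."""
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(nir_gray)
    orb = cv2.ORB_create(nfeatures=500)
    return orb.detectAndCompute(enhanced, None)

def match_score(template_gray, probe_gray, ratio=0.75) -> int:
    """Higher score suggests the same cow; calibrate on labeled pairs."""
    _, d1 = vein_keypoints(template_gray)
    _, d2 = vein_keypoints(probe_gray)
    if d1 is None or d2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = [p for p in matcher.knnMatch(d1, d2, k=2)
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)
```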


Biometric-Enabled Interview-Assisting Traveller Screening (IATS) Technology for True Identity and Intention Recognition in Automated Border Control (ABC)
Burgoon (UA), Nunamaker (UA), Gorodnichy (CBSA), Erickson (UA)
When travellers present themselves at ABC kiosks/gates, they provide answers to customs/security questions, and their identity is authenticated using 1-to-1 verification against the picture in their passport; both the answers and the claimed identity can be false when travellers try to enter the country illegally. The project's aim is to automate and improve the recognition of the true identity and intention of travellers in such scenarios through the development of biometric-enabled interview-assisting traveller screening (IATS) technology, which combines biometric matching with biometric-based stress analysis. The project builds on the previous work of the University of Arizona (AVATAR) in an effort to bring this technology into an operational ABC environment.
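
A deliberately simplified sketch of the IATS decision logic appears below: a 1-to-1 face match score is fused with a stress/anomaly score into a referral decision. The weights and threshold are placeholders that would in practice be learned from field data.

```python
# Fusing identity verification with stress analysis into one screening
# decision. Scores are assumed normalized to [0, 1]; higher stress means
# more anomalous. Weights/threshold are hypothetical.

def screen_traveller(match_score: float, stress_score: float,
                     w_match: float = 0.6, w_stress: float = 0.4,
                     refer_threshold: float = 0.5) -> str:
    """Combine a weak identity match and elevated stress into a risk value."""
    risk = w_match * (1.0 - match_score) + w_stress * stress_score
    return "refer to officer" if risk >= refer_threshold else "clear"

print(screen_traveller(match_score=0.95, stress_score=0.2))  # clear
print(screen_traveller(match_score=0.40, stress_score=0.8))  # refer to officer
```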


Data Mining of Social Media Sites to Create Customized Diagnostic Questions for Deception Detection
B. Walls, J. Nunamaker (UA)
Law enforcement has a rich history of training personnel to be adept interrogators. Well-developed training programs within these agencies make officers skilled at developing rapport with interviewed individuals, enabling them to develop a tailored line of questioning based on case-specific observations. These customized lines of questioning serve as powerful techniques for revealing truth and deception. The goal of this project is to enhance the diagnostic questioning capability of the AVATAR system by using social media (i.e., popular news feeds, LinkedIn, Facebook, Twitter, etc.) to collect, in real time, observational data on interviewed subjects. These observational data will then be used to craft customized diagnostic questions for deception detection, based on relevant psychological research techniques and sound law enforcement principles.


Evolutionary Identification and Tracking in Public Spaces
Devansh Arpit, Karthik Dantu, Srirangaraj Setlur, Venu Govindaraju
Our interest lies in person tracking and identification in real-world scenarios. Given a space instrumented with cameras, we aim to improve the identification of a person moving between the overlapping/non-overlapping views of these cameras. Using a previously developed model for recognizing faces under varying conditions (distance and orientation to a camera), we will improve identification performance as the person moves through the space by tracking the person's movements with motion prediction models that evolve simultaneously with the person's identity estimate.
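
One plausible realization, sketched below, pairs a constant-velocity Kalman filter for motion prediction with a Bayesian identity belief updated from per-camera face-match likelihoods; all models and parameters are illustrative assumptions, not the project's final formulation.

```python
# Joint track-and-identify sketch: the filter predicts where the person
# reappears across camera views, while the identity belief sharpens with
# each new face-match observation.
import numpy as np

class TrackedPerson:
    def __init__(self, xy, n_identities):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])   # [px, py, vx, vy]
        self.P = np.eye(4)                            # state covariance
        self.belief = np.full(n_identities, 1.0 / n_identities)

    def predict(self, dt=1.0, q=0.1):
        F = np.eye(4); F[0, 2] = F[1, 3] = dt         # constant velocity
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + q * np.eye(4)

    def update_position(self, z, r=0.5):
        H = np.eye(2, 4)                              # observe position only
        S = H @ self.P @ H.T + r * np.eye(2)
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - H @ self.x)
        self.P = (np.eye(4) - K @ H) @ self.P

    def update_identity(self, likelihoods):
        """Bayes update of the identity belief from face-match likelihoods."""
        self.belief *= np.asarray(likelihoods)
        self.belief /= self.belief.sum()

person = TrackedPerson(xy=(10.0, 5.0), n_identities=3)
person.predict()
person.update_position((10.5, 5.2))
person.update_identity([0.7, 0.2, 0.1])   # scores from the next camera
print(person.x[:2], person.belief)
```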


Incorporating Biological Models in Iris Anti-Spoofing Schemes
T. Bourlai, A. Clark, A. Ross, S. Schuckers
For over a decade, spoof detection (i.e., anti-spoofing) has been an important topic within the biometrics community. In the area of iris anti-spoofing, researchers have focused their attention on either the static properties of the iris tissue (e.g., for detecting contact lenses and printed spoofs) [1] or the dynamic aspects of the iris (e.g., pupil dilation) [2]. However, a comprehensive evaluation is unavailable, since individual approaches address specific factors while ignoring others. This proposal aims to link these different approaches, empirically and biologically, to provide a comprehensive understanding while engineering universal security processes that mitigate vulnerabilities to spoofing. The results of this work can help advance iris recognition technology by improving current iris recognition algorithms and by providing further insight into how anti-spoofing evaluations should be conducted. The three main tasks are stated in the "Experimental Plan" section below.
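
As one example of a biologically grounded dynamic cue, the sketch below checks a pupillary light reflex: a live eye should constrict measurably, and quickly, after a light stimulus, whereas a printed spoof shows no such dynamics. The thresholds are illustrative, not validated constants.

```python
# Pupillary light reflex liveness check on a per-frame pupil-diameter
# trace recorded after a controlled light flash.
import numpy as np

def pupil_reflex_live(diameters_mm, fps, stimulus_frame,
                      min_constriction=0.10, max_latency_s=0.6) -> bool:
    d = np.asarray(diameters_mm, dtype=float)
    baseline = d[:stimulus_frame].mean()      # pre-stimulus diameter
    post = d[stimulus_frame:]
    trough = int(post.argmin())               # frame of maximum constriction
    constriction = (baseline - post[trough]) / baseline
    return constriction >= min_constriction and trough / fps <= max_latency_s

# A live eye constricting ~24% within 0.4 s passes; a flat trace fails.
trace = [4.0] * 15 + [4.0 - 0.08 * min(t, 12) for t in range(30)]
print(pupil_reflex_live(trace, fps=30, stimulus_frame=15))       # True
print(pupil_reflex_live([4.0] * 45, fps=30, stimulus_frame=15))  # False
```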


Large Scale Face Recognition
Guodong Guo (WVU)
Face recognition (FR) has evolved from controlled (e.g., mugshot) to unconstrained scenarios (e.g., recognition in the wild) and has seen some breakthroughs very recently. The most frequently used database for unconstrained FR is the LFW (Labeled Faces in the Wild) dataset [1], which has been used extensively since its first release in 2007, resulting in about 50 published approaches. Very high accuracies have been achieved on LFW, the highest being 99.63%, reported in June 2015 [2], which even surpasses human performance on LFW [3]. Researchers have realized that FR performance on LFW is quite saturated. So what is next? Is FR a solved problem? It is quite obvious that the FR problem has NOT been solved. But how can FR algorithms be challenged to improve further? What are the next frontiers in FR? We believe scalability is one big issue to address in the next few years. LFW contains only about 13,000 face images of some 5,700 subjects. One obvious limitation of LFW is the small number of face images per subject, lacking sufficient variation for each subject; the number of subjects is not large either. To facilitate the study of large-scale FR, we will assemble a large database that is larger and more challenging than LFW. By "large scale" we mean both deeper (more face images per subject, containing much more variation) and broader (more subjects). We will explore how well face recognition methods perform on a large-scale database. This project advocates the study of face recognition at scale, the next grand challenge for FR.
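
The scalability question can be made concrete with a small evaluation harness, sketched below, that measures rank-1 identification accuracy as the gallery grows; random vectors stand in for a real matcher's templates, and the noise level is an arbitrary choice to make the scale effect visible.

```python
# Rank-1 identification accuracy as the gallery grows; mated probes are
# noisy copies of the first 1,000 gallery templates, so every subset below
# contains the mates.
import numpy as np

def rank1_accuracy(probe, probe_ids, gallery, gallery_ids) -> float:
    # Cosine similarity between L2-normalized template matrices.
    p = probe / np.linalg.norm(probe, axis=1, keepdims=True)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    best = (p @ g.T).argmax(axis=1)           # nearest gallery template
    return float(np.mean(gallery_ids[best] == probe_ids))

rng = np.random.default_rng(0)
dim, n_subjects = 64, 10_000
gallery = rng.normal(size=(n_subjects, dim))
gallery_ids = np.arange(n_subjects)
probe_ids = np.arange(1_000)
probe = gallery[:1_000] + 2.0 * rng.normal(size=(1_000, dim))
for size in (1_000, 5_000, 10_000):           # accuracy degrades with scale
    print(size, rank1_accuracy(probe, probe_ids,
                               gallery[:size], gallery_ids[:size]))
```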


Longitudinal Collection of Child Face Photographs
S. Schuckers
The purpose of this study is to consolidate longitudinal face images of children from multiple universities in order to study facial recognition under aging in children. The American Association of Orthodontists Foundation (AAOF) supported work to collect longitudinal craniofacial growth records (x-rays and photographs) at 9 institutions, and several other institutions have conducted similar projects. These images were captured at 6-month intervals up to 18 years of age. The primary effort of this project will be to travel to each of these institutions and digitize the photographs in a consistent and controlled manner.


Metagenomic Data Analytics for Human Identification
Jeremy M. Dawson, Donald Adjeroh
The availability of vast quantities of human and microbial genomic data can be exploited to uncover unique ways that the human genome impacts both internal and external bacterial communities and, conversely, how these communities may impact our own genome. Traits inherited from maternal and paternal genes impact our outward appearance, or identity, and the field of epigenetics is uncovering regions of the human genome that indicate uniqueness beyond readily observable physical appearance, such as personal habits and disease. Similarly, the human bacterial colonies present in locations ranging from skin surfaces to the digestive tract are greatly impacted by health and environment, but may also be correlated with the host's DNA through epigenetic factors. Our long-term goal is to bridge this gap by combining the various human genomic data sets with those from the microbiome. By doing so, we can mine human and microbial genomic data for potential biomarkers that enable determination of an individual's habits, health, and identity. The objective of this project is to study the potential association between the composition of the skin microbiome and certain genetic traits in the human genome, in order to build a framework for exploiting these associations for the determination of identity.
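
As a sketch of the association framework, the snippet below tests whether the abundance of one skin-microbiome taxon differs across host genotypes of a single SNP; the data layout and the choice of a Kruskal-Wallis test are assumptions for illustration, not the project's analysis plan.

```python
# Microbiome-genotype association sketch: does a taxon's abundance vary
# with a host SNP's allele count (0/1/2)? Simulated data stands in for
# real paired microbiome/genome samples.
import numpy as np
from scipy import stats

def taxon_snp_association(abundance, genotypes):
    """Kruskal-Wallis H and p-value across genotype groups."""
    groups = [abundance[genotypes == g] for g in np.unique(genotypes)]
    return stats.kruskal(*groups)

rng = np.random.default_rng(1)
genotypes = rng.integers(0, 3, size=300)
# Simulated effect: taxon abundance shifts with allele count.
abundance = rng.lognormal(mean=0.2 * genotypes, sigma=1.0)
print(taxon_snp_association(abundance, genotypes))
```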


Recognizing Faces from Low Quality Photos
Guodong Guo (WVU), Xin Li
Face recognition (FR) has made significant progress through years of research effort. For instance, FR on mugshots has achieved very high accuracy, since mugshots are typically of good quality, captured in controlled environments. While unconstrained face recognition has become very active in recent years [1], FR from low-quality photos is still not well studied. Typically, unconstrained face images are crawled from web sites, e.g., Flickr, Facebook, Google Images, etc.; however, the acquired images are not necessarily of low quality. Further, online celebrity images may be captured by professional photographers with cooperative subjects and under good illumination, possibly with high-end SLR cameras. So "unconstrained ≠ low quality," although the unconstrained setting can produce some low-quality images. In addition, low quality may be caused by other sources, such as sensor noise, motion blur, watermarks, and image compression, not necessarily unconstrained acquisition. In this project, we look at the face recognition problem from the perspective of image quality. We believe it is face image quality that impacts recognition performance, rather than simply whether recognition is unconstrained (vs. controlled). The objective of this project is to explore face recognition performance on low-quality photos and to develop a new method to improve face recognition accuracy on low-quality face images.
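
Because the project treats quality as the controlled variable rather than the vague label "unconstrained," degradations can be applied in a known, parameterized way and recognition accuracy related to each factor. A sketch of such controlled degradations (with illustrative parameter values) follows.

```python
# Controlled image degradations matching the quality factors named above:
# blur, sensor noise, aggressive JPEG compression, and low resolution.
import cv2
import numpy as np

def degrade(face_bgr, kind: str):
    if kind == "blur":                       # motion/defocus blur proxy
        return cv2.GaussianBlur(face_bgr, (9, 9), 0)
    if kind == "noise":                      # additive sensor noise
        noisy = face_bgr + np.random.normal(0, 15, face_bgr.shape)
        return np.clip(noisy, 0, 255).astype(np.uint8)
    if kind == "jpeg":                       # compression artifacts
        ok, buf = cv2.imencode(".jpg", face_bgr,
                               [cv2.IMWRITE_JPEG_QUALITY, 10])
        return cv2.imdecode(buf, cv2.IMREAD_COLOR)
    if kind == "downsample":                 # low resolution (1/4 scale)
        small = cv2.resize(face_bgr, None, fx=0.25, fy=0.25)
        return cv2.resize(small, face_bgr.shape[1::-1])
    raise ValueError(kind)
```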


UAS (Unmanned Aerial System) Person Identification for Tracking, Following or Autonomous Enforcement of Exclusionary Safety Zones Around Humans
Dan Rissacher (CU), Nils Napp, Srirangaraj Setlur, Venu Govindaraju (UB)
UAS use is growing rapidly in both consumer and government applications. The ability to autonomously detect and track human beings would have many positive implications. In the consumer realm, the primary goal is to allow a UAS to maintain a safe distance from humans, both to prevent harm from non-expert users and to guard against those with malicious intent (similar to current GPS-based geographic no-fly regions). In current government operations, High Value Individuals (HVIs) are often tracked to maintain Positive Identification (PID), but this requires a human in the loop, which adds cost and limits the number of individuals that can be tracked.


Use of body-worn video cameras for facial recognition used by law enforcement and military
N. M. Nasrabadi (WVU), G. Doretto
A body-worn camera is a small camera clipped to a soldier's or police officer's uniform, or possibly to their headgear. It can record video of the area in front of the wearer and audio of the surrounding environment. The deployment of body-worn cameras in police forces has gained increased attention in recent years. The footage from body-worn cameras can be used for biometric tasks performed instantly on the sensor (live) or off the sensor (remotely). Live face recognition (FR) could be applied to day/night-time traffic-stop scenarios using smart body-worn cameras. However, video footage from body-worn cameras presents new issues and challenges that do not exist, or have not been addressed, in traditional video FR using a handheld or stationary surveillance device. In the case of body-worn cameras, the video is very shaky due to the rapid body movements of the officer, and the camera records continuously while the user performs normal operations, so the user cannot deliberately capture all the relevant activities. The video may sometimes be focused not on the scene at large but on nearby objects. Video stabilization, background clutter/subtraction, video summarization, and selecting representative key frames for face detection and recognition are serious problems for wearable-camera biometrics that are addressed in this project. Dedicated pre-processing algorithms, executable on suitable smart body-worn cameras, will be developed for face detection and recognition. Two law enforcement scenarios are considered: 1) mobile passive acquisition at a distance; 2) close-up day-time and night-time traffic stops. Use of a Zetronix Blue Line police cam, equipped with an infrared LED, will be investigated.
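
A sketch of the front end of such a pre-processing chain follows: shake-blurred frames are rejected, and only frames containing a detectable face are kept for the recognition engine. The Haar detector and thresholds are illustrative stand-ins for the dedicated on-camera algorithms the project will develop.

```python
# Summarizing shaky body-worn footage: drop motion-blurred frames, then
# keep only frames in which a face is actually detectable.
import cv2

_FACE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def usable_frames(video_path: str, blur_threshold: float = 60.0):
    """Return the subset of frames worth forwarding for face recognition."""
    cap, kept = cv2.VideoCapture(video_path), []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Reject frames blurred by rapid body movement (low focus measure).
        if cv2.Laplacian(gray, cv2.CV_64F).var() < blur_threshold:
            continue
        # Keep only frames that contain at least one detectable face.
        if len(_FACE.detectMultiScale(gray, 1.1, 5)) > 0:
            kept.append(frame)
    cap.release()
    return kept
```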