2005 Projects

Non‐Ideal Iris Recognition: Segmentation and Algorithms
Natalia Schmid (WVU), Stephanie Schuckers (Clarkson), Gamal Fahmy, Xin Li and Lawrence Hornak (WVU)

Multispectral and Multiframe Iris Analysis: Phase I
Lawrence Hornak, Arun Ross and Xin Li (WVU)

Cryptographic Protection for Sharable Biometric Test Databases
Bojan Cukic (WVU)

Multi‐spectral Fusion for Improved Face Recognition
Besma Abidi and David Page (UTK)

Robust Surveillance System Utilizing 2D Video and 3D Face Models
Anil K. Jain (MSU)

MUBI: Continued Development of a Multibiometric Performance Prediction Tool
Arun Ross and Bojan Cukic (WVU)

Generation of Synthetic Irises
Natalia Schmid, Arun Ross, Bojan Cukic (WVU) and Harshinder Singh, Dept. of Statistics (WVU)

Interoperability, Performance, and Fusion Issues in Fingerprint Sensors
Stephanie Schuckers, Sunil Kumar (Clarkson) and Arun Ross (WVU)

Robust 3‐D Face
Anil K. Jain (MSU)

Summaries:

Non‐Ideal Iris Recognition: Segmentation and Algorithms

Natalia Schmid (WVU), Stephanie Schuckers (Clarkson), Gamal Fahmy, Xin Li and Lawrence Hornak (WVU)

While current commercial iris recognition systems based on patented algorithms have the potential for high recognition performance, they suffer from the need for a highly constrained subject presentation. This work will explore techniques for adjusting to non-ideal images in support of iris classification. Non-ideal factors that impact iris recognition include off-angle presentation (horizontal and vertical), noise, rotation, etc. Furthermore, it has been empirically observed that robust segmentation of the iris region is the most crucial factor influencing encoding and, ultimately, the performance of any decision-making iris-based system designed thus far, for both ideal and non-ideal images. We will explore approaches for robust segmentation.

Multispectral and Multiframe Iris Analysis: Phase I

Lawrence Hornak, Arun Ross and Xin Li (WVU)

Multispectral imaging holds tremendous potential for improving the performance of iris systems, while the motion characteristics of the iris captured across multiple frames are critical to liveness detection. The proposed work has three goals: 1) enhancement of iris recognition performance through fusion of appropriate spectral band information; 2) mapping of visible-band information to IR, enabling the interoperability of IR and visible iris images; and 3) liveness detection based on signatures such as melanin and elastic deformation of the iris. In this initial Phase I study, we propose to experimentally obtain iris images from a collection of representative irises, complete an initial analysis of the data, and explore algorithmic approaches in order to address the fundamental need for such a rich set of iris data and determine the merits of further exploration in these three areas.

Cryptographic Protection for Sharable Biometric Test Databases

Bojan Cukic (WVU)

Testing of biometric systems has proven to be a complicated task. Recent studies in CITeR and elsewhere demonstrate that large samples are needed to inspire statistical confidence in the validity and repeatability of biometric tests. As a result, many current projects collect their own datasets for the purpose of validating research results. Biometric data collection is a costly activity. Due to the use of human subjects and privacy concerns, Institutional Review Boards impose strict limitations on the ability to share biometric data. The goal of this proposal is to develop cryptographic protocols that provide the necessary levels of confidentiality for biometric test data. In addition to confidentiality, the protocol will ensure non-repudiated access limited to the group of registered biometric database users. The registration procedure and X.509-type public key certificates will be managed by a cryptographic server. The server will also generate symmetric encryption keys for database entries (biometric image and signal files) and user-specific session keys. The key distribution algorithm will ensure that the minimal unit of biometric data sharing can be a single database entry (for example, an image), a subset of entries in a single modality, or a multi-modal collection. The last feature of the protocol will be the enforcement of de-identification: the protocol will mask and automatically disallow sharing of humanly identifiable biometric modalities or any other database information that may compromise the privacy of volunteers.
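As a rough illustration of the per-entry key idea, the sketch below (Python, assuming the third-party "cryptography" package) encrypts each database entry under its own symmetric key and wraps that key under a registered user's session key. The class and method names are hypothetical, and the sketch omits the X.509 registration, non-repudiation, and de-identification components of the actual protocol.

from cryptography.fernet import Fernet

class EntryStore:
    """Illustrative server-side store: each biometric entry gets its own symmetric key."""
    def __init__(self):
        self.entries = {}      # entry_id -> ciphertext
        self.entry_keys = {}   # entry_id -> per-entry key (held only by the server)

    def add_entry(self, entry_id, data: bytes):
        key = Fernet.generate_key()
        self.entry_keys[entry_id] = key
        self.entries[entry_id] = Fernet(key).encrypt(data)

    def share_entry(self, entry_id, session_key: bytes) -> bytes:
        # Wrap the per-entry key under a registered user's session key, so the
        # minimal unit of sharing is a single database entry.
        return Fernet(session_key).encrypt(self.entry_keys[entry_id])

# Usage sketch (data and identifiers are placeholders)
store = EntryStore()
store.add_entry("img_0001", b"...iris image bytes...")
session_key = Fernet.generate_key()               # issued to a registered user
wrapped = store.share_entry("img_0001", session_key)
entry_key = Fernet(session_key).decrypt(wrapped)
plaintext = Fernet(entry_key).decrypt(store.entries["img_0001"])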

Multi‐spectral Fusion for Improved Face Recognition

Besma Abidi and David Page (UTK)

The objective of this project is to develop an optimal fusion-based multi-spectral face recognition system by comparing the performance of various combinations of subsets from a large set of spectral responses using two commercial face recognition engines (FaceIt and FaceVACS).

Robust Surveillance System Utilizing 2D Video and 3D Face Models

Anil K. Jain (MSU)

2D face images acquired from static cameras do not contain sufficient information for reliable user identification and are difficult to obtain in complex environments. We propose to develop (i) a robust system for face acquisition at a distance (>30 ft.) using live video obtained from pan-tilt-zoom cameras, (ii) a face recognition system for surveillance applications utilizing video and 3D face models, and (iii) a framework to integrate identity evidence with tracking cues (e.g., color and texture) to monitor subjects in challenging environments. The difficulties of user identification in surveillance applications with low-quality face images will be overcome by utilizing the rich information contained in videos and pose- and lighting-invariant 3D face models, as well as the integration of identity evidence and tracking cues.

MUBI: Continued Development of a Multibiometric Performance Prediction Tool

Arun Ross and Bojan Cukic (WVU)

Multimodal and multibiometric techniques are migrating into mainstream applications. Many different decision-level and score-level fusion techniques have been described in the literature, but when it comes to determining the performance benefits of a multimodal approach for a specific application, system designers do not have the tools to evaluate and compare different fusion algorithms. An earlier CITeR project developed MUBI, a freely available open-source software tool for performance evaluation of decision-level fusion in multimodal and multibiometric systems. The only inputs that MUBI requires are the sets of genuine/impostor scores of the biometric devices considered for system integration. We propose extending MUBI to include several major algorithms for score-level fusion. This expansion will require a significant redesign of the existing tool, but the benefits of having such a tool clearly outweigh the cost of its development.
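To illustrate the kind of computation a score-level fusion module performs, the sketch below (Python/NumPy, with made-up scores and hypothetical function names) applies min-max normalization followed by the simple sum rule to genuine/impostor scores from two matchers; MUBI's actual algorithms and interface may differ.

import numpy as np

# Toy genuine/impostor scores from two matchers, aligned by comparison index.
# The numbers are invented purely for illustration.
face_gen, face_imp = np.array([72., 80., 65.]), np.array([30., 41., 25.])
iris_gen, iris_imp = np.array([0.91, 0.88, 0.95]), np.array([0.12, 0.30, 0.08])

def min_max(s, lo, hi):
    """Map raw matcher scores to [0, 1] using the matcher's observed range."""
    return (s - lo) / (hi - lo)

def fuse(gen_list, imp_list):
    """Sum-rule score-level fusion after per-matcher min-max normalization."""
    fused_gen = np.zeros_like(gen_list[0])
    fused_imp = np.zeros_like(imp_list[0])
    for g, i in zip(gen_list, imp_list):
        lo, hi = min(g.min(), i.min()), max(g.max(), i.max())
        fused_gen += min_max(g, lo, hi)
        fused_imp += min_max(i, lo, hi)
    n = len(gen_list)
    return fused_gen / n, fused_imp / n

gen, imp = fuse([face_gen, iris_gen], [face_imp, iris_imp])
print("fused genuine:", gen, "fused impostor:", imp)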

Generation of Synthetic Irises

Natalia Schmid, Arun Ross, Bojan Cukic (WVU) and Harshinder Singh, Dept. of Statistics (WVU)

Iris-based identification has gained considerable attention from the research community in parallel with its public acceptance. A number of iris recognition algorithms have been developed over the past few years. While most iris recognition systems demonstrate outstanding recognition performance when tested on databases of small or medium size, their performance cannot be guaranteed for large-scale datasets (on the order of a few million). The largest database reported thus far consists of 350,000 iris images. In addition, large-scale databases are private and thus not accessible to the research community. As an alternative to a physically collected database of iris images, we propose to generate a large-scale database of synthetic irises using nonparametric Markov Random Fields (MRFs). The MRF is a leading model used in the field of texture synthesis and analysis [1], and the iris is rich in texture. The challenge lies in generating physically feasible irises. This problem can be reduced to generating two or three different texture patterns for the iris and solving a boundary problem to unify (we say "stitch") them. Together with designing a tool capable of generating a large-scale database of irises, we propose to perform the following three complementary studies: (1) synthesizing occluded parts of iris images (interpolation); (2) generating iris images from a "partial iris," i.e., randomly located patches of small size on the iris image (extrapolation); and (3) interpolation of a low-resolution iris image into a higher-resolution image using texture synthesis techniques based on MRFs. Performance measures will be developed to evaluate the results of these tasks.
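As a rough sketch of nonparametric, MRF-style texture synthesis (in the spirit of this family of methods, not the project's actual algorithm), the Python/NumPy code below grows an output texture in raster order by copying, for each pixel, the sample pixel whose causal neighborhood matches best. Function names and parameters are illustrative, and the iris-specific constraints (physically feasible patterns and their "stitching") are not modeled.

import numpy as np

def causal_window(img, y, x, half):
    """Pixels above the center row plus those to its left (already synthesized)."""
    top = img[y - half:y, x - half:x + half + 1].ravel()
    left = img[y, x - half:x].ravel()
    return np.concatenate([top, left]).astype(float)

def synthesize_texture(sample, out_h, out_w, win=5, seed=0):
    """Raster-scan nonparametric texture synthesis sketch over a grayscale sample."""
    rng = np.random.default_rng(seed)
    h, w = sample.shape
    half = win // 2
    # Initialize the output with noise drawn from the sample histogram;
    # the border strip stays as noise in this simplified version.
    out = rng.choice(sample.ravel(), size=(out_h, out_w))
    # Pre-extract causal neighborhoods for every valid sample pixel.
    coords = [(y, x) for y in range(half, h - half) for x in range(half, w - half)]
    neigh = np.array([causal_window(sample, y, x, half) for y, x in coords])
    for y in range(half, out_h - half):
        for x in range(half, out_w - half):
            q = causal_window(out, y, x, half)
            d = ((neigh - q) ** 2).sum(axis=1)   # SSD over the causal neighborhood
            by, bx = coords[int(np.argmin(d))]
            out[y, x] = sample[by, bx]
    return out

# Usage sketch: grow a 64x64 texture from a small grayscale exemplar.
exemplar = np.random.default_rng(1).integers(0, 256, size=(32, 32)).astype(np.uint8)
synthetic = synthesize_texture(exemplar, 64, 64)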

Interoperability, Performance, and Fusion Issues in Fingerprint Sensors

Stephanie Schuckers, Sunil Kumar (Clarkson) and Arun Ross (WVU)

The problem of cross-sensor matching has received limited attention in the literature. Furthermore, little work has been performed to allow comparisons of sensors independent of the underlying algorithm. The goal of this research is to assess and develop techniques to facilitate fingerprint sensor interoperability and to design methods to quantify sensor performance. The following issues will be investigated: (i) enumerate and study the factors that impede fingerprint sensor interoperability; (ii) design an image quality metric, independent of the sensor, that is correlated with performance; (iii) develop methods to compare sensor performance considering image quality, interoperability, and the identified factors, including the choice of sensor during enrollment; (iv) devise sensor fusion schemes to increase population coverage, enhance interoperability, and improve matching performance; and (v) develop fingerprint representation and matching schemes that permit interoperability without compromising performance.
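As one hypothetical ingredient for item (ii), the sketch below (Python/NumPy) computes a block-wise gradient-coherence score, a common building block of sensor-independent fingerprint quality measures; the project's eventual metric is not defined by this code and would likely combine several such features.

import numpy as np

def block_coherence(img, block=16):
    """Mean gradient coherence over non-overlapping blocks of a grayscale image.

    Values near 1 indicate strongly oriented (clear ridge) structure; values
    near 0 indicate noisy or low-contrast regions.
    """
    img = img.astype(float)
    gy, gx = np.gradient(img)
    h, w = img.shape
    scores = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            bx = gx[y:y + block, x:x + block]
            by = gy[y:y + block, x:x + block]
            a, b, c = (bx * bx).sum(), (by * by).sum(), (bx * by).sum()
            # Eigenvalues of the 2x2 gradient covariance matrix.
            disc = np.sqrt((a - b) ** 2 + 4 * c * c)
            lam1, lam2 = (a + b + disc) / 2, (a + b - disc) / 2
            if lam1 + lam2 > 0:
                scores.append((lam1 - lam2) / (lam1 + lam2))
    return float(np.mean(scores)) if scores else 0.0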

Robust 3‐D Face

Anil K. Jain (MSU)

Limitations of 2D face recognition systems are now well known; these include difficulties with changes in lighting, pose, and expression. This has motivated research on 3D face recognition. Our prototype 3D face recognition system (which combines 2D appearance and 3D surface information) achieves 98% accuracy (with 3D models of 100 subjects and ~600 test scans) in the presence of pose and lighting variations, but the performance drops to 91% in the presence of changes in expression. In this project we propose to estimate the non-linear deformation of the facial surface introduced by various expression changes in order to make 3D face recognition systems more robust.