Download CITeR Software
Periodically, CITeR makes available for download specific software developed as a part of its research portfolio which Center Members believe may be of significant value to the biometrics community. Regular CITeR Members have already received executable versions of these software tools and work with CITeR faculty in their use and advancement.
Software tool downloads contain documentation but are otherwise unsupported research versions. No guarantees are either expressed or implied regarding these public research versions.
- Greedy-DiM (Diffusion Morphs)
Greedy-DiM is a simple yet unreasonably effective face morphing algorithm that far surpasses previous representation-based morphing algorithms. Diffusion Morphs (DiM) are a recently proposed morphing attack that has achieved state-of-the-art performance among representation-based morphing attacks. However, existing research on DiMs had not leveraged their iterative nature, leaving the DiM model as a black box, treated no differently than a Generative Adversarial Network (GAN) or Variational Autoencoder (VAE). The greedy strategy guides the iterative sampling process of DiM models, searching at each step for an optimal choice using an identity-based heuristic function. Compared against 10 other state-of-the-art morphing algorithms on the open-source SYN-MAD 2022 competition dataset, Greedy-DiM achieved an MMPMR of 100%. Greedy-DiM significantly improves the effectiveness of DiM while retaining the high visual fidelity that is characteristic of DiM.
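The greedy strategy can be illustrated with a toy sketch: at each step of the sampling process, several candidate next states are scored by an identity-based heuristic, and the best one is kept. Everything below (the 2-D "embeddings", the candidate set, the distance-sum heuristic) is a hypothetical stand-in for the actual DiM sampler and face-recognition features:

```python
import numpy as np

def identity_heuristic(candidate, id_a, id_b):
    """Toy identity heuristic: sum of distances from the candidate
    embedding to both contributing identities (lower is better)."""
    return np.linalg.norm(candidate - id_a) + np.linalg.norm(candidate - id_b)

def greedy_step(candidates, id_a, id_b):
    """Keep the candidate next state that minimizes the heuristic."""
    scores = [identity_heuristic(c, id_a, id_b) for c in candidates]
    return candidates[int(np.argmin(scores))]

# Toy 2-D "embeddings" of the two identities being morphed
id_a = np.array([0.0, 0.0])
id_b = np.array([2.0, 0.0])

# Three candidate next states proposed by the (hypothetical) sampler
candidates = [np.array([5.0, 5.0]), np.array([1.0, 0.0]), np.array([-3.0, 2.0])]

best = greedy_step(candidates, id_a, id_b)  # lies between both identities
```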
- MobileNetV2 Morphing Attack Detector
Face Recognition (FR) systems have become a widely used biometric modality. Unfortunately, FR systems are highly vulnerable to a type of attack known as a face morphing attack (MA). An MA aims to break the one-to-one association between images and identities in the FR system by creating a single image composed from two separate identities, such that this image triggers a false acceptance for both identities. Thankfully, most MAs produce significant artefacts which can be detected by a specifically trained model. We present a lightweight and efficient MobileNetV2 MA detector trained on a large database of high-resolution morphs (1024 x 1024). The detector was trained against morphs generated by the StyleGAN, OpenCV, and FaceMorpher MAs on the FERET and FRLL datasets. The tool is available here on GitHub.
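Once trained, such a detector reduces to thresholding a per-image morph score. A minimal sketch of that decision step, with hypothetical scores standing in for the MobileNetV2 sigmoid output:

```python
def classify_morph(scores, threshold=0.5):
    """Map detector scores (probability that an image is a morph,
    e.g. the sigmoid output of a binary classification head) to labels."""
    return ["morph" if s >= threshold else "bona fide" for s in scores]

# Hypothetical detector outputs for four probe images
scores = [0.02, 0.97, 0.40, 0.81]
labels = classify_morph(scores)
```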
- Skin Reflectance Estimate Based on Dichromatic Separation (SREDS)
Face recognition (FR) systems have become widely used and readily available in recent history. However, differential performance across certain demographics has been identified in popular FR models. Skin tone differences between demographics can be one of the factors contributing to the differential performance observed in face recognition models. Skin tone metrics provide an alternative to self-reported race labels when such labels are lacking or not available at all, e.g., in large-scale face recognition datasets. In this work, we provide a further analysis of the generalizability of the Skin Reflectance Estimate based on Dichromatic Separation (SREDS) against other skin tone metrics, and provide a use case for substituting SREDS scores for race labels in a privacy-preserving learning solution. Our findings suggest that SREDS consistently produces a skin tone metric with lower variability within each subject, and that SREDS values can be used as an alternative to self-reported race labels with minimal drop in performance. Finally, we provide a publicly available and open-source implementation of SREDS to help the research community. The SREDS code is available at this link. The paper is here.
- Synthetic Fingerprint Generation
The Clarkson Fingerprint Generator (CFG) utilizes progressive growth-based Generative Adversarial Networks (GANs). We demonstrate that the CFG is capable of generating realistic, high-fidelity, full, plain-impression fingerprints. Our results suggest that the fingerprints generated by the CFG are unique, diverse, and resemble the training dataset in terms of minutiae configuration and quality, while not revealing the underlying identities of the training data. We make the pre-trained CFG model and the synthetically generated dataset publicly available at this link. The paper is here.
- Deep Slap Fingerprint Segmentation for Juveniles and Adults
Many fingerprint recognition systems capture four fingerprints in one image. In such systems, the fingerprint processing pipeline must first segment each four-fingerprint slap into individual fingerprints. Most current fingerprint segmentation algorithms, however, have been designed and evaluated using only adult fingerprint datasets. In this work, we developed a human-annotated in-house dataset of 15,790 slaps, of which 9,084 are adult samples and 6,706 are samples drawn from children ages 4 to 12. The dataset was then used to evaluate the matching performance of NFSEG, a slap fingerprint segmentation system developed by NIST, on slaps from adult and juvenile subjects. Our results reveal the lower performance of NFSEG on slaps from juvenile subjects. Finally, we used our novel dataset to develop the Mask R-CNN based Clarkson Fingerprint Segmentation (CFSEG). Our matching results using the Verifinger fingerprint matcher indicate that CFSEG outperforms NFSEG for both adult and juvenile slaps. The CFSEG model is publicly available here. The paper is here.
- VMBox for Iris
OSIRIS (Open Source for IRIS) is an open-source iris recognition system. The original OSIRIS implementation requires OpenCV version 2.4 and an older Ubuntu (Linux) release, and does not support more recent versions. These older versions of OpenCV and Ubuntu come preconfigured in the OSIRIS VirtualBox virtual machine. Download the zip file with the VMBox here. The following GitHub link has additional information for running the VMBox: Github link to VMBox Information.
- MSU – LatentAFIS – A system for identifying latent fingerprints
Over the last few years, with support from IARPA, we have been working on a fully automated, end-to-end latent fingerprint identification system. This code provides an end-to-end latent fingerprint search system, including automated region of interest (ROI) cropping, latent image preprocessing, feature extraction, and feature comparison, and outputs a candidate list. Please find the source code on GitHub here: https://github.com/luannd/MSU-LatentAFIS. Our white paper describing the technical details of the system can be found on arXiv here: https://arxiv.org/abs/1812.10213. Technical details of our algorithm can be accessed at https://arxiv.org/pdf/1812.10213.pdf
- WVU Channel-Robust Speaker Verification Software
Speaker verification software robust to channel variation, based on a 3D Convolutional Neural Network in the text-independent setting. The details of this work appeared in the IEEE International Conference on Multimedia and Expo (ICME), 2018. The software can be found in the GitHub repository. More details can be found in the following: Torfi, A., Dawson, J. and Nasrabadi, N.M., 2018, July. Text-independent speaker verification using 3D convolutional neural networks. In 2018 IEEE International Conference on Multimedia and Expo (ICME) (pp. 1-6). IEEE.
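As with most embedding-based verification systems, the 3D CNN maps an utterance to a fixed-length speaker embedding, and verification reduces to comparing two embeddings against a threshold. A minimal sketch of that decision step, using toy hand-made vectors in place of real network outputs:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two speaker embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify(enrolled_emb, test_emb, threshold=0.7):
    """Accept the identity claim if the embeddings are similar enough.
    The threshold is an illustrative choice, not a tuned value."""
    return cosine_similarity(enrolled_emb, test_emb) >= threshold

# Toy embeddings standing in for 3D CNN outputs
speaker_a = [0.9, 0.1, 0.4]        # enrollment utterance
speaker_a_again = [0.85, 0.15, 0.35]  # same speaker, new utterance
speaker_b = [0.1, 0.9, -0.2]       # a different speaker
```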
- WVU – Lip Reading – Cross Audio-Visual Recognition using 3D Convolutional Neural Networks
This code provides an implementation of Coupled 3D Convolutional Neural Networks for audio-visual matching; lip reading is one specific application of this work. This repository on Coupled 3D Convolutional Neural Networks for audio-visual matching, available on GitHub and also on CodeOcean, is from the following paper:
“Audio–visual recognition (AVR) has been considered as a solution for speech recognition tasks when the audio is corrupted, as well as a visual recognition method used for speaker verification in multispeaker scenarios. The approach of AVR systems is to leverage the extracted information from one modality to improve the recognition ability of the other modality by complementing the missing information. The essential problem is to find the correspondence between the audio and visual streams, which is the goal of this paper. We propose the use of a coupled 3D convolutional neural network (3D CNN) architecture that can map both modalities into a representation space to evaluate the correspondence of audio–visual streams using the learned multimodal features. The proposed architecture will incorporate both spatial and temporal information jointly to effectively find the correlation between temporal information for different modalities.”
From: Torfi, A., Iranmanesh, S.M., Nasrabadi, N. and Dawson, J., 2017. 3D Convolutional Neural Networks for Cross Audio-Visual Matching Recognition. IEEE Access, 5, pp.22081-22091.
- CU – Biometric Cryptosystem Software Implementation
This is our software implementation of the Cambridge Biometric Cryptosystem, an algorithm that confirms user authenticity through the use of an iris template. A user enrolls by providing an iris template and receiving a randomly generated key. These two inputs are used to generate two variables that are stored on a physical token the user receives. The first variable is a hash of the original key. The second, called a locked template, is the result of an exclusive-or (XOR) between the enrollment template and the randomly generated key after it has been passed through Reed-Solomon and then Hadamard encoding. When a user attempts to gain access to the system, the user provides an iris sample and the physical token. The locked template is XORed with the user's sample template, producing the encoded key with errors introduced by the differences between the enrollment template and the sample template. This result is then put through Hadamard decoding, followed by Reed-Solomon decoding. If the person attempting access is a valid user with the correct token, the result of the decoding matches the original key, and the user is deemed valid and granted access. If someone tries to access the system using someone else's token, the result will differ; that user is treated as an impostor and denied access. This software implementation in C was done by Charles McGuffey (currently with Carnegie Mellon University), under the direction of Drs. Chen Liu and Stephanie Schuckers (Clarkson University).
The software and accompanying documentation are available for download at: https://github.com/tjuhh/biometric-cryptosystem/tree/master/Generic_Daugman_Cryptosystem
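The enrollment and verification flow described above can be sketched end to end. The sketch below substitutes a simple repetition code with majority-vote decoding for the actual Reed-Solomon + Hadamard cascade, and uses short toy bit strings; it illustrates the locking/unlocking logic only, not the C implementation:

```python
import hashlib

def encode(key_bits, repeat=5):
    """Toy error-correcting encoder: each key bit repeated `repeat`
    times (standing in for the Reed-Solomon + Hadamard cascade)."""
    return [b for b in key_bits for _ in range(repeat)]

def decode(code_bits, repeat=5):
    """Majority-vote decoder for the repetition code."""
    return [int(sum(code_bits[i:i + repeat]) > repeat // 2)
            for i in range(0, len(code_bits), repeat)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def enroll(template, key_bits):
    """Produce the two token variables: key hash and locked template."""
    key_hash = hashlib.sha256(bytes(key_bits)).hexdigest()
    locked = xor(template, encode(key_bits))
    return key_hash, locked

def verify(sample, key_hash, locked):
    """Unlock with a fresh sample; succeeds only if the sample is
    close enough to the enrollment template for decoding to recover
    the original key."""
    recovered = decode(xor(locked, sample))
    return hashlib.sha256(bytes(recovered)).hexdigest() == key_hash

key = [1, 0, 1, 1]                                   # toy 4-bit key
template = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1,
            0, 1, 0, 1, 1, 0, 1, 0, 0, 1]            # toy iris template
key_hash, locked = enroll(template, key)

# Genuine sample: same template with one bit flipped by sensor noise
genuine = template[:]
genuine[3] ^= 1
```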
- PRESS
PRESS is a tool to help researchers analyze data collected on biometric authentication devices. It was created at St. Lawrence University, through generous funding from CITeR. PRESS is designed to simplify the analysis of bio-authentication data by making it easy for the user to create many of the basic statistical summaries in common usage, including confidence intervals, genuine vs. impostor histograms, EER calculation, and ROC curves. In addition, PRESS has a tool for determining the number of individuals that need to be tested under certain specified conditions. PRESS handles data in either text or Excel format. It was designed by Dr. Michael Schuckers (St. Lawrence University) and coded by Nona Mramba (University of Maryland) and C. J. Knickerbocker (St. Lawrence University). PRESS and accompanying documentation will be available at the following website: http://myslu.stlawu.edu/~msch/biometrics/book/software.html
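The EER and ROC summaries PRESS produces can be computed directly from genuine and impostor score lists. A minimal sketch with toy scores (PRESS itself exposes these analyses through its interface):

```python
import numpy as np

def roc_points(genuine, impostor, thresholds):
    """False-accept and false-reject rates at each threshold,
    assuming higher score = stronger match."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    return far, frr

def eer(genuine, impostor):
    """Equal error rate: the operating point where FAR and FRR
    are closest, evaluated at the observed scores."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far, frr = roc_points(genuine, impostor, thresholds)
    i = int(np.argmin(np.abs(far - frr)))
    return (far[i] + frr[i]) / 2

# Toy match scores: genuine pairs score high, impostor pairs low
genuine = [0.9, 0.8, 0.85, 0.7, 0.95]
impostor = [0.1, 0.3, 0.2, 0.4, 0.75]
```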
- WVU MUBI
MUBI addresses the growing interest in predicting and evaluating the performance of systems that integrate multiple biometric devices. Starting from the matching scores of each individual biometric device, MUBI generates their respective ROC curves. It then calculates the full range of performance characteristics (genuine accept vs. false accept rates) for different multibiometric system configurations. Finally, the tool assists users in selecting cut-off scores for each individual device such that they meet a desired performance goal. The tool was developed under the direction of Dr. Bojan Cukic of West Virginia University. Its Java implementation was completed by Martin Mladenovski, currently with Microsoft. The MUBI tool and accompanying documentation will be made available for download at http://www.csee.wvu.edu/~cukic/MUBI/
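The kind of analysis MUBI performs can be illustrated in miniature: combine per-device matching scores, then choose the smallest cut-off that meets a desired false-accept goal. The weighted-sum fusion rule and the toy impostor scores below are illustrative assumptions, not MUBI's actual method:

```python
import numpy as np

def fuse(scores_a, scores_b, w=0.5):
    """Simple weighted-sum score-level fusion of two devices
    (an illustrative fusion rule)."""
    return w * np.asarray(scores_a) + (1 - w) * np.asarray(scores_b)

def threshold_for_far(impostor_scores, far_goal):
    """Smallest cut-off score whose false-accept rate on the
    impostor scores does not exceed the desired goal."""
    impostor_scores = np.asarray(impostor_scores)
    for t in np.sort(impostor_scores):
        if (impostor_scores >= t).mean() <= far_goal:
            return float(t)
    return float(impostor_scores.max()) + 1e-9  # reject everything

# Toy impostor scores from two hypothetical devices
imp_a = [0.1, 0.2, 0.3, 0.6]
imp_b = [0.2, 0.1, 0.4, 0.5]
fused_imp = fuse(imp_a, imp_b)
cutoff = threshold_for_far(fused_imp, far_goal=0.25)
```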