One of the new challenges for face identification systems is identifying a face from heavily altered facial images. Such a system can be useful, for example, in mugshot identification of criminal suspects. In a recent study, researchers compared frontal views with “photoshopped” versions generated by online face-transformation tools. The purpose of the study was to propose a model that includes the features necessary for accurate identification, and experiments were conducted with real-case images to evaluate its performance.
Photo-editing tools have become an everyday routine, not only for professional photo editing but also for ordinary users touching up their photographs and selfies before posting them on social media such as Facebook, Tumblr, or Instagram. The last of these poses a special challenge for face recognition algorithms, since Instagram filters tend to obscure certain facial landmarks and alter lighting, producing a result that differs considerably from the real-life appearance. This is why passports and similar official documents require unaltered photographs, and why criminal mugshots are taken at high resolution under good lighting conditions: to make future identification as accurate as possible.
An earlier study by Uhl and Lobo produced a framework for recognizing facial images from police sketches and matching them against mugshots, but this is the first attempt to recover the real identity behind altered photographs. The approach could be used not only for forensic purposes but also for detecting images that have been retouched despite being presented as unmodified.

The first method used is the scale-invariant feature transform (SIFT), an algorithm that detects and describes local features in images. In the first stage, potential interest points are identified by scanning the image over location and scale; the candidate keypoints are then localized, and unstable ones are eliminated. After that, one or more dominant orientations are assigned to each keypoint based on the local image gradients, and finally a local image descriptor is built for each keypoint.

The second method, used for verification, is principal component analysis (PCA), a statistical procedure that converts a set of observations of possibly correlated variables into a set of so-called principal components: linearly uncorrelated variables, where the first principal component accounts for as much of the variability in the data as possible. Each image is treated as a vector in a high-dimensional space, and PCA is applied to the set of images to yield a reduced subspace that captures the variability between the input images.
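As a minimal sketch of the PCA step, the following pure-Python code finds the first principal component via power iteration (a standard way to approximate the dominant eigenvector, not necessarily the method used in the study) and projects centered images onto it. The 4-pixel toy “images” are assumptions for illustration only.

```python
def mean_vector(vectors):
    # Component-wise mean of a list of equal-length vectors.
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def covariance(centered):
    # Sample covariance matrix of mean-centered vectors.
    n, d = len(centered), len(centered[0])
    return [[sum(v[i] * v[j] for v in centered) / (n - 1)
             for j in range(d)] for i in range(d)]

def first_principal_component(cov, iters=200):
    # Power iteration: repeated multiplication by the covariance matrix
    # converges to its dominant eigenvector, i.e. the direction of
    # maximum variance in the data.
    v = [1.0] * len(cov)
    for _ in range(iters):
        w = [sum(r * x for r, x in zip(row, v)) for row in cov]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy 4-pixel "face images" standing in for real flattened face vectors.
images = [[2.0, 0.0, 1.0, 0.5],
          [4.0, 0.1, 2.1, 0.4],
          [6.0, -0.1, 3.0, 0.6]]
mu = mean_vector(images)
centered = [[x - m for x, m in zip(img, mu)] for img in images]
pc1 = first_principal_component(covariance(centered))
# Each image is now summarized by one coordinate along pc1:
projections = [sum(c * p for c, p in zip(row, pc1)) for row in centered]
```

In the study's setting the vectors would be full face images and more than one principal component would be kept, but the principle is the same: the reduced coordinates capture most of the variability between faces.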
The aim of this face identification system was to measure the similarity between a query face image and every face image in the database, and to retrieve the image with the highest similarity. The dataset consisted of 100 original faces from a criminal database, and the researchers manipulated 100 faces as well. The SIFT method achieved 92% accuracy, while PCA, used for verification, reached only 58%. This gives an insight into dealing with retouched images in face identification, and an initial framework that could be improved using modern computer vision and machine learning advancements.
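The retrieval step described above can be sketched as a nearest-neighbor search over feature vectors. Cosine similarity and the toy 3-dimensional vectors below stand in for the study's actual SIFT- or PCA-based matching scores (assumptions for illustration only):

```python
def cosine_similarity(a, b):
    # Similarity of two feature vectors, in [-1, 1] (1 = same direction).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def identify(query, database):
    # database maps identity -> feature vector; return the identity
    # whose vector has the highest similarity to the query.
    return max(database,
               key=lambda name: cosine_similarity(query, database[name]))

# Hypothetical database of enrolled face feature vectors.
database = {
    "suspect_a": [0.9, 0.1, 0.3],
    "suspect_b": [0.2, 0.8, 0.5],
    "suspect_c": [0.4, 0.4, 0.9],
}
query = [0.85, 0.15, 0.35]  # a slightly "retouched" version of suspect_a
best_match = identify(query, database)
```

The hope, borne out by the SIFT results, is that moderate retouching perturbs the feature vector only slightly, so the true identity still scores highest.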
- Chennamma, H. R. and Lalitha Rangarajan (2012): “Mugshot Identification from Manipulated Facial Images”, at arxiv.org
- Uhl Jr., Robert G. and Niels da Vitoria Lobo (1996): “A Framework for Recognizing a Facial Image from a Police Sketch”, in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition: pp. 586.