Deep Neural Networks Do Not Recognize Negative Images
Mar 22, 2017
Deep Neural Networks have achieved remarkable performance on a variety of pattern-recognition tasks, particularly visual classification problems, where new algorithms are reported to match or even surpass human performance. Unlike computers, humans have an overall sense of objects and can recognize them in various forms, such as different scales, orientations, colors, or brightness levels. Several recent papers have hypothesized that DNNs likewise develop an understanding of objects from the training data, to the extent that they can even generate new images. To probe this claim, researchers from the University of Washington tested state-of-the-art DNNs on negative images and showed that accuracy drops to the level of random classification.
The researchers evaluated the performance of DNNs on negative images of the training data, i.e. images with reversed brightness, where the lightest parts appear darkest and vice versa. These complemented images are usually still easily recognizable by humans, yet the networks appear to map them essentially at random to the output classes. This suggests that DNNs do not learn the structure or semantics of the objects and cannot generalize the underlying concepts, but rather memorize the inputs. Current training methods may therefore need improvement, since the learned models fail to generalize semantically.
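A negative image in this sense is simply the pixel-wise complement of the original: each 8-bit intensity value v is replaced by 255 − v. A minimal sketch in Python with NumPy (not the authors' code) illustrates the transformation:

```python
import numpy as np

def to_negative(image):
    """Return the negative of an 8-bit image: brightness is reversed,
    so the lightest pixels become the darkest and vice versa."""
    image = np.asarray(image, dtype=np.uint8)
    return 255 - image

# Toy 2x2 grayscale image: black, dark gray, mid gray, white
img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
neg = to_negative(img)
print(neg)  # [[255 191], [127 0]]

# Applying the complement twice recovers the original image
assert np.array_equal(to_negative(neg), img)
```

For color images the same complement is applied independently to each RGB channel, so the semantic content (shapes, edges, relative contrast) is fully preserved while the raw pixel statistics are shifted far from the training distribution.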
- Hosseini, Hossein & Radha Poovendran (2017): Deep Neural Networks Do Not Recognize Negative Images, at: arXiv:1703.06857 [cs.CV]