3D selfies created using a convolutional neural network

3D selfie

University of Nottingham and Kingston University collaboration

Computer scientists at the University of Nottingham and Kingston University have developed technology capable of producing a 3D facial reconstruction from a single 2D image. The work, "Large Pose 3D Face Reconstruction from a Single Image via Direct Volumetric CNN Regression", was led by PhD students Aaron Jackson and Adrian Bulat in the Computer Vision Laboratory in the School of Computer Science, in collaboration with their supervisor Georgios (Yorgos) Tzimiropoulos and with Vasileios Argyriou from the School of Computer Science and Mathematics at Kingston University.

The technology is built on a convolutional neural network trained on 80,000 faces to regress the full 3D facial geometry directly from a single 2D image.
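The key idea of direct volumetric regression is that the network outputs a 3D voxel occupancy volume aligned with the input image, from which the facial surface can be extracted. The sketch below illustrates only that output representation; the shapes, the stand-in "network" (a fabricated half-ellipsoid), and the surface-extraction step are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical shapes for illustration: a 192x192 RGB input and a
# 64x192x192 occupancy volume output (depth x height x width).
def predict_volume(image: np.ndarray) -> np.ndarray:
    """Stand-in for the trained CNN: maps a 2D image to a voxel
    occupancy volume. Here we fabricate a half-ellipsoid so the
    surface-extraction step below has something to work on."""
    d, h, w = 64, image.shape[0], image.shape[1]
    zz, yy, xx = np.meshgrid(np.linspace(-1, 1, d),
                             np.linspace(-1, 1, h),
                             np.linspace(-1, 1, w), indexing="ij")
    return ((xx**2 + yy**2 + (zz + 1) ** 2) < 1.0).astype(np.float32)

def surface_depth(volume: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Recover a per-pixel depth map: the index of the first occupied
    voxel along the depth axis (the visible facial surface)."""
    occupied = volume > threshold
    depth = occupied.argmax(axis=0)     # first True along the depth axis
    depth[~occupied.any(axis=0)] = -1   # mark background pixels
    return depth

image = np.zeros((192, 192, 3), dtype=np.float32)
vol = predict_volume(image)
depth = surface_depth(vol)
print(vol.shape, depth.shape)  # (64, 192, 192) (192, 192)
```

Because the volume is spatially aligned with the image, recovering geometry reduces to reading off the occupied voxels pixel by pixel, rather than fitting a parametric face model.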

Convolutional networks are inspired by biological processes: the connectivity pattern between their neurons resembles the organization of the animal visual cortex. Individual cortical neurons respond to stimuli only in a restricted region of the visual field known as the receptive field, and the receptive fields of different neurons partially overlap so that together they cover the entire visual field.
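The receptive-field idea can be made concrete with a minimal sketch: stacking two 3x3 convolutions lets each second-layer unit "see" a 5x5 patch of the input. The naive convolution below is an illustrative assumption for demonstration, not code from the project.

```python
import numpy as np

def conv2d(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Naive 'valid' 2D cross-correlation (no padding, stride 1)."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

# Two stacked 3x3 layers: perturb a single input pixel and see how
# far its influence spreads in the second layer's output.
k = np.ones((3, 3))
x = np.zeros((9, 9))
x[4, 4] = 1.0                   # a single active input pixel
y = conv2d(conv2d(x, k), k)     # 9x9 -> 7x7 -> 5x5
affected = np.argwhere(y != 0)
print(affected.min(0), affected.max(0))  # [0 0] [4 4]
```

Every one of the 5x5 second-layer outputs is affected by that one pixel, showing how stacking small kernels grows the effective receptive field while the fields of neighboring units overlap.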

The results will be presented at the International Conference on Computer Vision (ICCV) 2017 in Venice next month, where the Visage Technologies team will also be participating.