Virtual assistants are gaining traction as we welcome them into many areas of our lives. However, they still often feel like speaking to a machine – empty and transactional. Giving them a familiar, friendly face, along with a degree of emotional intelligence, can make such interactions feel warmer and more natural. With that in mind, Digital Domain created Douglas – a photorealistic, autonomous digital human designed to break down the barriers in human-machine interactions.
About Digital Domain
Digital Domain has been creating experiences that entertain, inform and inspire since 1993. The company has brought artistry and technology to iconic films including “Titanic,” “Ready Player One,” “Avengers: Infinity War,” and more. Throughout the last quarter of a century, the studio has grown to lead the visual effects industry, expanding globally into digital humans, virtual production, previsualization and virtual reality, and adding commercials, game cinematics, and episodics to the robust film roster.
Creating a smart digital human
To hold conversations that feel natural and easy, Digital Domain’s digital human had to replicate senses that come naturally to humans. More precisely, it had to identify individual people, determine their position relative to the camera, read their emotions, and know whether they were speaking. In other words, it needed a fast and accurate facial identification and tracking system.
visage|SDK proved to be the flexible, reliable and user-friendly solution Digital Domain needed to develop such a system and tailor it to their specific needs. Accurate data, combined with speed and a lightweight footprint, provided the crucial basis for quality human-machine interactions.
visage|SDK helps Douglas detect faces and determine whether someone is facing the camera. It also computes the 3D position and orientation of a user’s head to determine whether they are looking at Douglas, which helps him pinpoint the right moment to start a conversation.
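To make the idea concrete, the engagement decision can be sketched as a simple check on the tracked head pose and distance. The function names, thresholds, and the flat (translation, yaw, pitch) inputs below are assumptions for illustration, not visage|SDK’s actual API:

```python
import math

# Illustrative sketch: decide whether a tracked user is "looking at Douglas"
# from their head translation (metres) and rotation (radians). Thresholds
# are assumed values, not taken from Digital Domain's implementation.

def is_looking_at_camera(yaw, pitch, max_angle_deg=15.0):
    """True if the head is oriented within max_angle_deg of the camera axis."""
    limit = math.radians(max_angle_deg)
    return abs(yaw) <= limit and abs(pitch) <= limit

def should_start_conversation(face_translation, yaw, pitch, max_distance=1.5):
    """Engage only when the user is close enough and facing the display."""
    x, y, z = face_translation
    distance = math.sqrt(x * x + y * y + z * z)
    return distance <= max_distance and is_looking_at_camera(yaw, pitch)
```

In practice the tracker would feed this check every frame, so brief glances away do not immediately end an interaction.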
Furthermore, visage|SDK assigns identification numbers to faces. This way, if a new person introduces themselves to Douglas, he can remember who they are later. This leads to engaging, more interpersonal conversations.
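Conceptually, this identity assignment amounts to matching each face descriptor against a gallery of known faces and enrolling a new ID when no match is found. The registry class, similarity threshold, and descriptor format below are assumptions for illustration; visage|SDK’s face-recognition module provides the descriptors themselves:

```python
import math

# Illustrative identity registry: match a face descriptor against known
# faces by cosine similarity, or enrol it under a new ID. The threshold
# and matching strategy are assumed for demonstration.

class FaceRegistry:
    def __init__(self, threshold=0.6):
        self.threshold = threshold
        self.known = {}      # face_id -> descriptor
        self.next_id = 0

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    def identify(self, descriptor):
        """Return the ID of the best match, or enrol a new identity."""
        best_id, best_sim = None, self.threshold
        for face_id, known in self.known.items():
            sim = self._cosine(descriptor, known)
            if sim > best_sim:
                best_id, best_sim = face_id, sim
        if best_id is None:
            best_id = self.next_id
            self.next_id += 1
            self.known[best_id] = descriptor
        return best_id
```

Keyed on these stable IDs, Douglas can attach conversation history to a person rather than to a single camera session.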
Finally, Digital Domain plans to use FaceAnalysis to determine how users are feeling. This will bring emotionally intelligent responses to real-time interactions, making Douglas’ exchanges with humans as authentic as possible.
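One simple way to turn an emotion estimate into a response cue is to pick the dominant emotion from a probability vector and fall back to neutral when confidence is low. The label set, ordering, and confidence cutoff below are assumptions for illustration, not FaceAnalysis output definitions:

```python
# Illustrative only: map an emotion-probability vector to a single label.
# The label list and the 0.4 confidence floor are assumed values.

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

def dominant_emotion(probabilities, min_confidence=0.4):
    """Return the most likely emotion label, or 'neutral' if no estimate is confident."""
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    if probabilities[best] < min_confidence:
        return "neutral"
    return EMOTIONS[best]
```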
The future of human-machine interaction
By focusing on language processing, expressions, vision tracking, and more, Douglas can do everything from leading conversations to remembering people.
For example, let’s imagine that Douglas is installed at a kiosk. He could detect anyone who approaches the kiosk, determine whether they are looking at him, and start interacting with them. He would also know whether the user has walked away and how many people are speaking to him at any given time. Finally, Douglas would be able to recognize people he has already spoken with and access his memory of previous conversations with them.
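The kiosk scenario above can be modelled as a small per-frame state machine driven by the tracker’s output. The states, class names, and transition rules here are hypothetical, intended only to show the shape of such a loop:

```python
from enum import Enum

# Hypothetical kiosk interaction loop: each frame, the tracker reports which
# face IDs are visible and which of them are attentive (facing the display).
# States and transitions are illustrative, not Digital Domain's design.

class State(Enum):
    IDLE = "idle"
    ENGAGED = "engaged"

class Kiosk:
    def __init__(self):
        self.state = State.IDLE
        self.current_user = None

    def update(self, visible_ids, attentive_ids):
        """Advance the interaction given the faces seen this frame."""
        if self.state is State.IDLE and attentive_ids:
            self.current_user = attentive_ids[0]   # greet the first attentive user
            self.state = State.ENGAGED
        elif self.state is State.ENGAGED and self.current_user not in visible_ids:
            self.current_user = None               # user walked away
            self.state = State.IDLE
        return self.state
```

Because the loop keys on persistent face IDs, a returning user can be greeted by name and their earlier conversation restored.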
Once complete, this technology could easily be deployed both in the physical world and online. It could be especially helpful to companies that need a virtual avatar to answer questions or help customers with repetitive tasks.
In all these cases, people would be interacting with the vastness of the internet but through a relatable, human-like interface. Douglas brings an essential visual component to human-machine interaction and Visage Technologies’ technology remains an important part of Digital Domain’s Autonomous Digital Human roadmap.
The data generated by visage|SDK is extremely reliable compared to its competitors. visage|SDK enabled us not only to retrieve the transform and facial data points, but also to use its facial recognition capability for a more interpersonal conversation with Douglas that other solutions didn’t offer.
visage|SDK is also user-friendly and capable of going in many directions, depending on the engineer’s creative vision, and can be integrated into other software and vice versa.
Chad Reddick – Unreal Engine Engineer