
Researchers at the University of Michigan and Netease Fuxi AI Lab Introduce 'MeInGame': A Deep Learning Technique To Automatically Create a Game Character Face from a Single Portrait


A team of researchers from Netease Fuxi AI Lab and the University of Michigan recently created a deep learning technique called ‘MeInGame’ that can automatically generate character faces by analyzing a single portrait of a person’s face. 

In the past few years, many computer scientists and developers have tried to make gaming experiences more immersive, engaging, and realistic. Several deep-learning-based 3D face reconstruction methods have been proposed, but few of them have found their way into games. Existing character customization systems require players to manually adjust facial attributes to obtain the desired look, and many offer only a limited range of facial shapes and textures.

Lately, some developers have also proposed methods that automatically customize a character's face by analyzing images of real people. These methods, however, are often impractical and do not consistently reproduce the faces they interpret in a realistic way.


To address these limitations, the researchers propose an automatic character face creation method that predicts both facial shape and texture from a single portrait and can be integrated into most existing 3D games.

Some automatic character customization systems are based on computational techniques known as 3D morphable face models (3DMMs), and a few of these can reproduce a person's facial features with reasonable accuracy. However, the way 3DMMs represent geometry and spatial relations often differs from the meshes used in most 3D video games.
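For readers unfamiliar with 3DMMs, the sketch below shows the core idea in NumPy: a face mesh is expressed as a mean shape plus a linear combination of learned identity and expression bases, with a regressor (typically a CNN) predicting the coefficients from a photo. The vertex count, basis sizes, and random bases are illustrative placeholders, not values from the MeInGame paper.

```python
import numpy as np

# Minimal 3DMM sketch: a face mesh is a mean shape plus linear
# combinations of learned PCA bases. All sizes below are illustrative
# placeholders, not taken from the MeInGame implementation.
N_VERTS = 35709          # illustrative vertex count
N_ID, N_EXP = 80, 64     # identity / expression coefficient counts

mean_shape = np.zeros(3 * N_VERTS)                # mean face, flattened (x, y, z)
id_basis = np.random.randn(3 * N_VERTS, N_ID)     # identity basis (placeholder)
exp_basis = np.random.randn(3 * N_VERTS, N_EXP)   # expression basis (placeholder)

def reconstruct(alpha_id: np.ndarray, alpha_exp: np.ndarray) -> np.ndarray:
    """Return an (N_VERTS, 3) mesh from identity/expression coefficients."""
    shape = mean_shape + id_basis @ alpha_id + exp_basis @ alpha_exp
    return shape.reshape(-1, 3)

# In practice a CNN regresses the coefficients from a photo; here we
# just sample them to demonstrate the reconstruction step.
mesh = reconstruct(np.random.randn(N_ID) * 0.1, np.random.randn(N_EXP) * 0.1)
print(mesh.shape)  # (35709, 3)
```

Because this vertex layout is dictated by the 3DMM, the resulting mesh generally does not match a game engine's own head topology, which is the mismatch the paragraph above describes.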

To reproduce the texture of a person's face reliably, 3DMMs need to be trained on large datasets of face images and corresponding texture data. Compiling such datasets is time-consuming, and they often lack authentic pictures collected in the wild, which degrades a model's performance when it is presented with new data.

MeInGame model overview. Source: https://arxiv.org/pdf/2102.02371.pdf

The team trained their technique on a dataset of images captured in the wild. They first reconstructed a 3D face from the input photo based on a 3D morphable face model (3DMM) and convolutional neural networks (CNNs), then transferred the shape of the reconstructed face to the game's template mesh. Their network takes the face photo and the unwrapped coarse UV texture map as input and predicts lighting coefficients and a refined texture map.
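To make that pipeline concrete, here is a minimal PyTorch sketch of the texture-refinement stage: the portrait and the coarse UV texture are stacked channel-wise, and the network predicts a refined texture map plus a lighting vector. The encoder/decoder layout and the 27-dimensional lighting output (nine spherical-harmonics bands per RGB channel, a common convention) are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class TextureRefiner(nn.Module):
    """Hypothetical sketch: refines a coarse UV texture and predicts lighting.
    Layer sizes and the 27-dim lighting head are assumptions, not the
    MeInGame authors' exact design."""

    def __init__(self):
        super().__init__()
        # Photo (3 ch) and coarse UV texture (3 ch) stacked channel-wise.
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Decode back to a full-resolution refined texture map.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        # Global head for lighting coefficients (assumed 9 SH bands x RGB).
        self.light_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 27)
        )

    def forward(self, photo, coarse_uv):
        feats = self.encoder(torch.cat([photo, coarse_uv], dim=1))
        return self.decoder(feats), self.light_head(feats)

refiner = TextureRefiner()
photo = torch.rand(1, 3, 256, 256)       # input portrait
coarse_uv = torch.rand(1, 3, 256, 256)   # unwrapped coarse UV texture
refined_tex, light_coeffs = refiner(photo, coarse_uv)
print(refined_tex.shape, light_coeffs.shape)  # (1, 3, 256, 256) (1, 27)
```

Predicting lighting separately from texture in this way is what allows illumination baked into the photo to be factored out of the final game texture.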


The team carried out a series of experiments to evaluate their technique, comparing the quality of the game characters it generated with character faces produced by existing state-of-the-art (SOTA) methods for automatic character customization. The proposed approach significantly outperformed the state-of-the-art techniques used in games, generating character faces that closely resemble those in the input images. The method produces detailed, vivid game characters that look like the input picture while eliminating the effects of lighting and occlusions.


Although the method achieves high accuracy on both quantitative and qualitative metrics, it fails to produce reliable results under heavy occlusions (such as a hat), because the renderer cannot model the shadows cast by objects outside the head mesh. Nevertheless, the method could soon be integrated into several 3D video games, enabling the automatic creation of characters that closely resemble real people.

Paper: https://arxiv.org/pdf/2102.02371.pdf

GitHub: https://github.com/FuxiCV/

