Mesh representation matters: investigating the influence of different mesh features on perceptual and spatial fidelity of deep 3D morphable models

Authors: KOSK, R., SOUTHERN, R., YOU, L., BIAN, S., KOKKE, W. and MAGUIRE, G.

Journal: Virtual Reality and Intelligent Hardware

Volume: 6

Issue: 5

Pages: 383-395

eISSN: 2666-1209

ISSN: 2096-5796

DOI: 10.1016/j.vrih.2024.08.006

Abstract:

Background: Deep 3D morphable models (deep 3DMMs) play an essential role in computer vision. They are used in facial synthesis, compression, reconstruction and animation, avatar creation, virtual try-on, facial recognition systems, and medical imaging. These applications require high spatial and perceptual quality of the synthesised meshes. Despite their significance, deep 3DMMs have not been compared across different mesh representations and evaluated jointly with point-wise distance and perceptual metrics.

Methods: We compare the influence of different mesh representation features on the spatial and perceptual fidelity of meshes reconstructed by various deep 3DMMs. The paper confirms the hypothesis that building deep 3DMMs from meshes encoded with global representations leads to lower spatial reconstruction error, measured with L1- and L2-norm metrics, but to weaker performance on perceptual metrics. In contrast, differential mesh representations, which describe differential surface properties, yield lower (better) perceptual FMPD and DAME scores but higher spatial fidelity error. The influence of mesh feature normalisation and standardisation is also compared and analysed from the perceptual and spatial fidelity perspectives.

Results: The results provide guidance for selecting mesh representations when building deep 3DMMs according to spatial or perceptual quality objectives, and they propose combinations of mesh representations and deep 3DMMs that improve either the perceptual or the spatial fidelity of existing methods.
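For readers unfamiliar with the metrics and preprocessing steps named above, the following is a minimal sketch, not taken from the paper, of how per-vertex L1 and L2 reconstruction errors and the normalisation/standardisation variants of mesh features could be computed with NumPy; the array shapes and function names are illustrative assumptions.

import numpy as np

def l1_error(pred, gt):
    # Mean absolute per-vertex error; pred and gt are (N, 3) vertex arrays.
    return np.mean(np.abs(pred - gt))

def l2_error(pred, gt):
    # Mean Euclidean (L2-norm) distance between corresponding vertices.
    return np.mean(np.linalg.norm(pred - gt, axis=1))

def minmax_normalise(features):
    # Scale each feature channel to [0, 1] using training-set extrema.
    fmin, fmax = features.min(axis=0), features.max(axis=0)
    return (features - fmin) / (fmax - fmin + 1e-8)

def standardise(features):
    # Zero-mean, unit-variance scaling per feature channel.
    mu, sigma = features.mean(axis=0), features.std(axis=0)
    return (features - mu) / (sigma + 1e-8)

The perceptual metrics FMPD and DAME referenced in the abstract are separate, published mesh-quality measures and are not reproduced here.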

Source: Scopus
