Joint pluralistic generation and realistic inpainting of occluded facial images
Authors: Shi, Y., Huang, D., Liu, J., Qu, J., Tang, W.
Journal: Pattern Recognition
Publication Date: 01/11/2026
Volume: 179
ISSN: 0031-3203
DOI: 10.1016/j.patcog.2026.113670
Abstract: Existing public datasets lack paired occluded and unoccluded facial images, preventing current inpainting methods from effectively restoring faces occluded by real-world objects such as eyeglasses, hats, and respirators; this significantly degrades the performance of downstream tasks such as face recognition, security systems, virtual try-on, and social media applications. To address this challenge, we propose a landmark-based facial image generation method and construct a new facial occlusion dataset. CNN-based inpainting methods employ spatially invariant kernels that learn a deterministic one-to-one mapping, making them unable to generate multiple diverse and realistic results in occluded areas. We therefore propose PGRINet, a unified multi-branch image inpainting framework consisting of a pluralistic generation network, a coarse deterministic inpainting network, and a fine deterministic inpainting network, which simultaneously supports pluralistic generation and highly realistic deterministic inpainting. Moreover, to address the limitations of state-of-the-art methods in reconstructing large and complex facial occlusions, we propose a transformer-based contextual attention module (TCAM) that enhances long-range dependency modeling. Extensive experiments demonstrate that PGRINet not only preserves high restoration fidelity in severely occluded regions but also generates diverse, semantically consistent results, significantly outperforming state-of-the-art deterministic and pluralistic inpainting methods. PGRINet is further validated on real-world occluded facial images, demonstrating strong robustness and generalization in handling complex and varied occlusions. Our source code and dataset are available at https://github.com/sys706/PGRINet.
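The abstract credits TCAM with long-range dependency modeling, i.e. letting occluded regions draw on distant visible context rather than only nearby pixels, as convolutional kernels do. The paper's actual module is not described in this record; the following is a minimal, illustrative NumPy sketch of the general idea of contextual attention over image patches, where occluded patches attend only to visible ones. The function name, shapes, and masking scheme are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def contextual_attention(features, occluded_mask):
    """Single-head scaled dot-product attention in which every patch
    (including occluded ones) attends only to visible patches.

    features:      (N, D) array of patch embeddings
    occluded_mask: (N,) boolean, True where a patch is occluded
    Returns:       (N, D) array of context-filled patch features
    """
    N, D = features.shape
    q = features                      # queries: all patches
    k = features[~occluded_mask]      # keys: visible patches only
    v = features[~occluded_mask]      # values: visible patches only
    scores = q @ k.T / np.sqrt(D)     # (N, N_visible) similarities
    attn = softmax(scores, axis=-1)   # weights over visible context
    return attn @ v                   # convex combination of visible features
```

Because the attention weights form a convex combination over visible patches, each filled patch is a global mixture of unoccluded context, which is the long-range behavior a spatially invariant convolution cannot provide.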
Source: Scopus