Luminance domain-guided low-light image enhancement
Authors: Li, Y., Wang, C., Liang, B., Cai, F. and Ding, Y.
Journal: Neural Computing and Applications
Publisher: Springer Nature
ISSN: 0941-0643
DOI: 10.1007/s00521-024-09687-x
Abstract: Images captured under low-light conditions often suffer from low contrast, high noise, and uneven brightness caused by night lighting, backlighting, and shadows. These problems make such images difficult to use as high-quality inputs for downstream visual tasks. Existing low-light enhancement methods tend to brighten the whole image, which can overexpose regions that were already well lit. To address this, this paper proposes an Uneven Dark Vision Network (UDVN) consisting of two sub-networks. The Luminance Domain Network (LDN) uses a Direction-aware Spatial Context (DSC) module and a Feature Enhancement Module (FEM) to segment the image into regions of different illumination and output a luminance domain mask. Guided by this mask, the Light Enhancement Network (LEN) uses Cross-Domain Transformation Residual (CDTR) blocks to adaptively apply different amounts of illumination to each region. We also introduce a new region loss function that constrains the LEN to better enhance the quality of differently lit regions. In addition, we construct a new low-light synthesis dataset (UDL) that is larger, more diverse, and covers the uneven lighting conditions found in the real world. Extensive experiments on several benchmark datasets demonstrate that the proposed method is highly competitive with state-of-the-art (SOTA) methods; in particular, it outperforms other methods in light recovery and detail preservation when processing unevenly lit low-light images. The UDL dataset is publicly available at: https://github.com/YuhangLi-li/UDVN.
Source: Manual
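To make the two-stage, mask-guided design described in the abstract concrete, below is a minimal PyTorch sketch of that pipeline. It is not the authors' released code: only the names LDN, LEN, and the idea of a luminance-domain mask and a region loss come from the abstract; the layer choices, channel widths, the plain-convolution stand-ins for the DSC/FEM and CDTR modules, and the masked-L1 form of the region loss are all assumptions made for illustration.

```python
# Hypothetical sketch of the UDVN two-stage pipeline (not the paper's code).
import torch
import torch.nn as nn


class LDN(nn.Module):
    """Luminance Domain Network: predicts a soft luminance-domain mask.
    The paper's DSC and FEM modules are replaced with plain convolutions,
    and the multi-region mask is simplified to a single channel."""

    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return torch.sigmoid(self.body(x))  # soft mask in [0, 1]


class LEN(nn.Module):
    """Light Enhancement Network: enhances the image conditioned on the
    luminance-domain mask (concatenated as an extra input channel).
    The paper's CDTR blocks are approximated by one residual conv block."""

    def __init__(self, ch=32):
        super().__init__()
        self.head = nn.Conv2d(4, ch, 3, padding=1)
        self.res = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        self.tail = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x, mask):
        f = self.head(torch.cat([x, mask], dim=1))
        f = f + self.res(f)                      # residual refinement
        return torch.clamp(x + self.tail(f), 0.0, 1.0)


def region_loss(pred, target, mask, dark_weight=2.0):
    """One plausible reading of the abstract's region loss: weight the
    reconstruction error per region so dark areas (low mask values) are
    penalized more heavily than already-bright ones. Assumed form."""
    w = dark_weight * (1.0 - mask) + mask        # heavier weight where dark
    return (w * (pred - target).abs()).mean()


if __name__ == "__main__":
    x = torch.rand(1, 3, 64, 64)                 # unevenly lit low-light input
    gt = torch.rand(1, 3, 64, 64)                # normal-light reference
    ldn, enhancer = LDN(), LEN()
    mask = ldn(x)                                # stage 1: luminance mask
    out = enhancer(x, mask)                      # stage 2: guided enhancement
    print(out.shape, region_loss(out, gt, mask).item())
```

The key design point the sketch preserves is that the mask, not the enhancer, decides where light is added: because the mask enters both the LEN input and the loss weighting, already-bright regions receive a small correction while dark regions are pushed harder, which is how the paper avoids the overexposure failure mode it criticizes in whole-image brightening methods.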