Colorization of fusion image of infrared and visible images based on parallel generative adversarial network approach

Authors: Chen, L., Han, J. and Tian, F.

Journal: Journal of Intelligent & Fuzzy Systems

Volume: 41

Issue: 1

Pages: 2255-2264

eISSN: 1875-8967

ISSN: 1064-1246

DOI: 10.3233/JIFS-210987

Abstract:

Fusing infrared (IR) and visible images has many advantages and can be applied to tasks such as target detection and recognition. Colors can provide more accurate and distinct features, but the low resolution and low contrast of fused images make colorization a challenging task. In this paper, we propose a method based on parallel generative adversarial networks (GANs) to address this challenge. We use the IR image, the visible image and the fusion image as the ground truth for the 'L', 'a' and 'b' channels of the Lab color model. Through the parallel GANs, we obtain Lab data that can be converted to an RGB image. We adopt the TNO and RoadScene data sets to verify our method, and compare it on five objective evaluation metrics against three other deep learning (DL) based methods. The results demonstrate that the proposed approach achieves better performance than state-of-the-art methods.
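The pipeline described in the abstract ends by converting the predicted Lab channels into an RGB image. As a minimal sketch of that final step only (not the authors' implementation, whose code is not given here), the following NumPy function applies the standard CIELAB-to-sRGB conversion with a D65 white point:

```python
import numpy as np

def lab_to_rgb(lab):
    """Convert a CIELAB image of shape (H, W, 3) to sRGB in [0, 1].
    L is in [0, 100]; a and b are roughly in [-128, 127]."""
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]

    # Lab -> XYZ (inverse of the CIE f() function)
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0

    def f_inv(t):
        t3 = t ** 3
        return np.where(t3 > 0.008856, t3, (t - 16.0 / 116.0) / 7.787)

    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 reference white
    xyz = np.stack([xn * f_inv(fx), yn * f_inv(fy), zn * f_inv(fz)], axis=-1)

    # XYZ -> linear sRGB via the standard matrix
    M = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])
    rgb = np.clip(xyz @ M.T, 0.0, 1.0)

    # sRGB gamma (companding)
    return np.where(rgb <= 0.0031308,
                    12.92 * rgb,
                    1.055 * rgb ** (1.0 / 2.4) - 0.055)
```

In the paper's setting, the three parallel GAN outputs would be stacked along the last axis to form the Lab array before this conversion; libraries such as scikit-image or OpenCV provide equivalent built-in conversions.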

https://eprints.bournemouth.ac.uk/35970/

Source: Scopus


Source: Web of Science (Lite)


Source: BURO EPrints