Perceptual Adversarial Networks With a Feature Pyramid for Image Translation

Authors: Li, Z., Wu, M., Zheng, J. and Yu, H.

Journal: IEEE Computer Graphics and Applications

Volume: 39

Issue: 4

Pages: 68-77

eISSN: 1558-1756

ISSN: 0272-1716

DOI: 10.1109/MCG.2019.2914426

Abstract:

This paper investigates the image-to-image translation problem, in which an input image is translated into a synthetic counterpart while its original structure and semantics are preserved. Widely used methods compute a pixel-wise MSE loss, which is often inadequate for high-frequency content and tends to produce overly smooth results. Concurrent works leverage recent advances in conditional generative adversarial networks (cGANs) to offer a universal approach to diverse image translation tasks that traditionally require task-specific loss functions. Despite their impressive results, most of these approaches are notoriously unstable to train and tend to produce blurry outputs. In this paper, we decompose the image into a set of bandpass images with a feature pyramid and design a separate loss component for each band. The overall perceptual adversarial loss captures not only the semantic features but also the appearance.
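The per-band loss idea described in the abstract can be illustrated with a minimal sketch, assuming a Laplacian-style pyramid built from average pooling and bilinear upsampling with an L1 penalty per band; the pyramid depth, band weights, and tensor shapes below are placeholders for demonstration and do not reproduce the authors' implementation.

    # Illustrative sketch only (assumption, not the paper's code): split an image into
    # bandpass components plus a low-pass residual and compute a separate L1 term per band.
    import torch
    import torch.nn.functional as F

    def laplacian_pyramid(img: torch.Tensor, levels: int = 3):
        """Decompose a batch (N, C, H, W) into bandpass images plus a coarse residual."""
        bands, current = [], img
        for _ in range(levels):
            down = F.avg_pool2d(current, kernel_size=2)            # low-pass and downsample
            up = F.interpolate(down, size=current.shape[-2:],
                               mode="bilinear", align_corners=False)
            bands.append(current - up)                             # bandpass detail at this scale
            current = down
        bands.append(current)                                      # low-frequency residual
        return bands

    def pyramid_loss(fake: torch.Tensor, real: torch.Tensor,
                     weights=(1.0, 1.0, 1.0, 1.0)):
        """Sum of per-band L1 losses between generated and target images."""
        loss = torch.zeros(())
        for w, fb, rb in zip(weights, laplacian_pyramid(fake), laplacian_pyramid(real)):
            loss = loss + w * F.l1_loss(fb, rb)
        return loss

    if __name__ == "__main__":
        fake = torch.rand(2, 3, 64, 64)   # stand-in generator output
        real = torch.rand(2, 3, 64, 64)   # stand-in ground-truth target
        print(pyramid_loss(fake, real).item())

In the paper's setting, such per-band terms would be combined with a cGAN adversarial objective; the weights and number of pyramid levels here are arbitrary choices for the example.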

https://eprints.bournemouth.ac.uk/35805/

Sources: Scopus, PubMed, Europe PubMed Central and BURO EPrints
