Self-supervised blur detection from synthetically blurred scenes

Authors: Alvarez-Gila, A., Galdran, A., Garrote, E. and van de Weijer, J.

Journal: Image and Vision Computing

Volume: 92

ISSN: 0262-8856

DOI: 10.1016/j.imavis.2019.08.008

Abstract:

Blur detection aims at segmenting the blurred areas of a given image. Recent deep learning-based methods approach this problem by learning an end-to-end mapping between the blurred input and a binary mask representing the localization of its blurred areas. Nevertheless, the effectiveness of such deep models is limited due to the scarcity of datasets annotated in terms of blur segmentation, as blur annotation is labor intensive. In this work, we bypass the need for such annotated datasets for end-to-end learning, and instead rely on object proposals and a model for blur generation in order to produce a dataset of synthetically blurred images. This allows us to perform self-supervised learning over the generated image and ground truth blur mask pairs using CNNs, defining a framework that can be employed in purely self-supervised, weakly supervised or semi-supervised configurations. Interestingly, experimental results of such setups over the largest blur segmentation datasets available show that this approach achieves state-of-the-art results in blur segmentation, even without ever observing any real blurred image.

Source: Scopus
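
The abstract outlines a pipeline in which object-proposal regions of sharp images are synthetically blurred, yielding image and ground-truth blur-mask pairs that can supervise a segmentation CNN. The following is a minimal sketch of that idea only; the Gaussian blur model, the choice to blur the proposal region (rather than its complement), and the function name `synthesize_blur_pair` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthesize_blur_pair(image, proposal_mask, sigma=3.0):
    """Create one synthetic training pair (blurred image, blur mask).

    image:          float array of shape (H, W, 3), values in [0, 1]
    proposal_mask:  bool array of shape (H, W), e.g. from an object-proposal method
    sigma:          strength of the (assumed) Gaussian blur model
    """
    # Blur the whole image once (no smoothing across the channel axis),
    # then composite only the proposal region back onto the sharp image.
    blurred = gaussian_filter(image, sigma=(sigma, sigma, 0))
    mask3 = proposal_mask[..., None].astype(image.dtype)
    composite = mask3 * blurred + (1.0 - mask3) * image

    # The mask is known by construction, so it serves as ground truth
    # for training a blur-segmentation network without manual annotation.
    return composite, proposal_mask.astype(np.float32)
```

Because the blur mask is generated rather than annotated, pairs produced this way can be fed directly to a segmentation CNN, which is what enables the purely self-supervised configuration described in the abstract.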

Self-supervised blur detection from synthetically blurred scenes

Authors: Alvarez-Gila, A., Galdran, A., Garrote, E. and van de Weijer, J.

Journal: IMAGE AND VISION COMPUTING

Volume: 92

eISSN: 1872-8138

ISSN: 0262-8856

DOI: 10.1016/j.imavis.2019.08.008

Source: Web of Science (Lite)

Self-supervised blur detection from synthetically blurred scenes.

Authors: Alvarez-Gila, A., Galdran, A., Garrote, E. and Weijer, J.V.D.

Journal: Image Vis. Comput.

Volume: 92

Source: DBLP

Self-supervised blur detection from synthetically blurred scenes

Authors: Alvarez-Gila, A., Galdran, A., Garrote, E. and Weijer, J.V.D.

http://dx.doi.org/10.1016/j.imavis.2019.08.008

Source: arXiv