Pixel and Feature Transfer Fusion for Unsupervised Cross-Dataset Person Reidentification

Authors: Yang, Y., Wang, G., Tiwari, P., Pandey, H.M. and Lei, Z.

Journal: IEEE Transactions on Neural Networks and Learning Systems

eISSN: 2162-2388

ISSN: 2162-237X

DOI: 10.1109/TNNLS.2021.3128269

Abstract:

Unsupervised cross-dataset person reidentification (Re-ID), which aims to transfer knowledge from a labeled source domain to an unlabeled target domain, has recently attracted increasing attention. Two frameworks are common: pixel alignment, which transfers low-level knowledge, and feature alignment, which transfers high-level knowledge. In this article, we propose a novel recurrent autoencoder (RAE) framework that unifies these two kinds of methods and inherits their merits. Specifically, the proposed RAE comprises three modules: a feature-transfer (FT) module, a pixel-transfer (PT) module, and a fusion module. The FT module uses an encoder to map source and target images into a shared feature space in which features are identity-discriminative and the gap between source and target features is reduced. The PT module uses a decoder to reconstruct the original images from their features; images reconstructed from target features are encouraged to adopt the source style, so that low-level knowledge propagates to the target domain. After transferring both high- and low-level knowledge with the two modules above, we design a bilinear pooling layer to fuse the two kinds of knowledge. Extensive experiments on the Market-1501, DukeMTMC-ReID, and MSMT17 datasets show that our method significantly outperforms both pixel-alignment and feature-alignment Re-ID methods and achieves new state-of-the-art results.
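To make the abstract's architecture concrete, the following is a minimal PyTorch sketch of the three modules: an encoder standing in for the FT module, a decoder for the PT module, and a bilinear fusion of the two feature streams. The layer shapes, the low-rank bilinear factorization, the reuse of the encoder on the reconstructed image, and the omission of the recurrent encode-decode loop and all training losses are illustrative assumptions, not the authors' released implementation.

# Minimal sketch of the RAE described in the abstract: an encoder (FT),
# a decoder (PT), and a bilinear fusion layer. Shapes, the low-rank
# bilinear factorization, and the reuse of the encoder on the
# reconstruction are assumptions for illustration only; training losses
# and the recurrent encode-decode loop are omitted.
import torch
import torch.nn as nn


class LowRankBilinearFusion(nn.Module):
    # Approximates bilinear pooling of two feature vectors with a
    # low-rank factorization (elementwise product of two linear
    # projections), keeping the fused descriptor compact.
    def __init__(self, dim_a, dim_b, rank=1024, out_dim=512):
        super().__init__()
        self.proj_a = nn.Linear(dim_a, rank)
        self.proj_b = nn.Linear(dim_b, rank)
        self.out = nn.Linear(rank, out_dim)

    def forward(self, a, b):
        return self.out(self.proj_a(a) * self.proj_b(b))


class RAE(nn.Module):
    def __init__(self, feat_dim=2048, fused_dim=512):
        super().__init__()
        # FT module: maps 3 x 256 x 128 person crops from either domain
        # into a shared, identity-discriminative feature space.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(256, feat_dim),
        )
        # PT module: reconstructs an image from the features; for target
        # inputs, training losses (not shown) would push the
        # reconstruction toward the source style.
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 256 * 8 * 4), nn.ReLU(inplace=True),
            nn.Unflatten(1, (256, 8, 4)),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Tanh(),
        )
        # Fusion module: combines high-level (FT) features with features
        # re-extracted from the style-transferred reconstruction (PT).
        self.fusion = LowRankBilinearFusion(feat_dim, feat_dim, out_dim=fused_dim)

    def forward(self, x):
        f = self.encoder(x)          # high-level knowledge
        recon = self.decoder(f)      # source-style reconstruction
        f_pix = self.encoder(recon)  # low-level knowledge, re-embedded
        fused = self.fusion(f, f_pix)
        return recon, fused          # fused descriptor used for retrieval


if __name__ == "__main__":
    model = RAE()
    crops = torch.randn(4, 3, 256, 128)    # toy batch of person crops
    recon, descriptor = model(crops)
    print(recon.shape, descriptor.shape)   # (4, 3, 256, 128), (4, 512)

In practice the fused descriptor, not the raw encoder output, would serve as the retrieval feature, since it carries both the high-level knowledge from the FT stream and the low-level, source-style knowledge from the PT stream.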

Source: Scopus

Pixel and Feature Transfer Fusion for Unsupervised Cross-Dataset Person Reidentification.

Authors: Yang, Y., Wang, G., Tiwari, P., Pandey, H.M. and Lei, Z.

Journal: IEEE Trans Neural Netw Learn Syst

Volume: PP

eISSN: 2162-2388

DOI: 10.1109/TNNLS.2021.3128269

Source: PubMed

Pixel and Feature Transfer Fusion for Unsupervised Cross-Dataset Person Reidentification

Authors: Yang, Y., Wang, G., Tiwari, P., Pandey, H.M. and Lei, Z.

Journal: IEEE Transactions on Neural Networks and Learning Systems

eISSN: 2162-2388

ISSN: 2162-237X

DOI: 10.1109/TNNLS.2021.3128269

Source: Web of Science (Lite)

Pixel and Feature Transfer Fusion for Unsupervised Cross-Dataset Person Reidentification.

Authors: Yang, Y., Wang, G., Tiwari, P., Pandey, H.M. and Lei, Z.

Journal: IEEE Transactions on Neural Networks and Learning Systems

Volume: PP

eISSN: 2162-2388

ISSN: 2162-237X

DOI: 10.1109/TNNLS.2021.3128269

Source: Europe PubMed Central