Efficient semi-supervised multiple feature fusion with out-of-sample extension for 3D model retrieval

This source is preferred by Xiaosong Yang and Jian Jun Zhang

This data was imported from Scopus:

Authors: Ji, M., Feng, Y., Xiao, J., Zhuang, Y., Yang, X. and Zhang, J.J.

Journal: Neurocomputing

Volume: 169

Pages: 23-33

eISSN: 1872-8286

ISSN: 0925-2312

DOI: 10.1016/j.neucom.2014.12.112

© 2015 Elsevier B.V.

Abstract: Multiple visual features have been proposed and used in 3-dimensional (3D) model retrieval in recent years. Since each visual feature reflects a unique characteristic of the model, the features have unequal discriminative power with respect to a specific category of 3D model, and they are complementary to each other in model representation. Thus, it is beneficial to combine multiple visual features in 3D model retrieval. In light of this, we propose an efficient Semi-supervised Multiple Feature Fusion (SMFF) method for view-based 3D model retrieval. Specifically, we first extract multiple visual features to describe both the local and global appearance characteristics of the 2D projected images generated from 3D models. Then, SMFF is adopted to learn a more compact and discriminative low-dimensional feature representation via multiple feature fusion, using both labeled and unlabeled 3D models. Once the low-dimensional features have been learned, many existing methods such as SVM and KNN can be used in the subsequent retrieval phase. Moreover, an out-of-sample extension of SMFF is provided to compute the low-dimensional features for newly added 3D models in linear time. Experiments on two public 3D model datasets demonstrate that the learned feature representation significantly improves retrieval performance and that the proposed method outperforms competing approaches.
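The abstract's pipeline (extract per-view features, fuse them into a low-dimensional embedding, retrieve by nearest neighbours, and map new models with a linear out-of-sample extension) can be illustrated with a minimal NumPy sketch. Note the caveats: the data, dimensions, and feature names below are hypothetical, and a plain PCA projection on concatenated features stands in for the actual SMFF objective, which additionally exploits label information during fusion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: two visual feature sets (e.g. a local and a
# global descriptor) extracted from the 2D projected views of 100 models.
n_models = 100
feat_local = rng.normal(size=(n_models, 32))
feat_global = rng.normal(size=(n_models, 48))

# Fuse by concatenation; SMFF learns the fused embedding jointly, whereas
# here an unsupervised PCA projection is used purely for illustration.
X = np.hstack([feat_local, feat_global])
mean = X.mean(axis=0)
Xc = X - mean

# Learn a low-dimensional linear projection W: top-k eigenvectors of the
# covariance matrix (a stand-in for the learned SMFF transformation).
k = 10
cov = Xc.T @ Xc / (n_models - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
W = eigvecs[:, np.argsort(eigvals)[::-1][:k]]   # shape (d, k)
Z = Xc @ W                                      # database embeddings, (n, k)

# Out-of-sample extension: a newly added model is mapped with the same
# linear projection, costing O(d * k) time per model.
new_local = rng.normal(size=(1, 32))
new_global = rng.normal(size=(1, 48))
z_new = (np.hstack([new_local, new_global]) - mean) @ W

# Retrieval phase: rank database models by distance in the learned space
# (a KNN-style lookup, as mentioned in the abstract).
dists = np.linalg.norm(Z - z_new, axis=1)
top5 = np.argsort(dists)[:5]
print("top-5 retrieved model indices:", top5)
```

The key property mirrored here is that once the projection is learned, embedding a new model is a single matrix-vector product, so the out-of-sample extension scales linearly rather than requiring the fusion to be re-run.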

The data on this page was last updated at 05:01 on April 20, 2018.