TY - GEN
T1 - Learning discriminative and shareable patches for scene classification
AU - Ni, Shoucheng
AU - Zhang, Qieshi
AU - Kamata, Sei Ichiro
AU - Zhang, Chongyang
N1 - Funding Information:
This work was partly funded by NSFC (No.61571297, No.61527804, No.61420106008) and China National Key Technology R&D Program (No. 2012BAH07B01).
Publisher Copyright:
© 2016 IEEE.
PY - 2016/5/18
Y1 - 2016/5/18
N2 - This paper addresses the problem of scene classification and proposes the learning discriminative and shareable patches (LDSP) method. The main idea of learning discriminative and shareable patches is to discover patches that exhibit both large between-class dissimilarity (discriminative) and large within-class similarity (shareable). A novel and efficient re-clustering, based on the co-occurrence relationship of the first-step clustering, is proposed and conducted to further enhance the visual similarity of patches within each cluster. In order to establish appropriate criteria for selecting the desired patches, a condensed representation of image features called the feature epitome is introduced. For classification, a patch feature derived from a pre-trained convolutional neural network model is investigated. The experimental results outperform existing single-feature methods on the MIT 67 scene benchmark in terms of mean average precision.
AB - This paper addresses the problem of scene classification and proposes the learning discriminative and shareable patches (LDSP) method. The main idea of learning discriminative and shareable patches is to discover patches that exhibit both large between-class dissimilarity (discriminative) and large within-class similarity (shareable). A novel and efficient re-clustering, based on the co-occurrence relationship of the first-step clustering, is proposed and conducted to further enhance the visual similarity of patches within each cluster. In order to establish appropriate criteria for selecting the desired patches, a condensed representation of image features called the feature epitome is introduced. For classification, a patch feature derived from a pre-trained convolutional neural network model is investigated. The experimental results outperform existing single-feature methods on the MIT 67 scene benchmark in terms of mean average precision.
KW - Learning discriminative and shareable patches
KW - deep-learned patch feature
KW - scene classification
UR - http://www.scopus.com/inward/record.url?scp=84973367427&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84973367427&partnerID=8YFLogxK
U2 - 10.1109/ICASSP.2016.7471890
DO - 10.1109/ICASSP.2016.7471890
M3 - Conference contribution
AN - SCOPUS:84973367427
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 1317
EP - 1321
BT - 2016 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2016 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 41st IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2016
Y2 - 20 March 2016 through 25 March 2016
ER -