Joint equal contribution of global and local features for image annotation

Supheakmungkol Sarin, Wataru Kameyama

Research output: Conference article, peer-reviewed

Abstract

Image annotation has become a very important task as the number of photographs has grown explosively. This paper describes our participation in the ImageCLEF Large Scale Visual Concept Detection and Annotation Task 2009 and presents the method used for our best run. Our approach is inspired by a recently proposed method in which the joint equal contribution (JEC) of simple global color and texture features outperforms state-of-the-art annotation techniques [10]. Our idea is that if such simple features perform so well, then a combination of higher-level features should do even better. Studies have shown that the concurrent use of saliency and the gist of the scene is a major trait of the human visual system. Therefore, in this preliminary study, we propose to explore combinations of visual features at the global, local, and scene levels, including global and local color, texture, and gist of the scene. The experiments confirm that higher-level features lead to better performance. Through the experiments, we also found that using 40 nearest neighbors with HSV, HSV (at saliency regions), HAAR, GIST (full scene), and GIST (scene at the center) as features produces the best result. Finally, we identify the weaknesses of our approach and ways in which the system could be optimized and improved.
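The JEC idea referenced above can be illustrated with a minimal sketch: per-feature distances between a query image and database images are scaled to a comparable range and averaged with equal weights, and the nearest neighbors under the combined distance supply candidate labels. The feature names and k = 40 follow the abstract; the data, array shapes, and function names below are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def jec_distances(query_feats, db_feats):
    """Combine per-feature distances with joint equal contribution (JEC):
    each feature's L2 distances are scaled to [0, 1], then averaged."""
    n_db = next(iter(db_feats.values())).shape[0]
    total = np.zeros(n_db)
    for name, q in query_feats.items():
        d = np.linalg.norm(db_feats[name] - q, axis=1)  # L2 distance per feature
        total += d / (d.max() + 1e-12)                  # scale so features contribute equally
    return total / len(query_feats)

# Synthetic example: 200 database images, 16-dim descriptors per feature.
rng = np.random.default_rng(0)
features = ["HSV", "HSV_saliency", "HAAR", "GIST_full", "GIST_center"]
db = {f: rng.random((200, 16)) for f in features}
query = {f: rng.random(16) for f in features}

dist = jec_distances(query, db)
neighbors = np.argsort(dist)[:40]  # 40 nearest neighbors, as reported best in the abstract
```

In a real annotation system, labels would then be transferred from these 40 neighbors (e.g., by voting on their tags); that transfer step is omitted here for brevity.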

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 1175
Publication status: Published - 1 Jan 2009
Event: 2009 Cross Language Evaluation Forum Workshop, CLEF 2009, co-located with the 13th European Conference on Digital Libraries, ECDL 2009 - Corfu, Greece
Duration: 30 Sep 2009 - 2 Oct 2009

ASJC Scopus subject areas

  • Computer Science (General)
