Waseda Meisei at TRECVID 2018: Ad-hoc Video Search

Kazuya Ueki*, Yu Nakagome, Koji Hirakawa, Kotaro Kikuchi, Yoshihiko Hayashi, Tetsuji Ogawa, Tetsunori Kobayashi

*Corresponding author for this work

Research output: Paper, peer-reviewed

2 Citations (Scopus)

Abstract

The Waseda Meisei team participated in the TRECVID 2018 Ad-hoc Video Search (AVS) task [1]. For this year's AVS task, we submitted both manually assisted and fully automatic runs. Our approach focuses on concept-based video retrieval and builds on the same framework as last year. Specifically, it improves on the word-based keyword extraction method used in last year's system, which could neither handle keywords related to motion nor appropriately capture the meaning of phrases or whole sentences in queries. To address these problems, we introduce two new measures: (i) calculating the similarity between the definition of a word and an entire query sentence, and (ii) handling multi-word phrases. Our best manually assisted run achieved a mean average precision (mAP) of 10.6%, the highest among all submitted manually assisted runs. Our best fully automatic run achieved an mAP of 6.0%, which ranked sixth among all participants.
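Measure (i) above matches visual concepts to a query by comparing each concept's dictionary-style definition against the whole query sentence, rather than matching individual query words to concept names. The following is a minimal sketch of that idea, assuming averaged pretrained word vectors as the sentence representation; the placeholder vectors, the toy concept bank, and the `embed` helper are illustrative assumptions and not the authors' actual implementation.

```python
import numpy as np

# Placeholder word vectors (assumption: a real system would load large-scale
# pretrained embeddings; random vectors here only demonstrate the mechanism).
rng = np.random.default_rng(0)
vocab = ["person", "riding", "bicycle", "road", "two-wheeled", "pedal",
         "vehicle", "human", "being", "open", "way", "a", "on", "the", "an"]
vectors = {w: rng.normal(size=50) for w in vocab}

def embed(text):
    """Represent a sentence as the mean of its in-vocabulary word vectors."""
    tokens = [t for t in text.lower().split() if t in vectors]
    if not tokens:
        return np.zeros(50)
    return np.mean([vectors[t] for t in tokens], axis=0)

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is all-zero)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Hypothetical concept bank: concept name -> short definition.
concepts = {
    "bicycling": "riding a two-wheeled pedal vehicle",
    "person": "a human being",
    "road": "an open way for vehicles",
}

query = "a person riding a bicycle on the road"
q_vec = embed(query)

# Score each concept by the similarity between its *definition* and the whole
# query sentence, so a motion-related concept such as "bicycling" can be
# selected even when the concept name itself never appears in the query.
scores = {name: cosine(embed(defn), q_vec) for name, defn in concepts.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:10s} {score:+.3f}")
```

Measure (ii), handling multi-word phrases, would additionally match contiguous n-grams of the query (e.g., "riding a bicycle") against multi-word concept names before falling back to single-word matching; the exact matching rules are not detailed in this abstract.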

Original language: English
Publication status: Published - 2020
Event: 2018 TREC Video Retrieval Evaluation, TRECVID 2018 - Gaithersburg, United States
Duration: 2018 Nov 13 to 2018 Nov 15

Conference

Conference: 2018 TREC Video Retrieval Evaluation, TRECVID 2018
Country/Territory: United States
City: Gaithersburg
Period: 18/11/13 to 18/11/15

ASJC Scopus subject areas

  • Information Systems
  • Signal Processing
  • Electrical and Electronic Engineering
