TY - CONF
T1 - Waseda Meisei at TRECVID 2018
T2 - 2018 TREC Video Retrieval Evaluation, TRECVID 2018
AU - Ueki, Kazuya
AU - Nakagome, Yu
AU - Hirakawa, Koji
AU - Kikuchi, Kotaro
AU - Hayashi, Yoshihiko
AU - Ogawa, Tetsuji
AU - Kobayashi, Tetsunori
N1 - Funding Information:
This work was partially supported by JSPS KAKENHI Grant Numbers 15K00249, 17H01831, and 18K11362, the Kayamori Foundation of Informational Science Advancement, and the Telecommunications Advancement Foundation.
Publisher Copyright:
Copyright © TRECVID 2018. All rights reserved.
PY - 2020
Y1 - 2020
N2 - The Waseda Meisei team participated in the TRECVID 2018 Ad-hoc Video Search (AVS) task [1]. For this year's AVS task, we submitted both manually assisted and fully automatic runs. Our approach focuses on concept-based video retrieval, following the same approach as last year. Specifically, it improves on the word-based keyword extraction method presented in last year's system, which could neither handle keywords related to motion nor appropriately capture the meaning of phrases or whole sentences in queries. To address these problems, we introduce two new measures: (i) calculating the similarity between the definition of a word and an entire query sentence, and (ii) handling multi-word phrases. Our best manually assisted run achieved a mean average precision (mAP) of 10.6%, the highest among all submitted manually assisted runs. Our best fully automatic run achieved an mAP of 6.0%, ranking sixth among all participants.
AB - The Waseda Meisei team participated in the TRECVID 2018 Ad-hoc Video Search (AVS) task [1]. For this year's AVS task, we submitted both manually assisted and fully automatic runs. Our approach focuses on concept-based video retrieval, following the same approach as last year. Specifically, it improves on the word-based keyword extraction method presented in last year's system, which could neither handle keywords related to motion nor appropriately capture the meaning of phrases or whole sentences in queries. To address these problems, we introduce two new measures: (i) calculating the similarity between the definition of a word and an entire query sentence, and (ii) handling multi-word phrases. Our best manually assisted run achieved a mean average precision (mAP) of 10.6%, the highest among all submitted manually assisted runs. Our best fully automatic run achieved an mAP of 6.0%, ranking sixth among all participants.
UR - http://www.scopus.com/inward/record.url?scp=85084956170&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85084956170&partnerID=8YFLogxK
M3 - Paper
AN - SCOPUS:85084956170
Y2 - 13 November 2018 through 15 November 2018
ER -