Waseda Meisei at TRECVID 2018: Ad-hoc video search

Kazuya Ueki*, Yu Nakagome, Koji Hirakawa, Kotaro Kikuchi, Yoshihiko Hayashi, Tetsuji Ogawa, Tetsunori Kobayashi

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

2 Citations (Scopus)

Abstract

The Waseda Meisei team participated in the TRECVID 2018 Ad-hoc Video Search (AVS) task [1]. For this year's AVS task, we submitted both manually assisted and fully automatic runs. Our approach focuses on concept-based video retrieval, following the same approach as last year. Specifically, it improves on the word-based keyword extraction method presented in last year's system, which could neither handle keywords related to motion nor appropriately capture the meaning of phrases or whole sentences in queries. To deal with these problems, we introduce two new measures: (i) calculating the similarity between the definition of a word and the entire query sentence, and (ii) handling multi-word phrases. Our best manually assisted run achieved a mean average precision (mAP) of 10.6%, the highest among all submitted manually assisted runs. Our best fully automatic run achieved an mAP of 6.0%, which ranked sixth among all participants.
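Measure (i) can be illustrated with a short sketch. This is not the authors' implementation: the toy embedding table `WORD_VECTORS`, the concept bank `CONCEPTS`, and the helpers `embed_sentence` and `rank_concepts` are all hypothetical, and averaging word vectors is assumed here only as a common sentence-embedding baseline; the paper describes the actual similarity computation and concept vocabulary.

```python
import numpy as np

# Toy word vectors standing in for a large pretrained embedding table
# (e.g., word2vec). These hand-made 3-d vectors exist only so the
# sketch runs end to end; they are not from the paper.
WORD_VECTORS = {
    "person":   np.array([0.9, 0.1, 0.0]),
    "moving":   np.array([0.1, 0.9, 0.2]),
    "walking":  np.array([0.2, 0.8, 0.1]),
    "water":    np.array([0.0, 0.2, 0.9]),
    "swimming": np.array([0.1, 0.6, 0.7]),
    "a":  np.zeros(3),
    "in": np.zeros(3),
}

# Hypothetical concept bank: visual concept -> dictionary-style definition.
CONCEPTS = {
    "swimming": "a person moving in water",
    "walking":  "a person walking",
}

def embed_sentence(text: str) -> np.ndarray:
    """Average the vectors of in-vocabulary words (a simple sentence embedding)."""
    vecs = [WORD_VECTORS[w] for w in text.lower().split() if w in WORD_VECTORS]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def rank_concepts(query: str):
    """Score each concept by the similarity between its *definition*
    and the entire query sentence, then rank concepts for retrieval."""
    q = embed_sentence(query)
    scores = {c: cosine(embed_sentence(d), q) for c, d in CONCEPTS.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(rank_concepts("a person swimming in water"))
```

Comparing a concept's definition, rather than the concept word alone, against the whole query is what lets motion-related concepts match even when the concept word itself never appears in the query text.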

Original language: English
Publication status: Published - 2020
Event: 2018 TREC Video Retrieval Evaluation, TRECVID 2018 - Gaithersburg, United States
Duration: 2018 Nov 13 - 2018 Nov 15

Conference

Conference: 2018 TREC Video Retrieval Evaluation, TRECVID 2018
Country/Territory: United States
City: Gaithersburg
Period: 18/11/13 - 18/11/15

ASJC Scopus subject areas

  • Information Systems
  • Signal Processing
  • Electrical and Electronic Engineering
