Waseda Meisei at TRECVID 2017: Ad-hoc video search

Kazuya Ueki*, Koji Hirakawa, Kotaro Kikuchi, Tetsuji Ogawa, Tetsunori Kobayashi

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

6 Citations (Scopus)

Abstract

The Waseda Meisei team participated in the TRECVID 2017 Ad-hoc Video Search (AVS) task [1]. For this year's AVS task, we submitted both manually assisted and fully automatic runs. Our approach used the following processing steps: building a large semantic concept bank using pre-trained convolutional neural networks (CNNs) and support vector machines (SVMs), calculating the score of each concept for all test videos (IACC.3), extracting several search keywords, either manually or automatically, from the given query phrases, and combining the corresponding semantic concept scores to obtain the final search result. Our best manually assisted run achieved a mean average precision (mAP) of 21.6%, the highest among all submitted runs. Our best fully automatic run achieved an mAP of 15.9%, the second highest among all participants.
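The abstract describes the pipeline only at a high level, so the following is a minimal Python sketch of the final score-combination step, assuming a precomputed matrix of per-concept detector scores for the test videos and a simple mean-fusion rule. The function and variable names are hypothetical, and the paper itself may use a different fusion strategy.

```python
# Hypothetical sketch of the score-combination step described in the
# abstract: given per-concept scores for every test video, select the
# concepts matching a query's keywords and fuse their scores to rank
# the videos. The fusion rule (a simple mean here) is an assumption;
# the abstract does not specify how the team combined scores.
import numpy as np

def rank_videos(concept_scores: np.ndarray,
                selected_concepts: list[int],
                top_k: int = 1000) -> np.ndarray:
    """Return indices of the top_k videos for one query.

    concept_scores    -- (num_videos, num_concepts) array of CNN/SVM
                         concept-detector outputs for the test set
    selected_concepts -- column indices of the concepts extracted
                         (manually or automatically) from the query
    """
    # Fuse the chosen concept scores per video (mean fusion assumed).
    fused = concept_scores[:, selected_concepts].mean(axis=1)
    # Rank videos by fused score, highest first.
    return np.argsort(fused)[::-1][:top_k]

# Example: 3 videos, 4 concepts; the query maps to concepts 0 and 2.
scores = np.array([[0.9, 0.1, 0.8, 0.2],
                   [0.2, 0.7, 0.3, 0.9],
                   [0.6, 0.4, 0.5, 0.1]])
print(rank_videos(scores, [0, 2], top_k=3))  # -> [0 2 1]
```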

Original language: English
Publication status: Published - 2017
Event: 2017 TREC Video Retrieval Evaluation, TRECVID 2017 - Gaithersburg, United States
Duration: 2017 Nov 13 - 2017 Nov 15

Conference

Conference: 2017 TREC Video Retrieval Evaluation, TRECVID 2017
Country/Territory: United States
City: Gaithersburg
Period: 17/11/13 - 17/11/15

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Information Systems
  • Signal Processing
