Abstract
A lifelog is a set of continuously captured records of our daily activities. The lifelog in this study comprises several types of media data acquired from multiple wearable sensors that capture video, the individual's body motion, biological information, location information, and so on. We propose an integrated technique for processing a lifelog composed of both captured video (called lifelog images) and other sensed data. The proposed technique is based on two models: a space-oriented model and an action-oriented model. Using these two models, we analyze the lifelog images to find representative images in video scenes from both pictorial visual features and the individual's context information, and we represent the individual's life experiences in a semantic, structured form for efficient future retrieval and exploitation. The resulting structured lifelog images were evaluated against a vision-only approach, and the proposed integrated technique produced better results.
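To make the approach concrete, below is a minimal sketch, not the paper's actual algorithm, of how pictorial visual features and sensed context might be combined to select representative frames. It scores each frame by a weighted sum of a color-histogram change and a sensor-context change relative to the last selected frame; all names and parameters (`visual_change`, `context_change`, `alpha`, `threshold`) are hypothetical.

```python
# Hypothetical sketch of multimodal representative-frame selection;
# not the authors' method. Combines a visual change score with a
# sensor-context change score to segment a lifelog video stream.
import numpy as np


def visual_change(hist_a: np.ndarray, hist_b: np.ndarray) -> float:
    """L1 distance between normalized color histograms, in [0, 1]."""
    return float(np.abs(hist_a - hist_b).sum()) / 2.0


def context_change(ctx_a: np.ndarray, ctx_b: np.ndarray) -> float:
    """Euclidean distance between sensor-context vectors
    (e.g., location coordinates, body-motion magnitude)."""
    return float(np.linalg.norm(ctx_a - ctx_b))


def representative_indices(hists, contexts, alpha=0.5, threshold=0.6):
    """Pick frames whose combined visual + context change from the
    previous representative exceeds a threshold, treating each such
    frame as the representative of a new segment."""
    reps = [0]  # the first frame starts the first segment
    for i in range(1, len(hists)):
        j = reps[-1]
        score = (alpha * visual_change(hists[j], hists[i])
                 + (1 - alpha) * context_change(contexts[j], contexts[i]))
        if score > threshold:
            reps.append(i)
    return reps


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins: 100 frames, 32-bin histograms, 3-D context.
    hists = [h / h.sum() for h in rng.random((100, 32))]
    contexts = rng.random((100, 3))
    print(representative_indices(hists, contexts))
```

The point of the weighted sum is that a scene boundary can be detected even when the visual content changes little, for example when the wearer moves to a new location or a new activity while the camera view stays similar.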
Original language | English
---|---
Host publication | Proceedings - 2008 International Conference on Multimedia and Ubiquitous Engineering, MUE 2008
Pages | 160-163
Number of pages | 4
Publication status | Published - 2008
Event | 2008 International Conference on Multimedia and Ubiquitous Engineering, MUE 2008
City | Busan
Period | 2008 Apr 24 → 2008 Apr 26
ASJC Scopus subject areas
- Computer Graphics and Computer-Aided Design
- Computer Science Applications
- Software