Novel scene generation, merging and stitching views using the 2D affine space

Kuntal Sengupta*, Jun Ohya

*Corresponding author of this work

Research output: Article › peer-review

2 Citations (Scopus)

Abstract

In this paper we present a unified theoretical framework for novel scene synthesis, merging real and virtual worlds, and view stitching. We start with a set of real images from weakly calibrated cameras, for which we compute dense point-match correspondences. For applications like novel view synthesis, one may first solve the 3D scene reconstruction problem and then render the new view. However, errors in the 3D reconstruction are usually reflected in the quality of the generated scene, so we seek a more direct method. In this paper, we use the dense point matches and their affine coordinate values to estimate the corresponding affine coordinate values in the new scene. This reprojection technique is extended to other applications such as merging real and synthetic worlds, and view stitching.
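The core idea sketched in the abstract (transferring a point to a new view through its affine coordinates, without explicit 3D reconstruction) can be illustrated with a short sketch. The Python/NumPy code below is only an illustration of the standard affine-camera transfer principle, not the authors' exact algorithm: the function names (affine_coordinates, reproject), the least-squares recovery of the coordinates from two reference views, and the synthetic example data are all assumptions made for this sketch.

```python
# Minimal sketch of affine transfer, assuming an affine camera model:
# a point written as an affine combination of four basis points keeps the
# same coefficients (a, b, c) in every view, so the coefficients recovered
# from the reference views can be reused to place the point in a new view.
import numpy as np


def affine_coordinates(p_views, basis_views):
    """Recover affine coordinates (a, b, c) of one point from its projections.

    p_views     : (V, 2) image position of the point in each reference view.
    basis_views : (V, 4, 2) image positions of the 4 basis points per view.

    In every view, p = b0 + a*(b1-b0) + b*(b2-b0) + c*(b3-b0); stacking the
    views gives an overdetermined linear system solved by least squares.
    """
    A_rows, y_rows = [], []
    for p, basis in zip(p_views, basis_views):
        b0, b1, b2, b3 = basis
        A_rows.append(np.column_stack([b1 - b0, b2 - b0, b3 - b0]))  # (2, 3)
        y_rows.append(p - b0)                                        # (2,)
    A = np.vstack(A_rows)
    y = np.concatenate(y_rows)
    coords, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coords  # (a, b, c)


def reproject(coords, basis_new):
    """Transfer the point into a novel view, given the 4 basis points there."""
    b0, b1, b2, b3 = basis_new
    a, b, c = coords
    return b0 + a * (b1 - b0) + b * (b2 - b0) + c * (b3 - b0)


if __name__ == "__main__":
    # Hypothetical synthetic check: 4 basis points plus 1 query point in 3D,
    # projected by random affine cameras (2 reference views, 1 novel view).
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(5, 3))

    def affine_cam():
        M = rng.normal(size=(2, 3))
        t = rng.normal(size=2)
        return lambda P, M=M, t=t: P @ M.T + t

    cams = [affine_cam() for _ in range(3)]
    views = [cam(X) for cam in cams]

    coords = affine_coordinates(
        p_views=np.array([v[4] for v in views[:2]]),
        basis_views=np.array([v[:4] for v in views[:2]]),
    )
    predicted = reproject(coords, views[2][:4])
    print("transferred:", predicted, " true:", views[2][4])
```

Because the affine coordinates are view-independent under this camera model, the transferred position matches the true projection in the novel view up to numerical noise; in practice, dense correspondences would supply the basis points and query points instead of the synthetic data used here.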

Original language: English
Pages (from-to): 39-53
Number of pages: 15
Journal: Signal Processing: Image Communication
Volume: 14
Issue number: 1-2
Publication status: Published - 6 Nov 1998
Externally published: Yes

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering
