TY - GEN
T1 - 360 Depth Estimation in the Wild - The Depth360 Dataset and the SegFuse Network
AU - Feng, Qi
AU - Shum, Hubert P.H.
AU - Morishima, Shigeo
N1 - Funding Information:
This research was supported by JST-Mirai Program (JPMJMI19B2), JSPS KAKENHI (19H01129, 19H04137, 21H0504) and the Royal Society (IES\R2\181024).
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Single-view depth estimation from omnidirectional images has gained popularity with its wide range of applications such as autonomous driving and scene reconstruction. Although data-driven learning-based methods demonstrate significant potential in this field, scarce training data and ineffective 360 estimation algorithms are still two key limitations hindering accurate estimation across diverse domains. In this work, we first establish a large-scale dataset with varied settings called Depth360 to tackle the training data problem. This is achieved by exploring the use of a plenteous source of data, 360 videos from the internet, using a test-time training method that leverages unique information in each omnidirectional sequence. With novel geometric and temporal constraints, our method generates consistent and convincing depth samples to facilitate single-view estimation. We then propose an end-to-end two-branch multi-task learning network, SegFuse, that mimics the human eye to effectively learn from the dataset and estimate high-quality depth maps from diverse monocular RGB images. With a peripheral branch that uses equirectangular projection for depth estimation and a foveal branch that uses cubemap projection for semantic segmentation, our method predicts consistent global depth while maintaining sharp details at local regions. Experimental results show favorable performance against the state-of-the-art methods.
KW - Artificial intelligence
KW - Computer graphics
KW - Computer vision
KW - Computing methodologies
KW - Image manipulation
KW - Image-based rendering
KW - Reconstruction
UR - http://www.scopus.com/inward/record.url?scp=85129398811&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85129398811&partnerID=8YFLogxK
U2 - 10.1109/VR51125.2022.00087
DO - 10.1109/VR51125.2022.00087
M3 - Conference contribution
AN - SCOPUS:85129398811
T3 - Proceedings - 2022 IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2022
SP - 664
EP - 673
BT - Proceedings - 2022 IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 29th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2022
Y2 - 12 March 2022 through 16 March 2022
ER -