Boosting Omnidirectional Stereo Matching with a Pre-trained Depth Foundation Model

Omnidirectional depth perception is essential for mobile robotics applications that require scene understanding across a full 360° field of view. Camera-based setups offer a cost-effective alternative, using stereo depth estimation to generate dense, high-resolution depth maps without relying on expensive active sensing. However, existing omnidirectional stereo matching approaches achieve only limited depth accuracy across diverse environments, depth ranges, and lighting conditions, largely due to the scarcity of real-world training data. We present DFI-OmniStereo, a novel omnidirectional stereo matching method that incorporates a large-scale pre-trained foundation model for relative monocular depth estimation into an iterative optimization-based stereo matching architecture. A dedicated two-stage training strategy first adapts the relative monocular depth features to omnidirectional stereo matching and then performs scale-invariant fine-tuning. DFI-OmniStereo achieves state-of-the-art results on the real-world Helvipad dataset, reducing disparity MAE by approximately 16% compared to the previous best omnidirectional stereo method.
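
For context, the scale-invariant fine-tuning mentioned above is typically realized with a scale-invariant log loss in the style of Eigen et al. (2014). The sketch below illustrates that loss family; the function name, masking scheme, and weighting term `lam` are illustrative assumptions, not the exact objective used in DFI-OmniStereo.

```python
import torch

def scale_invariant_log_loss(pred, target, mask=None, lam=0.5, eps=1e-6):
    """Scale-invariant log loss (Eigen et al., 2014 style) - a sketch,
    not the exact DFI-OmniStereo objective.

    pred, target: positive depth (or disparity) tensors of shape (B, H, W).
    mask: optional boolean tensor marking valid ground-truth pixels.
    lam: weight of the variance-reducing second term (lam=1 gives full
         scale invariance; lam=0 reduces to a plain log-space MSE).
    """
    if mask is None:
        mask = torch.ones_like(target, dtype=torch.bool)
    # Per-pixel log-space error over valid pixels only.
    d = torch.log(pred.clamp_min(eps)) - torch.log(target.clamp_min(eps))
    d = d[mask]
    n = d.numel()
    # First term penalizes per-pixel error; second term discounts a
    # global scale offset shared by all pixels.
    return (d ** 2).mean() - lam * (d.sum() / n) ** 2
```

Because the second term absorbs a global multiplicative scale, a loss of this form lets features learned from a relative (scale-free) monocular depth model be fine-tuned toward metric stereo targets without being dominated by the unknown scale factor.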

Identifier
Source https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/4557
Metadata Access https://tudatalib.ulb.tu-darmstadt.de/oai/openairedata?verb=GetRecord&metadataPrefix=oai_datacite&identifier=oai:tudatalib.ulb.tu-darmstadt.de:tudatalib/4557
Provenance
Creator Endres, Jannik; Hahn, Oliver; Corbière, Charles; Schaub-Meyer, Simone; Roth, Stefan; Alahi, Alexandre
Publisher TU Darmstadt
Contributor European Commission; TU Darmstadt
Publication Year 2025
Funding Reference European Commission info:eu-repo/grantAgreement/EC/H2020/866008
Rights Apache License 2.0; info:eu-repo/semantics/openAccess
OpenAccess true
Contact https://tudatalib.ulb.tu-darmstadt.de/page/contact
Representation
Language English
Resource Type Software
Format application/zip; application/octet-stream
Discipline Other