CVPR 2023 – Modality-invariant Visual Odometry for Embodied Vision


In this episode we discuss "Modality-invariant Visual Odometry for Embodied Vision" by Marius Memmel, Roman Bachmann, and Amir Zamir. The paper proposes a modality-invariant approach to visual odometry (VO) for embodied vision, which is important for reliable localization in noisy environments. The proposed Transformer-based approach can handle diverse or changing sensor suites of navigation agents and outperforms previous methods. It can also be extended to learn from multiple arbitrary input modalities, such as surface normals, point clouds, or internal measurements, enabling flexible and learned VO models.
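To make the idea concrete, here is a minimal PyTorch sketch of a modality-invariant VO model in the spirit described above: each available modality is patch-embedded into tokens, the tokens are processed by a shared Transformer encoder, and the pooled features regress the relative pose. This is not the authors' implementation; all module names, sizes, and the pose parameterization are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's code) of a multi-modal,
# modality-invariant visual odometry model.
import torch
import torch.nn as nn

class MultiModalVO(nn.Module):
    def __init__(self, modality_channels, dim=256, patch=16, img_size=128):
        super().__init__()
        # One patch-embedding head per modality (e.g. "rgb": 3, "depth": 1).
        self.embed = nn.ModuleDict({
            name: nn.Conv2d(c, dim, kernel_size=patch, stride=patch)
            for name, c in modality_channels.items()
        })
        n_patches = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        # Regress the relative pose change: translation (x, y, z) + yaw.
        self.head = nn.Linear(dim, 4)

    def forward(self, observations):
        # `observations` maps modality name -> tensor of shape (B, C, H, W);
        # any subset of the training-time modalities may be present.
        tokens = []
        for name, x in observations.items():
            t = self.embed[name](x).flatten(2).transpose(1, 2)  # (B, N, dim)
            tokens.append(t + self.pos)
        tokens = torch.cat(tokens, dim=1)          # concatenate all modality tokens
        feats = self.encoder(tokens).mean(dim=1)   # pool over tokens
        return self.head(feats)

model = MultiModalVO({"rgb": 3, "depth": 1})
# Depth can simply be omitted at test time; the shared encoder still runs.
pose = model({"rgb": torch.randn(2, 3, 128, 128)})
print(pose.shape)  # torch.Size([2, 4])
```

Because every modality is mapped into the same token space, dropping or swapping sensors only changes how many tokens the shared encoder sees, which is what makes this style of model robust to diverse or changing sensor suites.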

