CVPR 2023 – OrienterNet: Visual Localization in 2D Public Maps with Neural Matching


In this episode we discuss OrienterNet: Visual Localization in 2D Public Maps with Neural Matching
by Paul-Edouard Sarlin, Daniel DeTone, Tsun-Yi Yang, Armen Avetisyan, Julian Straub, Tomasz Malisiewicz, Samuel Rota Bulò, Richard Newcombe, Peter Kontschieder, Vasileios Balntas. The paper introduces OrienterNet, a deep neural network that localizes an image with sub-meter accuracy using only 2D semantic maps, enabling anyone to localize anywhere such maps are available. OrienterNet estimates the location and orientation of a query image by matching a neural Bird’s-Eye View against open, globally available maps from OpenStreetMap. The network is supervised only by camera poses, yet learns end-to-end to perform semantic matching with a wide range of map elements. The paper also introduces a large crowd-sourced dataset of images captured across 12 cities from the viewpoints of cars, bikes, and pedestrians, used to train the network.
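
The core matching idea discussed in the episode is easy to sketch: infer a feature plane in Bird’s-Eye View from the query image, encode the OpenStreetMap tile into a second feature plane, and score every candidate 3-DoF pose (2D position plus heading) by correlating the two. Below is a minimal, illustrative PyTorch sketch of that exhaustive matching step; the function and variable names (`pose_score_volume`, `bev_feats`, `map_feats`) and the tensor shapes are assumptions for illustration, not the authors’ code.

```python
# Minimal sketch (assumed shapes/names): score every rotation + translation
# by cross-correlating a rotated BEV feature template against map features.
import math
import torch
import torch.nn.functional as F

def pose_score_volume(bev_feats: torch.Tensor,
                      map_feats: torch.Tensor,
                      num_rotations: int = 64) -> torch.Tensor:
    """Return a (num_rotations, H, W) log-probability volume over poses.

    bev_feats: (C, h, w) neural BEV features inferred from the query image.
    map_feats: (C, H, W) features encoded from an OpenStreetMap tile.
    """
    C, h, w = bev_feats.shape
    scores = []
    for k in range(num_rotations):
        angle = 2.0 * math.pi * k / num_rotations
        cos, sin = math.cos(angle), math.sin(angle)
        # Rotate the BEV template with an affine grid (bilinear resampling).
        theta = torch.tensor([[[cos, -sin, 0.0],
                               [sin,  cos, 0.0]]], dtype=bev_feats.dtype)
        grid = F.affine_grid(theta, size=(1, C, h, w), align_corners=False)
        rotated = F.grid_sample(bev_feats[None], grid, align_corners=False)
        # Cross-correlate the rotated template with the map at every location
        # (stride-1 convolution with the template as the kernel).
        s = F.conv2d(map_feats[None], rotated, padding="same")  # (1, 1, H, W)
        scores.append(s[0, 0])
    volume = torch.stack(scores)  # (num_rotations, H, W)
    # Normalize into a log-probability distribution over all candidate poses.
    return F.log_softmax(volume.flatten(), dim=0).view_as(volume)
```

Because the scoring step is plain correlation, gradients flow through it into both the image-to-BEV encoder and the map encoder (both omitted here), which is what lets a system like this be trained end-to-end from camera-pose supervision alone.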

