Semantic alignment of LiDAR data at city scale

Author(s): Yu, Fisher; Xiao, Jianxiong; Funkhouser, Thomas

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1pg2r
Abstract: This paper describes an automatic algorithm for global alignment of LiDAR data collected with Google Street View cars in urban environments. The problem is challenging because global pose estimation (GPS) works poorly in city environments with tall buildings, and local tracking techniques (integration of inertial sensors, structure-from-motion, etc.) drift over long ranges, warping and misaligning data collected over wide areas by many meters. Our approach is to extract “semantic features” with object detectors (e.g., for facades, poles, cars, etc.) that can be matched robustly at different scales, and are therefore selected for different iterations of an ICP algorithm. We have implemented an all-to-all, non-rigid, global alignment based on this idea that provides better results than alternatives in experiments with data from large regions of New York, San Francisco, Paris, and Rome.
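The core idea in the abstract — ICP where correspondences are restricted to points sharing a semantic label — can be illustrated with a minimal sketch. This is not the authors' implementation (which is all-to-all, non-rigid, and city-scale); it is a toy 2-D rigid alignment, and the names `semantic_icp` and `best_rigid_transform` are invented for illustration.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def semantic_icp(src, src_labels, dst, dst_labels, n_iters=20):
    """ICP in which each source point may only match destination
    points carrying the same semantic label (e.g. facade, pole, car)."""
    R_total, t_total = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(n_iters):
        # label-restricted nearest-neighbor correspondences
        matched = np.empty_like(cur)
        for i, p in enumerate(cur):
            cands = dst[dst_labels == src_labels[i]]
            matched[i] = cands[np.argmin(np.linalg.norm(cands - p, axis=1))]
        # re-estimate and accumulate the rigid transform
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Restricting matches by label is what makes the correspondences robust: a pole is never pulled toward a nearby facade point, which is the failure mode of plain nearest-neighbor ICP in cluttered street scenes.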
Publication Date: 2015
Citation: Yu, Fisher, Jianxiong Xiao, and Thomas Funkhouser. "Semantic alignment of LiDAR data at city scale." In IEEE/CVF Conference on Computer Vision and Pattern Recognition (2015): 1722–1731. doi:10.1109/CVPR.2015.7298781
DOI: 10.1109/CVPR.2015.7298781
ISSN: 1063-6919
EISSN: 1063-6919
Pages: 1722 - 1731
Type of Material: Conference Article
Journal/Proceeding Title: IEEE/CVF Conference on Computer Vision and Pattern Recognition
Version: Author's manuscript



Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.