
Seeing Through Fog Without Seeing Fog: Deep Multimodal Sensor Fusion in Unseen Adverse Weather

Author(s): Bijelic, Mario; Gruber, Tobias; Mannan, Fahim; Kraus, Florian; Ritter, Werner; Dietmayer, Klaus; Heide, Felix

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1b24k
Full metadata record
DC Field | Value | Language
dc.contributor.author | Bijelic, Mario | -
dc.contributor.author | Gruber, Tobias | -
dc.contributor.author | Mannan, Fahim | -
dc.contributor.author | Kraus, Florian | -
dc.contributor.author | Ritter, Werner | -
dc.contributor.author | Dietmayer, Klaus | -
dc.contributor.author | Heide, Felix | -
dc.date.accessioned | 2021-10-08T19:46:46Z | -
dc.date.available | 2021-10-08T19:46:46Z | -
dc.date.issued | 2020 | en_US
dc.identifier.citation | Bijelic, Mario, Tobias Gruber, Fahim Mannan, Florian Kraus, Werner Ritter, Klaus Dietmayer, and Felix Heide. "Seeing Through Fog Without Seeing Fog: Deep Multimodal Sensor Fusion in Unseen Adverse Weather." In IEEE/CVF Conference on Computer Vision and Pattern Recognition (2020): pp. 11679-11689. doi: 10.1109/CVPR42600.2020.01170 | en_US
dc.identifier.issn | 1063-6919 | -
dc.identifier.uri | https://openaccess.thecvf.com/content_CVPR_2020/papers/Bijelic_Seeing_Through_Fog_Without_Seeing_Fog_Deep_Multimodal_Sensor_Fusion_CVPR_2020_paper.pdf | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1b24k | -
dc.description.abstract | The fusion of multimodal sensor streams, such as camera, lidar, and radar measurements, plays a critical role in object detection for autonomous vehicles, which base their decision making on these inputs. While existing methods exploit redundant information in good environmental conditions, they fail in adverse weather where the sensory streams can be asymmetrically distorted. These rare "edge-case" scenarios are not represented in available datasets, and existing fusion architectures are not designed to handle them. To address this challenge we present a novel multimodal dataset acquired in over 10,000 km of driving in northern Europe. Although this dataset is the first large multimodal dataset in adverse weather, with 100k labels for lidar, camera, radar, and gated NIR sensors, it does not facilitate training as extreme weather is rare. To this end, we present a deep fusion network for robust fusion without a large corpus of labeled training data covering all asymmetric distortions. Departing from proposal-level fusion, we propose a single-shot model that adaptively fuses features, driven by measurement entropy. We validate the proposed method, trained on clean data, on our extensive validation dataset. Code and data are available here https://github.com/princeton-computational-imaging/SeeingThroughFog. | en_US
dc.format.extent | 11679 - 11689 | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | IEEE/CVF Conference on Computer Vision and Pattern Recognition | en_US
dc.rights | Author's manuscript | en_US
dc.title | Seeing Through Fog Without Seeing Fog: Deep Multimodal Sensor Fusion in Unseen Adverse Weather | en_US
dc.type | Conference Article | en_US
dc.identifier.doi | 10.1109/CVPR42600.2020.01170 | -
dc.identifier.eissn | 2575-7075 | -
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding | en_US
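
Note on the abstract: it describes a single-shot model that adaptively fuses per-sensor features, steered by measurement entropy. The following is a minimal, illustrative PyTorch sketch of that idea only, not the authors' released implementation (see the linked GitHub repository for that). The names local_entropy and EntropySteeredFusion, the patch-wise histogram entropy estimate, and the sigmoid gating are assumptions made for this example.

# Illustrative sketch only (assumed names and design, not the paper's code):
# entropy-steered fusion of per-sensor feature maps.
import torch
import torch.nn as nn
import torch.nn.functional as F


def local_entropy(x, num_bins=16, patch=8):
    # Approximate per-patch Shannon entropy of a single-channel measurement.
    # x: (B, 1, H, W) tensor with values in [0, 1]; H and W are assumed to be
    # divisible by `patch`. Returns a map of shape (B, 1, H // patch, W // patch).
    B, _, H, W = x.shape
    patches = F.unfold(x, kernel_size=patch, stride=patch)      # (B, patch*patch, N)
    bins = torch.linspace(0.0, 1.0, num_bins + 1, device=x.device)
    idx = (torch.bucketize(patches, bins) - 1).clamp(0, num_bins - 1)
    hist = torch.zeros(B, num_bins, patches.shape[-1], device=x.device)
    hist.scatter_add_(1, idx, torch.ones_like(patches))          # per-patch histogram
    p = hist / hist.sum(dim=1, keepdim=True).clamp_min(1e-8)
    ent = -(p * (p + 1e-8).log()).sum(dim=1, keepdim=True)       # (B, 1, N)
    return ent.view(B, 1, H // patch, W // patch)


class EntropySteeredFusion(nn.Module):
    # Weights each sensor's feature map by its (resized) measurement-entropy map
    # before a 1x1 fusion convolution -- a minimal stand-in for feature-level
    # fusion that is adaptively driven by measurement entropy.
    def __init__(self, channels_per_sensor, fused_channels):
        super().__init__()
        self.fuse = nn.Conv2d(sum(channels_per_sensor), fused_channels, kernel_size=1)

    def forward(self, feature_maps, entropy_maps):
        weighted = []
        for feat, ent in zip(feature_maps, entropy_maps):
            ent = F.interpolate(ent, size=feat.shape[-2:], mode="bilinear",
                                align_corners=False)
            # Streams with low measurement entropy (little information, e.g. a
            # fogged-out camera region) are down-weighted before fusion.
            weighted.append(feat * torch.sigmoid(ent))
        return self.fuse(torch.cat(weighted, dim=1))


# Usage example with two hypothetical streams (camera intensity, lidar depth image):
cam_img = torch.rand(2, 1, 64, 64)
lidar_depth = torch.rand(2, 1, 64, 64)
cam_feat, lidar_feat = torch.rand(2, 64, 16, 16), torch.rand(2, 64, 16, 16)
fusion = EntropySteeredFusion([64, 64], fused_channels=128)
fused = fusion([cam_feat, lidar_feat],
               [local_entropy(cam_img), local_entropy(lidar_depth)])
print(fused.shape)  # torch.Size([2, 128, 16, 16])

The design intent, per the abstract, is that a sensor stream whose measurements carry little information in a region contributes proportionally less to the fused features there, so the network degrades gracefully under asymmetric distortions without needing labeled adverse-weather training data.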

Files in This Item:
File | Description | Size | Format
SeeingThroughFogWithoutSeeingFog.pdf |  | 10.72 MB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.