
DeepDriving: Learning Affordance for Direct Perception in Autonomous Driving

Author(s): Chen, Chenyi; Seff, Ari; Kornhauser, Alain; Xiao, Jianxiong

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1355q
Full metadata record
DC Field | Value | Language
dc.contributor.author | Chen, Chenyi | -
dc.contributor.author | Seff, Ari | -
dc.contributor.author | Kornhauser, Alain | -
dc.contributor.author | Xiao, Jianxiong | -
dc.date.accessioned | 2021-10-08T19:48:58Z | -
dc.date.available | 2021-10-08T19:48:58Z | -
dc.date.issued | 2015 | en_US
dc.identifier.citation | Chen, Chenyi, Ari Seff, Alain Kornhauser, and Jianxiong Xiao. "DeepDriving: Learning Affordance for Direct Perception in Autonomous Driving." In IEEE International Conference on Computer Vision (ICCV) (2015): pp. 2722-2730. doi:10.1109/ICCV.2015.312 | en_US
dc.identifier.uri | https://openaccess.thecvf.com/content_iccv_2015/papers/Chen_DeepDriving_Learning_Affordance_ICCV_2015_paper.pdf | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1355q | -
dc.description.abstract | Today, there are two major paradigms for vision-based autonomous driving systems: mediated perception approaches that parse an entire scene to make a driving decision, and behavior reflex approaches that directly map an input image to a driving action by a regressor. In this paper, we propose a third paradigm: a direct perception approach to estimate the affordance for driving. We propose to map an input image to a small number of key perception indicators that directly relate to the affordance of a road/traffic state for driving. Our representation provides a set of compact yet complete descriptions of the scene to enable a simple controller to drive autonomously. Falling in between the two extremes of mediated perception and behavior reflex, we argue that our direct perception representation provides the right level of abstraction. To demonstrate this, we train a deep Convolutional Neural Network using recordings from 12 hours of human driving in a video game and show that our model can work well to drive a car in a very diverse set of virtual environments. We also train a model for car distance estimation on the KITTI dataset. Results show that our direct perception approach can generalize well to real driving images. Source code and data are available on our project website. | en_US
dc.format.extent | 2722 - 2730 | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | Proceedings of the IEEE International Conference on Computer Vision | en_US
dc.rights | Author's manuscript | en_US
dc.title | DeepDriving: Learning Affordance for Direct Perception in Autonomous Driving | en_US
dc.type | Conference Article | en_US
dc.identifier.doi | 10.1109/ICCV.2015.312 | -
dc.identifier.eissn | 2380-7504 | -
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding | en_US
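
The abstract above describes the direct perception pipeline at a high level: a ConvNet regresses a small number of affordance indicators (for example heading angle, distance to the lane center, distance to the preceding car) from a single image, and a simple controller turns those indicators into driving commands. The sketch below is only an illustration of that idea, not the authors' released code: the layer sizes, the three-element affordance vector, and the controller gains are all hypothetical simplifications.

# Illustrative sketch, not the DeepDriving release. The architecture, the
# 3-indicator affordance vector, and the gains are hypothetical simplifications
# of the image -> affordances -> controller idea described in the abstract.
import torch
import torch.nn as nn

class AffordanceNet(nn.Module):
    """Toy ConvNet regressing a few affordance indicators from one RGB frame."""
    def __init__(self, num_indicators: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global average pool -> (N, 32, 1, 1)
        )
        self.head = nn.Linear(32, num_indicators)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (N, 3, H, W) -> affordances: (N, num_indicators)
        return self.head(self.features(image).flatten(1))

def simple_controller(affordance: torch.Tensor) -> float:
    """Map the affordance vector to a steering command with fixed gains."""
    angle, dist_to_center, dist_to_car = affordance.tolist()
    # dist_to_car would feed a speed controller in a fuller version.
    steering = -0.5 * angle - 0.3 * dist_to_center   # hypothetical gains
    return max(-1.0, min(1.0, steering))             # clamp to [-1, 1]

if __name__ == "__main__":
    net = AffordanceNet()
    frame = torch.rand(1, 3, 210, 280)               # dummy driving frame
    indicators = net(frame)[0]
    print("steering command:", simple_controller(indicators))

In the paper the affordance set is richer and the network is trained on the 12 hours of human driving in a video game mentioned in the abstract, but the control flow (image to affordance vector to controller to action) follows the same pattern.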

Files in This Item:
File | Description | Size | Format
DeepDrivingAffordanceDirect.pdf | - | 1.8 MB | Adobe PDF

