Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching

Author(s): Zeng, Andy; Song, Shuran; Yu, Kuan-Ting; Donlon, Elliott; Hogan, Francois R; et al.

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1t82v
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Zeng, Andy
dc.contributor.author: Song, Shuran
dc.contributor.author: Yu, Kuan-Ting
dc.contributor.author: Donlon, Elliott
dc.contributor.author: Hogan, Francois R
dc.contributor.author: Bauza, Maria
dc.contributor.author: Ma, Daolin
dc.contributor.author: Taylor, Orion
dc.contributor.author: Liu, Melody
dc.contributor.author: Romo, Eudald
dc.contributor.author: Fazeli, Nima
dc.contributor.author: Alet, Ferran
dc.contributor.author: Dafle, Nikhil C
dc.contributor.author: Holladay, Rachel
dc.contributor.author: Morona, Isabella
dc.contributor.author: Nair, Prem Q
dc.contributor.author: Green, Druck
dc.contributor.author: Taylor, Ian
dc.contributor.author: Liu, Weber
dc.contributor.author: Funkhouser, Thomas
dc.contributor.author: Rodriguez, Alberto
dc.date.accessioned: 2021-10-08T19:46:33Z
dc.date.available: 2021-10-08T19:46:33Z
dc.date.issued: 2019 [en_US]
dc.identifier.citation: Zeng, Andy, Shuran Song, Kuan-Ting Yu, Elliott Donlon, Francois R. Hogan, Maria Bauza, Daolin Ma, Orion Taylor, Melody Liu, Eudald Romo, Nima Fazeli, Ferran Alet, Nikhil C. Dafle, Rachel Holladay, Isabella Morona, Prem Q. Nair, Druck Green, Ian Taylor, Weber Liu, Thomas Funkhouser, and Alberto Rodriguez. "Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching." International Journal of Robotics Research (2019): pp. 1-16. doi:10.1177/0278364919868017 [en_US]
dc.identifier.issn: 0278-3649
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/pr1t82v
dc.description.abstract: This article presents a robotic pick-and-place system that is capable of grasping and recognizing both known and novel objects in cluttered environments. The key new feature of the system is that it handles a wide range of object categories without needing any task-specific training data for novel objects. To achieve this, it first uses an object-agnostic grasping framework to map from visual observations to actions: inferring dense pixel-wise probability maps of the affordances for four different grasping primitive actions. It then executes the action with the highest affordance and recognizes picked objects with a cross-domain image classification framework that matches observed images to product images. Since product images are readily available for a wide range of objects (e.g., from the web), the system works out-of-the-box for novel objects without requiring any additional data collection or re-training. Exhaustive experimental results demonstrate that our multi-affordance grasping achieves high success rates for a wide variety of objects in clutter, and our recognition algorithm achieves high accuracy for both known and novel grasped objects. The approach was part of the MIT–Princeton Team system that took first place in the stowing task at the 2017 Amazon Robotics Challenge. All code, datasets, and pre-trained models are available online at http://arc.cs.princeton.edu/ [en_US]
dc.format.extent: 1 - 16 [en_US]
dc.language.iso: en_US [en_US]
dc.relation.ispartof: International Journal of Robotics Research [en_US]
dc.rights: Final published version. This is an open access article. [en_US]
dc.title: Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching [en_US]
dc.type: Journal Article [en_US]
dc.identifier.doi: 10.1177/0278364919868017
dc.identifier.eissn: 1741-3176
pu.type.symplectic: http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article [en_US]
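
As a concrete illustration of the two-stage pipeline described in the abstract above, the following is a minimal Python/NumPy sketch, not the authors' released code: the function names, array shapes, and the cosine-similarity matcher are assumptions for illustration. It shows (1) selecting the grasping primitive and pixel with the highest predicted affordance, and (2) recognizing the picked object by nearest-neighbor matching of an observed-image feature against product-image features.

    import numpy as np

    # The four grasping primitives named in the article.
    PRIMITIVES = ["suction-down", "suction-side", "grasp-down", "flush-grasp"]

    def select_action(affordance_maps):
        """affordance_maps: (4, H, W) array of dense pixel-wise success
        probabilities, one map per grasping primitive (hypothetical shape)."""
        # Pick the primitive/pixel pair with the highest predicted affordance.
        p, r, c = np.unravel_index(np.argmax(affordance_maps),
                                   affordance_maps.shape)
        return PRIMITIVES[p], (r, c), affordance_maps[p, r, c]

    def recognize(observed_feat, product_feats, labels):
        """Cross-domain matching sketch: nearest product image by cosine
        similarity in a shared learned feature space (hypothetical helper)."""
        sims = product_feats @ observed_feat
        sims /= (np.linalg.norm(product_feats, axis=1)
                 * np.linalg.norm(observed_feat))
        return labels[int(np.argmax(sims))]

    # Example with random stand-ins for network outputs.
    maps = np.random.rand(4, 480, 640)
    primitive, pixel, score = select_action(maps)
    print(f"execute {primitive} at {pixel} (affordance {score:.2f})")

    feats = np.random.rand(10, 128)   # 10 product images, 128-D features
    item = recognize(np.random.rand(128), feats,
                     [f"item-{i}" for i in range(10)])
    print(f"matched product image: {item}")

In the real system these arrays would come from the learned models described in the article; the random inputs here only exercise the selection and matching logic.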

Files in This Item:
File: RoboticPickPlaceNovelObjectsJournal.pdf (3.1 MB, Adobe PDF)


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.