
Text2Shape: Generating Shapes from Natural Language by Learning Joint Embeddings

Author(s): Chen, Kevin; Choy, Christopher B; Savva, Manolis; Chang, Angel X; Funkhouser, Thomas; et al

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1982p
Full metadata record
DC Field | Value | Language
dc.contributor.author | Chen, Kevin | -
dc.contributor.author | Choy, Christopher B | -
dc.contributor.author | Savva, Manolis | -
dc.contributor.author | Chang, Angel X | -
dc.contributor.author | Funkhouser, Thomas | -
dc.contributor.author | Savarese, Silvio | -
dc.date.accessioned | 2021-10-08T19:46:35Z | -
dc.date.available | 2021-10-08T19:46:35Z | -
dc.date.issued | 2019 | en_US
dc.identifier.citation | Chen, Kevin, Christopher B. Choy, Manolis Savva, Angel X. Chang, Thomas Funkhouser, and Silvio Savarese. "Text2Shape: Generating Shapes from Natural Language by Learning Joint Embeddings." In Asian Conference on Computer Vision (2019): pp. 100-116. doi:10.1007/978-3-030-20893-6_7 | en_US
dc.identifier.issn | 0302-9743 | -
dc.identifier.uri | https://arxiv.org/pdf/1803.08495.pdf | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1982p | -
dc.description.abstract | We present a method for generating colored 3D shapes from natural language. To this end, we first learn joint embeddings of freeform text descriptions and colored 3D shapes. Our model combines and extends learning by association and metric learning approaches to learn implicit cross-modal connections, and produces a joint representation that captures the many-to-many relations between language and physical properties of 3D shapes such as color and shape. To evaluate our approach, we collect a large dataset of natural language descriptions for physical 3D objects in the ShapeNet dataset. With this learned joint embedding we demonstrate text-to-shape retrieval that outperforms baseline approaches. Using our embeddings with a novel conditional Wasserstein GAN framework, we generate colored 3D shapes from text. Our method is the first to connect natural language text with realistic 3D objects exhibiting rich variations in color, texture, and shape detail. | en_US
dc.format.extent | 100 - 116 | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | Asian Conference on Computer Vision | en_US
dc.rights | Author's manuscript | en_US
dc.title | Text2Shape: Generating Shapes from Natural Language by Learning Joint Embeddings | en_US
dc.type | Conference Article | en_US
dc.identifier.doi | 10.1007/978-3-030-20893-6_7 | -
dc.identifier.eissn | 1611-3349 | -
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding | en_US
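The abstract above describes learning joint embeddings of text and 3D shapes using metric learning, so that matching text/shape pairs lie close in a shared space. As an illustration only (this is a minimal sketch of a generic cross-modal triplet-style loss, not the paper's actual formulation; the function name and margin value are invented here):

```python
import math

def cross_modal_triplet_loss(text_emb, shape_emb, margin=0.5):
    """Hinge-style cross-modal loss sketch: the i-th text embedding and
    the i-th shape embedding form a matching pair, which should lie
    closer together than any mismatched text/shape pair by `margin`."""
    def dist(a, b):
        # Euclidean distance between two embedding vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    n = len(text_emb)
    loss, count = 0.0, 0
    for i in range(n):
        pos = dist(text_emb[i], shape_emb[i])       # matched-pair distance
        for j in range(n):
            if i == j:
                continue
            neg = dist(text_emb[i], shape_emb[j])   # mismatched distance
            loss += max(0.0, margin + pos - neg)    # hinge on the violation
            count += 1
    return loss / count if count else 0.0
```

In training, such a loss would be minimized over encoder parameters so that descriptions retrieve their corresponding shapes, which is the retrieval behavior the abstract reports.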

Files in This Item:
File | Size | Format
Text2ShapeNaturalLanguageJointEmbeddings.pdf | 14.69 MB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.