Discrete Object Generation with Reversible Inductive Construction

Author(s): Seff, Ari; Zhou, Wenda; Damani, Farhan; Doyle, Abigail; Adams, Ryan P

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1q533
Full metadata record
dc.contributor.author: Seff, Ari
dc.contributor.author: Zhou, Wenda
dc.contributor.author: Damani, Farhan
dc.contributor.author: Doyle, Abigail
dc.contributor.author: Adams, Ryan P
dc.date.accessioned: 2021-10-08T19:47:00Z
dc.date.available: 2021-10-08T19:47:00Z
dc.date.issued: 2019
dc.identifier.citation: Seff, Ari, Zhou, Wenda, Damani, Farhan, Doyle, Abigail, Adams, Ryan P. (2019). Discrete Object Generation with Reversible Inductive Construction. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 32
dc.identifier.issn: 1049-5258
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/pr1q533
dc.description.abstract: The success of generative modeling in continuous domains has led to a surge of interest in generating discrete data such as molecules, source code, and graphs. However, construction histories for these discrete objects are typically not unique, and so generative models must reason about intractably large spaces in order to learn. Additionally, structured discrete domains are often characterized by strict constraints on what constitutes a valid object, and generative models must respect these requirements in order to produce useful novel samples. Here, we present a generative model for discrete objects employing a Markov chain where transitions are restricted to a set of local operations that preserve validity. Building off of generative interpretations of denoising autoencoders, the Markov chain alternates between producing 1) a sequence of corrupted objects that are valid but not from the data distribution, and 2) a learned reconstruction distribution that attempts to fix the corruptions while also preserving validity. This approach constrains the generative model to only produce valid objects, requires the learner to only discover local modifications to the objects, and avoids marginalization over an unknown and potentially large space of construction histories. We evaluate the proposed approach on two highly structured discrete domains, molecules and Laman graphs, and find that it compares favorably to alternative methods at capturing distributional statistics for a host of semantically relevant metrics.
dc.language.iso: en_US
dc.relation.ispartof: ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019)
dc.rights: Author's manuscript
dc.title: Discrete Object Generation with Reversible Inductive Construction
dc.type: Journal Article
pu.type.symplectic: http://www.symplectic.co.uk/publications/atom-terms/1.0/conference-proceeding
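The sampling procedure described in the abstract — a Markov chain that alternates between applying validity-preserving local corruptions and drawing from a reconstruction distribution — can be illustrated with a toy sketch. This is not the paper's implementation: the "valid objects" here are hypothetical lists of even integers standing in for molecules or Laman graphs, and the reconstruction step is a random placeholder rather than a learned model. All function names (`is_valid`, `local_corrupt`, `reconstruct`, `generate`) are illustrative.

```python
import random

def is_valid(obj):
    """Toy validity constraint: a non-empty list of even integers.
    (Stands in for chemical valence rules or Laman graph conditions.)"""
    return len(obj) > 0 and all(x % 2 == 0 for x in obj)

def local_corrupt(obj, rng):
    """Apply one validity-preserving local edit: insert, delete, or replace
    a single even integer. Because each edit is local and valid, the chain
    never leaves the set of valid objects."""
    obj = list(obj)
    op = rng.choice(["insert", "delete", "replace"])
    if op == "insert" or len(obj) == 1:  # never delete down to an empty object
        obj.insert(rng.randrange(len(obj) + 1), 2 * rng.randrange(10))
    elif op == "delete":
        obj.pop(rng.randrange(len(obj)))
    else:
        obj[rng.randrange(len(obj))] = 2 * rng.randrange(10)
    return obj

def reconstruct(obj, rng):
    """Placeholder for the learned reconstruction distribution. In the paper
    this is a trained model that proposes local edits moving the corrupted
    object back toward the data distribution; here it is just another
    random validity-preserving edit."""
    return local_corrupt(obj, rng)

def generate(init, steps=100, seed=0):
    """Run the alternating corruption/reconstruction Markov chain."""
    rng = random.Random(seed)
    obj = init
    for _ in range(steps):
        obj = local_corrupt(obj, rng)   # corruption phase
        obj = reconstruct(obj, rng)     # reconstruction phase
        assert is_valid(obj)            # every intermediate state is valid
    return obj

print(generate([2, 4, 6]))
```

The point of the sketch is the invariant: because both phases are restricted to local, validity-preserving operations, every state of the chain is a valid object, so the sampler cannot emit invalid samples regardless of how well the reconstruction model is trained.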

Files in This Item:
File                                                                    Size       Format
ObjectGenerationReversibleInductiveConstruct.pdf                        713.87 kB  Adobe PDF
Discrete Object Generation with Reversible Inductive Construction.pdf   896.8 kB   Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.