Abstract: The success of generative modeling in continuous domains has led to a surge of interest in generating discrete data such as molecules, source code, and graphs. However, construction histories for these discrete objects are typically not unique, so generative models must reason about intractably large spaces in order to learn. Additionally, structured discrete domains are often characterized by strict constraints on what constitutes a valid object, and generative models must respect these requirements in order to produce useful novel samples. Here, we present a generative model for discrete objects employing a Markov chain where transitions are restricted to a set of local operations that preserve validity. Building on generative interpretations of denoising autoencoders, the Markov chain alternates between producing 1) a sequence of corrupted objects that are valid but not from the data distribution, and 2) a learned reconstruction distribution that attempts to fix the corruptions while also preserving validity. This approach constrains the generative model to produce only valid objects, requires the learner to discover only local modifications to the objects, and avoids marginalization over an unknown and potentially large space of construction histories. We evaluate the proposed approach on two highly structured discrete domains, molecules and Laman graphs, and find that it compares favorably to alternative methods at capturing distributional statistics for a host of semantically relevant metrics.

Citation: Seff, Ari; Zhou, Wenda; Damani, Farhan; Doyle, Abigail; Adams, Ryan P. (2019). Discrete Object Generation with Reversible Inductive Construction. Advances in Neural Information Processing Systems 32 (NIPS 2019).

Type of Material: Journal Article

Journal/Proceeding Title: Advances in Neural Information Processing Systems 32 (NIPS 2019)
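The alternating corruption/reconstruction chain described in the abstract can be illustrated with a minimal toy sketch. Everything here is an assumption for illustration only: the "objects" are sorted integer tuples (sortedness standing in for domain validity), `corrupt` applies a random validity-preserving local edit, and `reconstruct` is a fixed heuristic standing in for the paper's learned reconstruction distribution.

```python
import random

def is_valid(obj):
    # Toy validity constraint (assumption): the tuple must be sorted.
    return list(obj) == sorted(obj)

def corrupt(obj, rng):
    # One random local edit (delete or insert an element); re-sorting
    # keeps the result valid, mimicking validity-preserving operations.
    obj = list(obj)
    if obj and rng.random() < 0.5:
        obj.pop(rng.randrange(len(obj)))
    else:
        obj.append(rng.randrange(10))
    return tuple(sorted(obj))

def reconstruct(obj, rng):
    # Heuristic stand-in for the learned reconstruction distribution:
    # nudge the corrupted object back toward a target size of 3.
    obj = list(obj)
    while len(obj) > 3:
        obj.pop()
    while len(obj) < 3:
        obj.append(rng.randrange(10))
    return tuple(sorted(obj))

def sample_chain(init, steps, rng):
    # Alternate corruption and reconstruction kernels; every
    # intermediate state remains a valid object by construction.
    x = init
    for _ in range(steps):
        x_tilde = corrupt(x, rng)       # corruption step
        x = reconstruct(x_tilde, rng)   # reconstruction step
        assert is_valid(x)
    return x
```

In the paper's setting, `reconstruct` would be a trained model and the domain would be molecules or Laman graphs; the point of the sketch is only the control flow: because both kernels emit valid objects, the chain never needs to reason over construction histories or reject invalid samples.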
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.