Generalization and equilibrium in generative adversarial nets (GANs)

Author(s): Arora, Sanjeev; Ge, R; Liang, Y; Ma, T; Zhang, Y

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1mb14
Abstract: Generalization is defined for the training of generative adversarial networks (GANs), and it is shown that generalization is not guaranteed for popular distances between distributions such as Jensen-Shannon or Wasserstein. In particular, training may appear to be successful and yet the trained distribution may be arbitrarily far from the target distribution in standard metrics. It is shown that generalization does occur for a much weaker metric, called neural net distance. It is also shown that an approximate pure equilibrium exists in the discriminator/generator game for a natural training objective (Wasserstein) when generator capacity and training set sizes are moderate. Finally, these theoretical ideas suggest a new training protocol, MIX+GAN, which can be combined with any existing method and is found empirically to improve some existing GAN protocols out of the box.
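For reference, a minimal sketch of the neural net distance mentioned in the abstract, assuming \mathcal{F} denotes the class of functions computable by the discriminator architecture (the notation here is illustrative, not taken from this page):

d_{\mathcal{F}}(\mu, \nu) \;=\; \sup_{D \in \mathcal{F}} \Big| \mathbb{E}_{x \sim \mu}[D(x)] \;-\; \mathbb{E}_{x \sim \nu}[D(x)] \Big|

Because \mathcal{F} is a restricted function class rather than all bounded functions (as in Jensen-Shannon or Wasserstein), this distance between the trained and target distributions can be made small from a moderate number of samples, which is the sense in which the abstract claims generalization holds under this weaker metric.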
Publication Date: 2017
Citation: Arora, S., Ge, R., Liang, Y., Ma, T., & Zhang, Y. (2017). Generalization and equilibrium in generative adversarial nets (GANs). 34th International Conference on Machine Learning, ICML 2017, 1, 322-349.
Pages: 322 - 349
Type of Material: Conference Article
Journal/Proceeding Title: 34th International Conference on Machine Learning, ICML 2017
Version: Author's manuscript



Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.