https://arxiv.org/pdf/1706.08224v1.pdf
Sanjeev Arora and Yi Zhang
Do GANs (Generative Adversarial Nets) actually learn the target distribution? The foundational
paper of Goodfellow et al. (2014) suggested they do, given "sufficiently
large" deep nets, sample size, and computation time. A recent theoretical analysis in Arora
et al. (to appear at ICML 2017) raised doubts about whether the same holds when the discriminator has
finite size. It showed that the training objective can approach its optimum value even if the
generated distribution has very low support; in other words, the training objective is unable
to prevent mode collapse.
This note reports experiments suggesting that such problems are not merely theoretical.
It presents empirical evidence that well-known GAN approaches do learn distributions
of fairly low support, and thus presumably are not learning the target distribution. The main
technical contribution is a new proposed test, based on the famous birthday paradox, for
estimating the support size of the generated distribution.
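To illustrate the idea behind such a test, here is a minimal sketch of the birthday-paradox heuristic on an idealized discrete uniform distribution (the paper itself applies the test to generated images using near-duplicate detection; the uniform toy distribution, the function names, and the batch size below are illustrative assumptions, not the paper's procedure). If a distribution has support size N, a batch of roughly sqrt(N) samples contains a duplicate with constant probability, so an observed collision rate can be inverted to estimate N:

```python
import math
import random

def collision_rate(support_size, batch_size, trials=2000, rng=random):
    """Empirical probability that a batch drawn from a uniform
    distribution over `support_size` items contains a duplicate."""
    hits = 0
    for _ in range(trials):
        batch = [rng.randrange(support_size) for _ in range(batch_size)]
        if len(set(batch)) < batch_size:  # at least one collision
            hits += 1
    return hits / trials

def estimate_support(batch_size, p_collision):
    """Invert the birthday approximation P(collision) ~ 1 - exp(-s^2 / (2N))
    to recover the support size N from an observed collision rate."""
    return batch_size ** 2 / (2 * math.log(1 / (1 - p_collision)))

rng = random.Random(0)
true_support = 10_000
batch = 120  # near sqrt(true_support), so collisions occur roughly half the time
p = collision_rate(true_support, batch, rng=rng)
print(f"empirical collision rate: {p:.3f}")
print(f"estimated support size:   {estimate_support(batch, p):.0f}")
```

The key property the note exploits is the square-root scaling: observing duplicates in batches of a few hundred samples is evidence that the support size is only in the tens of thousands, far below the diversity of the target distribution.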