M.S. Thesis

Abstract: Generative adversarial networks (GANs) [1] are deep neural networks designed to model complex data distributions. The idea is to train a discriminator network that learns the boundaries of the data distribution, together with a generator network that is trained to maximize the discriminator's loss and thereby learns to generate samples from the data distribution. Instead of learning a single global generator, one family of variants trains multiple generators, each responsible for one local mode of the data distribution. In this thesis, we review such approaches and propose the hierarchical mixture of generators, which learns a hierarchical division of the data in a tree structure as well as local generators in the leaves. Since these generators are combined softly, the whole model is continuous and can be trained using gradient-based optimization. Our experiments on five image data sets, namely MNIST, FashionMNIST, CelebA, UTZap50K, and Oxford Flowers, show that our proposed model is as successful as the fully connected baseline. The learned hierarchical structure also allows for knowledge extraction.

TL;DR: We softly combine multiple generators to (1) increase performance and (2) obtain an interpretable model. The method achieves both: better performance with an interpretable model.


Hierarchical mixtures of generators (HMoG)
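To make the architecture described in the abstract concrete, here is a minimal PyTorch sketch of a soft binary gating tree whose internal nodes gate on the latent vector and whose leaves are local linear generators, mixed by path probabilities. This is an illustrative reconstruction under our own assumptions, not the thesis code; the names (HMoG, depth, latent_dim, out_dim) are ours.

```python
import torch
import torch.nn as nn

class HMoG(nn.Module):
    """Soft binary gating tree over local generators (a sketch, not the
    thesis implementation). Internal nodes gate on the latent z; leaves
    are linear generators; outputs are mixed by path probabilities."""
    def __init__(self, latent_dim=100, out_dim=1024, depth=3):
        super().__init__()
        self.depth = depth
        n_internal = 2 ** depth - 1
        n_leaves = 2 ** depth
        # One sigmoid gate per internal node: probability of the left child.
        self.gates = nn.Linear(latent_dim, n_internal)
        # One linear generator per leaf.
        self.leaves = nn.ModuleList(
            [nn.Linear(latent_dim, out_dim) for _ in range(n_leaves)]
        )

    def path_probs(self, z):
        g = torch.sigmoid(self.gates(z))           # (B, n_internal), BFS order
        probs = torch.ones(z.size(0), 1, device=z.device)
        node = 0
        # Level by level, split every path probability into left (g)
        # and right (1 - g) children; rows always sum to 1.
        for level in range(self.depth):
            n = 2 ** level
            g_level = g[:, node:node + n]
            probs = torch.stack(
                [probs * g_level, probs * (1 - g_level)], dim=2
            ).flatten(1)
            node += n
        return probs                               # (B, n_leaves)

    def forward(self, z):
        probs = self.path_probs(z)                               # (B, n_leaves)
        outs = torch.stack([leaf(z) for leaf in self.leaves], dim=1)
        return (probs.unsqueeze(-1) * outs).sum(dim=1)           # soft mixture
```

Because the gating is soft, the mixture is differentiable end to end, so the tree and the leaf generators can be trained jointly with the usual GAN objective.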

Below is an example comparing our method with related work on a 2-dimensional toy data set.

[Figure: generated samples on the 2-D toy data set; related works (left) vs. ours (right)]


Some generated samples and the corresponding tree activations are shown. These activations indicate which paths are used during generation. Each leaf of the tree corresponds to a generator: if the generators are localized, we expect the activations to differ between dissimilar samples and to be similar for similar samples. The tree hierarchy thus lets us examine each generation process by looking at its path activations.
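With the HMoG sketch above, the path activations can be read off directly from the gating tree. This hypothetical snippet inspects which leaf generator dominates for each sample in a batch; the variable names are ours.

```python
# Continuing from the HMoG sketch above: inspect per-sample path activations.
z = torch.randn(8, 100)
model = HMoG(latent_dim=100, out_dim=1024, depth=3)
with torch.no_grad():
    probs = model.path_probs(z)       # (8, 8): leaf probabilities per sample
    dominant = probs.argmax(dim=1)    # most active leaf generator per sample
print(probs.round(decimals=2))        # rows sum to 1
print(dominant)
```

Similar samples should yield similar rows of probs, which is what the path-activation figures visualize.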


Conclusion: This is a complementary method that can be added to any GAN architecture with any loss function. For example, in this thesis we removed the fully connected layer of DCGAN [2] and used a hierarchical mixture of experts [3] in its place. This increases performance by reducing mode dropping. The learned model can also be inspected to understand the generator hierarchy, which helps us discover the discriminative features of the data.
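As a rough illustration of that replacement, the sketch below feeds the HMoG output into a small DCGAN-style stack of transposed convolutions in place of the usual fully connected projection. The layer sizes and image resolution are illustrative assumptions, not the thesis configuration.

```python
# Continuing from the HMoG sketch above (layer sizes are illustrative).
class HMoGGenerator(nn.Module):
    def __init__(self, latent_dim=100, depth=3):
        super().__init__()
        # HMoG replaces DCGAN's usual Linear(latent_dim, 4*4*256) projection.
        self.hmog = HMoG(latent_dim, out_dim=4 * 4 * 256, depth=depth)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        h = self.hmog(z).view(-1, 256, 4, 4)   # mixture output as a feature map
        return self.deconv(h)                  # (B, 3, 32, 32) images in [-1, 1]
```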

Code, full text, shorter arXiv version (accepted to ICPR 2020)

References

[1] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. "Generative adversarial nets". Advances in Neural Information Processing Systems 27, 2014.
[2] Alec Radford, Luke Metz, and Soumith Chintala. "Unsupervised representation learning with deep convolutional generative adversarial networks". arXiv:1511.06434, 2015.
[3] Michael I. Jordan and Robert A. Jacobs. "Hierarchical mixtures of experts and the EM algorithm". Neural Computation, 6(2), 1994.