BIGAN is a GAN that uses an auto-encoder as the discriminator, with a simple yet robust architecture and a standard training procedure that converges quickly and stably.
In this work, we propose a hierarchical BIGAN architecture. Although a GAN's discriminator acts as a learned perceptual loss function, the discriminator itself is trained with binary cross-entropy (BCE), which cannot take perceptual information into account. To mitigate the effect of the BCE loss, we propose hierarchical BIGANs.
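The contrast drawn above — a BCE discriminator that emits a single real/fake bit versus an auto-encoder discriminator whose score carries per-pixel structure — can be sketched minimally. This is an illustrative sketch, not the proposed architecture; the toy identity auto-encoder and the balance weight `k` are assumptions for demonstration.

```python
import numpy as np

def bce_disc_loss(d_real, d_fake):
    """Standard GAN discriminator loss: binary cross-entropy on scalar
    real/fake probabilities -- a single bit of feedback per sample,
    with no per-pixel (perceptual) information."""
    eps = 1e-8  # avoid log(0)
    return -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))

def autoencoder_disc_loss(x_real, x_fake, autoencoder, k=0.5):
    """Auto-encoder discriminator: score samples by reconstruction
    error, so the loss is computed over the whole image and retains
    spatial structure. `k` (illustrative) balances the two terms."""
    recon_err = lambda x: np.mean(np.abs(autoencoder(x) - x))
    return recon_err(x_real) - k * recon_err(x_fake)

# Toy demo with random "images" and an identity auto-encoder,
# which reconstructs any input perfectly (loss collapses to 0).
rng = np.random.default_rng(0)
x_real = rng.normal(size=(4, 8))
x_fake = rng.normal(size=(4, 8))
identity_ae = lambda x: x
print(autoencoder_disc_loss(x_real, x_fake, identity_ae))  # 0.0
```

The point of the sketch is the signature difference: the BCE loss only ever sees two scalars, while the auto-encoder loss is a function of full samples, which is what lets it act as a perceptual criterion.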
1. What type of discriminator is best?
a. Can we use a CNN as the discriminator?
b. Does the discriminator have to be an auto-encoder?
c. What impact would using other varieties of auto-encoders, such as Variational Auto-Encoders (VAEs), have?
2. What latent space size is best for a given dataset, and can it be used in semi-supervised settings?
3. When should noise be added to the input and how much?
4. How can we measure convergence, control distributional diversity, and maintain the equilibrium between the discriminator and the generator?
5. Can classification be done on images with arbitrary dimensions/composition?
6. Can symbolic features such as edge detection be combined with the original data, so that translation performance can be further improved?
7. Can we propose a paradigm of modular learning that conducts continuous recursive self-improvement with respect to the utility function (reward system)? We can view this as second- (and higher-) order optimization.
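As a concrete starting point for question 3, one common option is instance noise: add Gaussian noise to the inputs seen by the discriminator and anneal its magnitude over training. The sketch below is one possible schedule, not a claim about the best one; `sigma0` and the linear decay are illustrative assumptions.

```python
import numpy as np

def instance_noise(x, step, total_steps, sigma0=0.1, rng=None):
    """Add zero-mean Gaussian noise to a batch `x`, with standard
    deviation linearly annealed from sigma0 (illustrative default)
    down to 0 by the end of training."""
    rng = rng if rng is not None else np.random.default_rng()
    sigma = sigma0 * max(0.0, 1.0 - step / total_steps)
    return x + sigma * rng.normal(size=np.shape(x))

x = np.zeros((2, 3))
early = instance_noise(x, step=0, total_steps=100)    # full-strength noise
late = instance_noise(x, step=100, total_steps=100)   # sigma has decayed to 0
print(np.allclose(late, x))  # True
```

Annealing to zero means the noise only smooths the early training dynamics and the discriminator eventually sees the unperturbed data distribution, which ties directly into the "when and how much" part of the question.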