Variational autoencoders (VAEs) are among the deep generative models that
have seen enormous success over the past decades. In practice, however,
they suffer from a problem called posterior collapse, in which the encoder
coincides, or collapses, with the prior, taking no information from the
latent structure of the input data into account. In this work, we
introduce an inverse Lipschitz neural network into the decoder and, based on
this architecture, provide a new method that can control, in a simple and
clear manner, the degree of posterior collapse for a wide range of VAE
models, together with a concrete theoretical guarantee. We also illustrate
the effectiveness of our method through several numerical experiments.
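To make the key ingredient concrete, here is a minimal numerical sketch of an inverse Lipschitz map. This is an illustration under assumed choices, not the paper's architecture or hyperparameters: a common construction builds f(z) = alpha * z + g(z), where g is a residual network whose Lipschitz constant is bounded by beta < alpha (enforced here by spectral normalization of each weight matrix). By the reverse triangle inequality, ||f(x) - f(y)|| >= (alpha - beta) * ||x - y||, so f is inverse Lipschitz with constant alpha - beta. The names ALPHA, BETA, and make_lipschitz_mlp are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

ALPHA = 2.0   # scale of the identity branch (assumed hyperparameter)
BETA = 0.5    # Lipschitz bound enforced on the residual branch g

def make_lipschitz_mlp(dims, lip_bound, rng):
    """Random ReLU MLP whose Lipschitz constant is at most lip_bound.

    Each weight matrix is rescaled so its spectral norm equals
    lip_bound ** (1 / n_layers); since ReLU is 1-Lipschitz, the
    composition has Lipschitz constant <= lip_bound."""
    n_layers = len(dims) - 1
    per_layer = lip_bound ** (1.0 / n_layers)
    weights = []
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        W = rng.standard_normal((d_out, d_in))
        W *= per_layer / np.linalg.norm(W, 2)  # spectral normalization
        weights.append(W)
    return weights

def g(z, weights):
    # beta-Lipschitz residual branch
    h = z
    for W in weights[:-1]:
        h = np.maximum(W @ h, 0.0)  # ReLU between layers
    return weights[-1] @ h

def f(z, weights):
    # Inverse Lipschitz map: distances shrink by at most 1 / (ALPHA - BETA)
    return ALPHA * z + g(z, weights)

weights = make_lipschitz_mlp([4, 8, 4], BETA, rng)

# Empirical check of the inverse Lipschitz property on random pairs.
ratios = []
for _ in range(200):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    ratios.append(np.linalg.norm(f(x, weights) - f(y, weights))
                  / np.linalg.norm(x - y))
print(min(ratios))  # guaranteed >= ALPHA - BETA = 1.5 (up to rounding)
```

Because the lower bound holds for every pair of inputs, the decoder cannot ignore its latent code, which is the intuition behind using such a map to control posterior collapse.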
No Creative Commons license.