
Variational Bayes for Implicit Probabilistic Models

Recently, there has been immense research interest in the use of implicit probabilistic models in machine learning. Implicit models are an attractive alternative to prescribed models, not least because of their capacity to capture the data-generating process with high fidelity.

However, this expressiveness places a significant analytical burden on inference, since implicit models fail to yield tractable probability densities. Although predominant methods such as variational inference have become indispensable tools for efficient and scalable inference in highly complex models, their classical formulations are ill-equipped to deal with the intractabilities posed by implicit models.

We are just beginning to understand the implications of learning in implicit models, and appreciate their formal connections to density ratio estimation and approximate divergence minimization. This understanding has helped us to dramatically expand the scope and applicability of variational inference, which can now be applied in settings where no likelihood is available, where the family of posterior approximations is arbitrarily complex, and indeed, even where no prior density is available.
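To make the connection to density ratio estimation concrete, here is a minimal sketch of the standard classifier-based construction (a well-known technique, not specific to this work, and all names here are illustrative): train a probabilistic classifier to distinguish samples from two distributions, and read off the log density ratio from its logits. This is the device that lets likelihood-free methods compare distributions using only samples.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two 1-D Gaussians whose density ratio we want to estimate from samples alone:
# p = N(0, 1), q = N(1, 1). The true log-ratio is log p(x) - log q(x) = 0.5 - x.
xp = rng.normal(0.0, 1.0, size=(5000, 1))
xq = rng.normal(1.0, 1.0, size=(5000, 1))

# Label p-samples 1 and q-samples 0, then fit a probabilistic classifier.
X = np.vstack([xp, xq])
y = np.concatenate([np.ones(5000), np.zeros(5000)])
clf = LogisticRegression().fit(X, y)

# For a (near-)optimal classifier D, logit D(x) = log p(x) - log q(x),
# so the decision function directly estimates the log density ratio.
xs = np.array([[-1.0], [0.0], [1.0]])
est_log_ratio = clf.decision_function(xs)
true_log_ratio = 0.5 - xs.ravel()
print(est_log_ratio)
print(true_log_ratio)
```

With enough samples the classifier's logits closely track the analytic log-ratio; the same mechanism underlies adversarially trained objectives, where no density is ever evaluated explicitly.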

These efforts at generalization have made it feasible to perform approximate Bayesian inference in a vastly richer class of probabilistic models. The aim of this research is to further advance these techniques, and to provide a better theoretical understanding of the newly proposed approaches.


Cycle-Consistent Adversarial Learning as Approximate Bayesian Inference. In ICML Workshop on Theoretical Foundations and Applications of Deep Generative Models, 2018.



Contributed Talk: Cycle-Consistent Adversarial Learning as Approximate Bayesian Inference
Jul 14, 2018 3:20 PM