Variational Bayes for Implicit Probabilistic Models

Recently, there has been immense research interest in the use of implicit probabilistic models in machine learning. Implicit models are an attractive alternative to prescribed models, not least because of their capacity to represent the data-generating process with high fidelity.
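To make the distinction concrete, here is a minimal sketch in Python (the distributions and the push-forward transform are illustrative choices, not any particular model from this research): a prescribed model exposes a density we can evaluate, whereas an implicit model is defined only by a sampling procedure.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Prescribed model: a tractable density we can both sample from and evaluate.
def prescribed_log_prob(x, loc=0.0, scale=1.0):
    return norm.logpdf(x, loc=loc, scale=scale)

# Implicit model: defined only by a sampling procedure (a simulator).
# Samples are cheap to draw, but no closed-form density is available.
def implicit_sample(n_samples):
    z = rng.standard_normal(n_samples)        # latent noise
    return np.tanh(2.0 * z) + 0.1 * z ** 3    # nonlinear push-forward of the noise

x = implicit_sample(1000)
# prescribed_log_prob(x) can be evaluated; an analogous implicit_log_prob cannot.
```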

However, this expressiveness places a significant analytical burden on inference, since implicit models fail to yield tractable probability densities. Variational inference has become an indispensable tool for efficient and scalable inference in highly complex models, but its classical formulations are ill-equipped to deal with the intractabilities posed by implicit models.
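The difficulty is easy to see in the standard Monte Carlo estimator of the evidence lower bound (ELBO). The sketch below uses a toy Gaussian prior, likelihood, and approximate posterior (all hyperparameters chosen arbitrarily for illustration); the terms flagged in the closing comments are exactly the ones that become unavailable when the likelihood or the approximation is implicit.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Classical Monte Carlo ELBO estimator for a toy model with Gaussian prior p(z),
# Gaussian likelihood p(x|z), and Gaussian approximate posterior q(z).
def elbo(x, q_loc, q_scale, n_samples=256):
    z = rng.normal(q_loc, q_scale, size=n_samples)   # z ~ q(z)
    log_prior = norm.logpdf(z, 0.0, 1.0)              # log p(z)
    log_lik = norm.logpdf(x, loc=z, scale=0.5)        # log p(x | z)
    log_q = norm.logpdf(z, q_loc, q_scale)            # log q(z)
    return np.mean(log_prior + log_lik - log_q)

print(elbo(x=1.3, q_loc=0.8, q_scale=0.4))

# If p(x|z) is only a simulator, `log_lik` cannot be evaluated; if q(z) is an
# implicit sampler (e.g. noise pushed through a network), `log_q` is likewise
# unavailable -- the classical ELBO estimator breaks down in both cases.
```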

We are just beginning to understand the implications of learning in implicit models, and appreciate their formal connections to density ratio estimation and approximate divergence minimization. This understanding has helped us to dramatically expand the scope and applicability of variational inference, which can now be applied in settings where no likelihood is available, where the family of posterior approximations is arbitrarily complex, and indeed, even where no prior density is available.
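As a rough illustration of the density ratio connection, the sketch below estimates the KL divergence between two sample-only distributions via probabilistic classification (the density ratio trick). The Gaussians, the quadratic feature map, and the regularization strength are arbitrary choices for this toy example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Two distributions known to us only through samples (stand-ins for implicit models).
xp = rng.normal(1.0, 1.0, size=5000)   # samples from p
xq = rng.normal(0.0, 1.0, size=5000)   # samples from q

# Density ratio trick: train a probabilistic classifier to tell p-samples (label 1)
# from q-samples (label 0). With balanced classes, its logit estimates log p(x)/q(x).
features = lambda x: np.column_stack([x, x ** 2])   # quadratic features suit Gaussian ratios
X = features(np.concatenate([xp, xq]))
y = np.concatenate([np.ones_like(xp), np.zeros_like(xq)])

clf = LogisticRegression(C=10.0).fit(X, y)

# KL(p || q) = E_p[log p(x)/q(x)] ~ average classifier logit over samples from p.
log_ratio_p = clf.decision_function(features(xp))
print("estimated KL:", log_ratio_p.mean())   # analytic value for these Gaussians is 0.5
```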

These efforts to generalize have made it feasible to perform approximate Bayesian inference in a vastly richer class of probabilistic models. The aim of this research is to further advance these techniques, and to provide a better theoretical understanding of the newly proposed approaches.

Louis C. Tiao
Ph.D. Student in Machine Learning

Posts

This post demonstrates how to approximate the KL divergence (in fact, any f-divergence) between implicit distributions, using density …

Publications

We formalize the problem of learning interdomain correspondences in the absence of paired data as Bayesian inference in a latent …

Talks