Select Publications

We formalize the problem of learning interdomain correspondences in the absence of paired data as Bayesian inference in a latent variable model (LVM), where one seeks to recover the underlying hidden representations of entities in one domain as entities in the other domain. First, we introduce implicit latent variable models, in which the prior over hidden representations can be specified flexibly as an implicit distribution. Next, we develop a new variational inference (VI) algorithm for this model, based on minimizing the symmetric Kullback-Leibler (KL) divergence between a variational joint distribution and the exact joint distribution. Lastly, we demonstrate that state-of-the-art cycle-consistent adversarial learning (CycleGAN) models can be derived as a special case within our proposed VI framework, thus establishing its connection to approximate Bayesian inference methods.
In ICML Workshop on Theoretical Foundations and Applications of Deep Generative Models, 2018
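
As a sketch of the objective described in the abstract (the notation here is assumed, not quoted from the paper): writing p(x, z) for the exact joint distribution and q(x, z) for the variational joint, the symmetric KL criterion is

```latex
\mathcal{L}(q)
  = \mathrm{KL}\left[ q(x, z) \,\|\, p(x, z) \right]
  + \mathrm{KL}\left[ p(x, z) \,\|\, q(x, z) \right]
```

Minimizing the first term alone corresponds to standard VI; the paper shows that the CycleGAN objective emerges as a special case of minimizing this symmetric criterion.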

Recent Publications

Cycle-Consistent Adversarial Learning as Approximate Bayesian Inference. In ICML Workshop on Theoretical Foundations and Applications of Deep Generative Models, 2018.

Recent & Upcoming Talks

Contributed Talk: Cycle-Consistent Adversarial Learning as Approximate Bayesian Inference
Jul 14, 2018 3:20 PM

Recent Posts

We illustrate how to build complicated probability distributions in a modular fashion using the Bijector API from TensorFlow Probability.

CONTINUE READING
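
As a taste of the modular composition the post describes, here is a minimal sketch using the Bijector API (the particular bijectors chosen are illustrative, not necessarily those used in the post):

```python
import tensorflow_probability as tfp

tfd = tfp.distributions
tfb = tfp.bijectors

# Chain applies bijectors right-to-left: x -> exp(x) -> 2 exp(x) -> 1 + 2 exp(x).
bijector = tfb.Chain([tfb.Shift(1.0), tfb.Scale(2.0), tfb.Exp()])

# Pushing a standard normal through the chain yields a new distribution whose
# sample and log_prob methods come for free via the change-of-variables formula.
dist = tfd.TransformedDistribution(
    distribution=tfd.Normal(loc=0.0, scale=1.0),
    bijector=bijector,
)

samples = dist.sample(5)
log_probs = dist.log_prob(samples)  # includes the log-Jacobian correction
```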

An in-depth practical guide to variational autoencoders from a probabilistic perspective.

CONTINUE READING
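
For orientation, the central quantity in a probabilistic treatment of variational autoencoders is the evidence lower bound (ELBO); in standard notation (assumed here, not quoted from the post):

```latex
\log p_\theta(x)
  \geq \mathbb{E}_{q_\phi(z \mid x)}\left[ \log p_\theta(x \mid z) \right]
     - \mathrm{KL}\left[ q_\phi(z \mid x) \,\|\, p(z) \right]
```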

The meshgrid function is useful for creating coordinate arrays to vectorize function evaluations over a grid. Experienced NumPy users will have noticed some discrepancy between meshgrid and mgrid, which is used just as often, for exactly the same purpose. What is the discrepancy, and why does a discrepancy even exist when “there should be one - and preferably only one - obvious way to do it”?

CONTINUE READING
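
The discrepancy in question can be demonstrated in a few lines of NumPy (a minimal illustration; the post examines it in depth):

```python
import numpy as np

x = np.linspace(0, 1, 3)
y = np.linspace(0, 1, 2)

# meshgrid is a function of coordinate vectors; the default 'xy' (Cartesian)
# indexing puts the first argument along the columns.
X1, Y1 = np.meshgrid(x, y)           # both arrays have shape (2, 3)

# mgrid is an object indexed with slices; a complex step count means
# "number of points", and the outputs come back in 'ij' (matrix) order.
Y2, X2 = np.mgrid[0:1:2j, 0:1:3j]    # both arrays have shape (2, 3)

# Up to the order of the outputs, the two constructions agree exactly.
assert np.array_equal(X1, X2) and np.array_equal(Y1, Y2)
```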

Projects

Aboleth

Aboleth is a minimalistic TensorFlow framework for scalable Bayesian deep learning and Gaussian process approximation.

Determinant

Determinant is a software service that makes predictions from sparse data, and learns what data it needs to optimise its performance.

Variational Bayes for Implicit Probabilistic Models

Primary PhD research topic: Expanding the scope and applicability of variational inference to encompass implicit probabilistic models.

Revrand

Revrand is a full-featured Python library for Bayesian generalized linear models, with random basis kernels for large-scale Gaussian process approximations.
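
The random-basis approach behind such approximations follows the random Fourier features construction of Rahimi & Recht (2007); a minimal NumPy sketch of the underlying technique (not Revrand's actual API) might look like this:

```python
import numpy as np

def random_fourier_features(X, n_features=100, lengthscale=1.0, seed=0):
    """Random Fourier feature map approximating an RBF kernel,
    k(x, x') ~= phi(x) @ phi(x')."""
    rng = np.random.default_rng(seed)
    n_dims = X.shape[1]
    # Frequencies drawn from the spectral density of the RBF kernel.
    W = rng.normal(scale=1.0 / lengthscale, size=(n_dims, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Bayesian linear regression on these features approximates GP regression
# at O(n * n_features^2) cost instead of the exact GP's O(n^3).
X = np.random.default_rng(1).uniform(size=(50, 2))
Phi = random_fourier_features(X)     # design matrix, shape (50, 100)
```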

Teaching

I am a teaching assistant (TA) for the following courses:

COMP9418: Advanced Topics in Statistical Machine Learning (UNSW Sydney)

The course has a primary focus on probabilistic machine learning methods, covering the topics of exact and approximate inference in directed and undirected probabilistic graphical models, continuous latent variable models, structured prediction models, and non-parametric models based on Gaussian processes.

Lab exercise on Gaussian Process Regression, running in JupyterLab.

This course has a major emphasis on maintaining a good balance between theory and practice. My primary responsibility was to create lab exercises that aid students in gaining hands-on experience with these methods, specifically applying them to real-world data using the most current tools and libraries. The labs were Python-based, and relied heavily on the Python scientific computing and data analysis stack (NumPy, SciPy, Matplotlib, Seaborn, Pandas, IPython/Jupyter notebooks), and the popular machine learning libraries scikit-learn and TensorFlow.

Students were given the chance to experiment with a broad range of methods on various problems, such as Markov chain Monte Carlo (MCMC) for Bayesian logistic regression, probabilistic PCA (PPCA), factor analysis (FA) and independent component analysis (ICA) for dimensionality reduction, hidden Markov models (HMMs) for speech recognition, conditional random fields (CRFs) for named-entity recognition, and Gaussian processes (GPs) for regression and classification.
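
As a flavour of these exercises, here is a minimal Gaussian process regression example in scikit-learn, in the spirit of the GP lab pictured above (the data and kernel are illustrative):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Noisy observations of a smooth underlying function.
rng = np.random.default_rng(42)
X = rng.uniform(0.0, 10.0, size=(30, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=30)

# RBF kernel plus a learned observation-noise term.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel).fit(X, y)

# Posterior predictive mean and standard deviation on a dense test grid.
X_test = np.linspace(0.0, 10.0, 200).reshape(-1, 1)
mean, std = gpr.predict(X_test, return_std=True)
```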

Contact

  • louistiao@gmail.com
  • Level 5, School of IT. Building J12, 45 Cleveland St.
    University of Sydney, NSW 2006, Australia
  • Tuesday 15:00 to 16:00 or email for appointment