📄 One paper accepted to ICML 2026
Our paper “Empirical Gaussian Processes” was accepted to ICML 2026.
PhD Computer Science
University of Sydney
BSc (Honours Class 1) Computer Science
University of New South Wales (UNSW) Sydney
My research is in probabilistic machine learning, with a particular focus on approximate Bayesian inference and Gaussian processes and their applications to Bayesian optimization. More broadly, my interests extend to automated machine learning (AutoML), encompassing hyperparameter optimization and adaptive resource-allocation techniques such as early stopping and scaling laws. Past work includes graph representation learning and deep generative models; some of this work has appeared as Orals and Spotlights at NeurIPS and ICML.
Always happy to hear from people working on related problems — get in touch.
Our paper “Ax — A Platform for Adaptive Experimentation” was accepted to AutoML 2025 (ABCD Track).
Started as a Research Scientist at Meta on the Adaptive Experimentation team within Central Applied Science (CAS), based in New York City.
Submitted my PhD thesis, “Probabilistic Machine Learning in the Age of Deep Learning”, at the University of Sydney.
Our paper “Spherical Inducing Features for Orthogonally-Decoupled Gaussian Processes” was accepted to ICML 2023 as an Oral Presentation!
We study Empirical GPs, a principled framework for constructing flexible, data-driven Gaussian process priors. By estimating mean and covariance directly from a corpus of …
We present Ax, an open-source platform for adaptive experimentation built on BoTorch. Off the shelf, Ax achieves state-of-the-art performance across a wide range of synthetic and …
We introduce spherical inter-domain inducing features that yield more flexible, data-dependent basis functions for orthogonally-decoupled GP approximations, narrowing the …
We reformulate the computation of the acquisition function in Bayesian optimization (BO) as a probabilistic classification problem, providing advantages in scalability, …
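To sketch the flavor of this reformulation (an illustrative toy, not the paper's implementation): label observations whose objective values fall in the best γ-quantile as positives, fit a probabilistic classifier, and use its predicted class probability as the acquisition function. The helper name `bore_acquisition` and the choice of random forest are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def bore_acquisition(X, y, gamma=0.25):
    """Toy sketch: recast acquisition computation as probabilistic
    classification. Observations in the best gamma-quantile (here,
    minimization) are labeled 1; the classifier's predicted class
    probability then plays the role of the acquisition function."""
    tau = np.quantile(y, gamma)
    z = (y <= tau).astype(int)                        # 1 = "good" points
    clf = RandomForestClassifier(random_state=0).fit(X, z)
    return lambda X_query: clf.predict_proba(X_query)[:, 1]
```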
We give a short and practical guide to efficiently computing the Cholesky decomposition of matrices perturbed by low-rank updates.
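For context, this is the textbook rank-one update the guide covers; a minimal NumPy sketch (`chol_update` is an illustrative name, not code from the post):

```python
import numpy as np

def chol_update(L, x):
    """Rank-one Cholesky update: given lower-triangular L with A = L L^T,
    return the factor of A + x x^T in O(n^2), avoiding an O(n^3)
    refactorization from scratch."""
    L, x = L.copy(), x.astype(float).copy()
    n = x.size
    for k in range(n):
        r = np.hypot(L[k, k], x[k])           # updated diagonal entry
        c, s = r / L[k, k], x[k] / L[k, k]    # Givens-style rotation
        L[k, k] = r
        L[k + 1:, k] = (L[k + 1:, k] + s * x[k + 1:]) / c
        x[k + 1:] = c * x[k + 1:] - s * L[k + 1:, k]
    return L

# Sanity check against a full refactorization:
# np.allclose(chol_update(np.linalg.cholesky(A), x),
#             np.linalg.cholesky(A + np.outer(x, x)))
```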
We use one weird trick — Pólya-Gamma augmentation — to make exact inference in Bayesian logistic regression tractable.
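A minimal sketch of the resulting Gibbs sampler, assuming the third-party pypolyagamma package for the PG draws (the helper `pg_gibbs` and the zero-mean Gaussian prior are illustrative choices, not code from the post):

```python
import numpy as np
from pypolyagamma import PyPolyaGamma  # assumed: pip install pypolyagamma

def pg_gibbs(X, y, prior_prec, n_iter=1000, seed=0):
    """Gibbs sampler for Bayesian logistic regression via Polya-Gamma
    augmentation (Polson, Scott & Windle, 2013). Labels y are in {0, 1};
    the prior is beta ~ N(0, prior_prec^{-1})."""
    rng = np.random.default_rng(seed)
    pg = PyPolyaGamma(seed)
    n, d = X.shape
    beta = np.zeros(d)
    kappa = y - 0.5
    omega = np.empty(n)
    samples = []
    for _ in range(n_iter):
        # 1. omega_i | beta  ~  PG(1, x_i^T beta)
        pg.pgdrawv(np.ones(n), X @ beta, omega)
        # 2. beta | omega  ~  N(m, V), with V^{-1} = X^T diag(omega) X + prior_prec
        V_inv = X.T @ (omega[:, None] * X) + prior_prec
        m = np.linalg.solve(V_inv, X.T @ kappa)
        L = np.linalg.cholesky(V_inv)
        beta = m + np.linalg.solve(L.T, rng.standard_normal(d))
        samples.append(beta)
    return np.array(samples)
```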
We give a short illustrated reference guide to the Knowledge Gradient acquisition function with an implementation from scratch in TensorFlow Probability.
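As a library-agnostic companion (the post itself uses TensorFlow Probability), the discrete Knowledge Gradient can be estimated by Monte Carlo from the GP posterior over a finite discretization; `knowledge_gradient_mc` is a hypothetical helper, not the post's code:

```python
import numpy as np

def knowledge_gradient_mc(mu, cov, j, noise_var=1e-6, n_samples=10_000, seed=0):
    """Monte Carlo estimate of the discrete Knowledge Gradient of
    evaluating candidate index j, given the current GP posterior mean
    mu (n,) and covariance cov (n, n) over a discretized domain
    (maximization convention)."""
    rng = np.random.default_rng(seed)
    # Fantasizing an observation at j moves every posterior mean along
    # sigma_tilde; the observation's randomness enters via a standard normal Z.
    sigma_tilde = cov[:, j] / np.sqrt(cov[j, j] + noise_var)
    z = rng.standard_normal(n_samples)
    mu_new = mu[:, None] + np.outer(sigma_tilde, z)   # (n, n_samples)
    return mu_new.max(axis=0).mean() - mu.max()
```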
We summarize the notation, identities, and derivations underlying the sparse variational Gaussian process (SVGP) framework.
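For reference, the centerpiece of that framework is the SVGP evidence lower bound (notation following Hensman et al.):

$$
\mathcal{L} \;=\; \sum_{i=1}^{N} \mathbb{E}_{q(f_i)}\big[\log p(y_i \mid f_i)\big] \;-\; \mathrm{KL}\big[q(\mathbf{u}) \,\Vert\, p(\mathbf{u})\big],
\qquad q(\mathbf{u}) = \mathcal{N}(\mathbf{m}, \mathbf{S}),
$$

with marginals $q(f_i) = \mathcal{N}(\mu_i, \sigma_i^2)$, where $\mu_i = \mathbf{k}_i^\top \mathbf{K}_{uu}^{-1}\mathbf{m}$ and $\sigma_i^2 = k_{ii} - \mathbf{k}_i^\top \mathbf{K}_{uu}^{-1}\big(\mathbf{K}_{uu} - \mathbf{S}\big)\mathbf{K}_{uu}^{-1}\mathbf{k}_i$.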
We show how to approximate the KL divergence (in fact, any f-divergence) between implicit distributions using density ratio estimation by probabilistic classification.
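The estimator fits in a few lines; a minimal sketch with logistic regression standing in for the probabilistic classifier (and assuming equal sample sizes, so no class-prior correction is needed):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def kl_via_classifier(x_p, x_q):
    """Estimate KL(p || q) from two equally-sized sample sets by training
    a classifier to tell them apart: with balanced classes, the log-odds
    log P(y=1|x) - log P(y=0|x) estimate the log density ratio log p(x)/q(x)."""
    X = np.concatenate([x_p, x_q])
    y = np.concatenate([np.ones(len(x_p)), np.zeros(len(x_q))])
    clf = LogisticRegression().fit(X, y)
    # KL(p || q) = E_p[log p(x)/q(x)], approximated over the p-samples.
    return clf.decision_function(x_p).mean()
```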
We illustrate how to build complicated probability distributions in a modular fashion using the Bijector API from TensorFlow Probability.
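For example (standard TFP API; the particular composition is illustrative), a shifted-and-scaled log-normal assembled from a standard normal:

```python
import tensorflow_probability as tfp

tfd, tfb = tfp.distributions, tfp.bijectors

# Push a standard normal through exp(scale * x + shift). Bijectors in a
# Chain are applied right-to-left, so Scale and Shift act before Exp.
dist = tfd.TransformedDistribution(
    distribution=tfd.Normal(loc=0., scale=1.),
    bijector=tfb.Chain([tfb.Exp(), tfb.Shift(0.5), tfb.Scale(2.)]))

samples = dist.sample(5)             # draw from the transformed distribution
log_probs = dist.log_prob(samples)   # densities via the change of variables
```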
The 38th International Conference on Machine Learning (ICML 2021), virtual.
ELLIS AutoML Seminars (virtual).
NeurIPS 2020 4th Workshop on Meta-Learning (virtual).
Amazon Machine Learning Community Tech Talk, Berlin.
ICML 2018 Workshop on Theoretical Foundations and Applications of Deep Generative Models (TADGM), Stockholm.