Spherical Inducing Features for Orthogonally-Decoupled Gaussian Processes

Figure: A tesseral surface harmonic of the first kind, $Y_{8,3}$.


Despite their many desirable properties, Gaussian processes (GPs) are often compared unfavorably to deep neural networks (NNs) for lacking the ability to learn representations. Recent efforts to bridge the gap between GPs and deep NNs have yielded a new class of inter-domain variational GPs in which the inducing variables correspond to hidden units of a feedforward NN. In this work, we examine some practical issues associated with this approach and propose an extension that leverages the orthogonal decomposition of GPs to mitigate these limitations. In particular, we introduce spherical inter-domain features to construct more flexible data-dependent basis functions for both the principal and orthogonal components of the GP approximation and show that incorporating NN activation features under this framework not only alleviates these shortcomings but is more scalable than alternative strategies. Experiments on multiple benchmark datasets demonstrate the effectiveness of our approach.
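To give a flavor of the decoupled construction described above, the sketch below shows an approximate GP mean built from two separate sets of basis functions: a principal basis of kernel evaluations at inducing inputs, plus an orthogonal component spanned by NN-style activation features. This is an illustrative simplification under assumed names (`rbf`, `relu_features`, `decoupled_mean` are all hypothetical), not the paper's exact construction or any library's API.

```python
import numpy as np

# Illustrative sketch (not the paper's exact method): an approximate GP mean
# with two bases -- (i) a "principal" basis of kernel evaluations at inducing
# inputs Z, and (ii) an "orthogonal" component built from NN activation
# features phi(x) = relu(W x + b). All function names are hypothetical.

def rbf(X, Z, lengthscale=1.0):
    """Squared-exponential kernel matrix K(X, Z)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def relu_features(X, W, b):
    """NN activation features: one column per hidden unit."""
    return np.maximum(X @ W.T + b, 0.0)

def decoupled_mean(X, Z, a, W, b, beta):
    """Mean m(x) = K(x, Z) a  +  phi(x)^T beta."""
    return rbf(X, Z) @ a + relu_features(X, W, b) @ beta

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))    # test inputs
Z = rng.normal(size=(3, 2))    # inducing inputs (principal basis)
a = rng.normal(size=3)         # weights on the principal basis
W = rng.normal(size=(4, 2))    # hidden-unit weights (orthogonal basis)
b = rng.normal(size=4)         # hidden-unit biases
beta = rng.normal(size=4)      # weights on the activation features

m = decoupled_mean(X, Z, a, W, b, beta)
print(m.shape)  # one mean value per test input: (5,)
```

The point of the decoupling is that the two weight vectors `a` and `beta` can be sized and optimized independently, so the number of activation features need not match the number of inducing points.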

Proceedings of the 40th International Conference on Machine Learning (ICML 2023)
Louis Tiao
