<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Bayesian Optimization |</title><link>https://tiao.io/tags/bayesian-optimization/</link><atom:link href="https://tiao.io/tags/bayesian-optimization/index.xml" rel="self" type="application/rss+xml"/><description>Bayesian Optimization</description><generator>HugoBlox Kit (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Fri, 01 May 2026 00:00:00 +0000</lastBuildDate><image><url>https://tiao.io/media/icon_hu_9c2a75fde2335590.png</url><title>Bayesian Optimization</title><link>https://tiao.io/tags/bayesian-optimization/</link></image><item><title>📄 One paper accepted to ICML 2026</title><link>https://tiao.io/posts/one-paper-accepted-to-icml2026/</link><pubDate>Fri, 01 May 2026 00:00:00 +0000</pubDate><guid>https://tiao.io/posts/one-paper-accepted-to-icml2026/</guid><description>&lt;p&gt;Our paper
was accepted to ICML 2026. This is
joint work with Jihao Andreas Lin and Sebastian Ament (co-first authors), and
David Eriksson, Maximilian Balandat, and Eytan Bakshy.&lt;/p&gt;</description></item><item><title>Empirical Gaussian Processes</title><link>https://tiao.io/publications/empirical-gaussian-processes/</link><pubDate>Sun, 01 Feb 2026 00:00:00 +0000</pubDate><guid>https://tiao.io/publications/empirical-gaussian-processes/</guid><description/></item><item><title>Ax: A Platform for Adaptive Experimentation</title><link>https://tiao.io/publications/ax-platform/</link><pubDate>Mon, 01 Sep 2025 00:00:00 +0000</pubDate><guid>https://tiao.io/publications/ax-platform/</guid><description/></item><item><title>📄 One paper accepted to AutoML 2025</title><link>https://tiao.io/posts/one-paper-accepted-to-automl2025/</link><pubDate>Tue, 15 Jul 2025 00:00:00 +0000</pubDate><guid>https://tiao.io/posts/one-paper-accepted-to-automl2025/</guid><description>&lt;p&gt;Our paper
was accepted to the
4th International Conference on Automated Machine Learning (AutoML 2025) in
the ABCD Track. This is joint work with the Adaptive Experimentation team at
Meta CAS, with Miles Olson, Elizabeth Santorella, Sait Cakmak, and me as
co-first authors.&lt;/p&gt;</description></item><item><title>💼 Joined Meta CAS Adaptive Experimentation</title><link>https://tiao.io/posts/joined-meta-cas/</link><pubDate>Mon, 05 Aug 2024 00:00:00 +0000</pubDate><guid>https://tiao.io/posts/joined-meta-cas/</guid><description>&lt;p&gt;Started as a Research Scientist at Meta on the Adaptive Experimentation (AE)
team within Central Applied Science (CAS), based in New York City. The team
develops and maintains the open-source Ax and BoTorch frameworks for Bayesian optimization and
adaptive experimentation at scale.&lt;/p&gt;</description></item><item><title>Ax</title><link>https://tiao.io/projects/ax/</link><pubDate>Thu, 01 Aug 2024 00:00:00 +0000</pubDate><guid>https://tiao.io/projects/ax/</guid><description>&lt;p&gt;
Ax is an open-source platform for adaptive experimentation, developed by Meta&amp;rsquo;s Adaptive
Experimentation team. It provides a unified interface for multi-objective and
constrained Bayesian optimization, built on top of BoTorch.&lt;/p&gt;
&lt;p&gt;I contribute to Ax as part of my work at Meta, with a particular focus on
sample-efficient methods, capacity management, and
scaling-law-based modeling. Co-first author on a paper accepted to AutoML 2025.&lt;/p&gt;</description></item><item><title>🎓 PhD thesis completed</title><link>https://tiao.io/posts/phd-thesis-completed/</link><pubDate>Fri, 15 Dec 2023 00:00:00 +0000</pubDate><guid>https://tiao.io/posts/phd-thesis-completed/</guid><description>&lt;p&gt;Submitted my PhD thesis, &lt;em&gt;Probabilistic Machine Learning in the Age of Deep
Learning: New Perspectives for Gaussian Processes, Bayesian Optimization and
Beyond&lt;/em&gt;, at the University of Sydney. Supervised by Fabio Ramos and Edwin
Bonilla. The full text and chapter PDFs are available.&lt;/p&gt;</description></item><item><title>Probabilistic Machine Learning in the Age of Deep Learning: New Perspectives for Gaussian Processes, Bayesian Optimization and Beyond (PhD Thesis)</title><link>https://tiao.io/publications/phd-thesis/</link><pubDate>Fri, 01 Sep 2023 00:00:00 +0000</pubDate><guid>https://tiao.io/publications/phd-thesis/</guid><description>&lt;p&gt;The full text is available as a single PDF file &lt;a href="phd-thesis-louis-tiao.pdf" target="_blank" rel="noopener"&gt;
PDF&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;You can also find a list of contents and PDFs corresponding to each individual chapter below:&lt;/p&gt;
&lt;h3 id="table-of-contents"&gt;Table of Contents&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Chapter 1: Introduction &lt;a href="contents/1 Introduction.pdf" target="_blank" rel="noopener"&gt;PDF&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Chapter 2: Background &lt;a href="contents/2 Background.pdf" target="_blank" rel="noopener"&gt;PDF&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Chapter 3: Orthogonally-Decoupled Sparse Gaussian Processes with Spherical Neural Network Activation Features &lt;a href="contents/3 Orthogonally-Decoupled Sparse Gaussian Processes with Spherical Neural Network Activation Features.pdf" target="_blank" rel="noopener"&gt;PDF&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Chapter 4: Cycle-Consistent Generative Adversarial Networks as a Bayesian Approximation &lt;a href="contents/4 Cycle-Consistent Generative Adversarial Networks as a Bayesian Approximation.pdf" target="_blank" rel="noopener"&gt;PDF&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Chapter 5: Bayesian Optimisation by Classification with Deep Learning and Beyond &lt;a href="contents/5 Bayesian Optimisation by Classification with Deep Learning and Beyond.pdf" target="_blank" rel="noopener"&gt;PDF&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Chapter 6: Conclusion &lt;a href="contents/6 Conclusion.pdf" target="_blank" rel="noopener"&gt;PDF&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Appendix A: Numerical Methods for Improved Decoupled Sampling of Gaussian Processes &lt;a href="contents/A Numerical Methods for Improved Decoupled Sampling of Gaussian Processes.pdf" target="_blank" rel="noopener"&gt;PDF&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Bibliography &lt;a href="contents/Bibliography.pdf" target="_blank" rel="noopener"&gt;PDF&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Please find &lt;em&gt;Chapter 1: Introduction&lt;/em&gt; reproduced in full below:&lt;/p&gt;
&lt;h3 id="introduction"&gt;Introduction&lt;/h3&gt;
&lt;p&gt;Artificial intelligence (AI) stands poised to be among the most disruptive technologies of our era. The breakneck pace of recent AI advancements has been spearheaded by machine learning (ML), particularly the resurgence of &lt;em&gt;deep learning&lt;/em&gt;. Deep learning is as old as the first general-purpose electronic computer; with roots tracing back to the 1940s and ’50s, the revival of deep learning, beginning in the early 2010s, was catalysed by a series of breakthroughs that shattered previously perceived limitations and captivated the collective imagination. These breakthroughs span various domains, including computer vision, speech recognition, natural language processing, protein folding, generative art and artificial creativity, as well as reinforcement learning for robotics control and achieving superhuman-level gameplay.&lt;/p&gt;
&lt;p&gt;Nevertheless, it is crucial to view these developments as means to an ultimate end rather than an end in themselves. Arguably, the true pinnacle of AI’s capabilities lies in optimal &lt;em&gt;decision-making&lt;/em&gt;, whether that entails offering analyses and insights to aid humans in making better decisions or completely automating the decision-making process altogether. Practically any task directed towards a well-defined objective can be boiled down to a cascade of decisions. At a fundamental level, operating a vehicle involves a continuous stream of decisions involving accelerating, braking, and turning. Financial trading revolves around decisions to buy, sell, or hold various assets. Even complex engineering tasks, such as designing an aerofoil, involve a sequence of decisions about adjusting design variables to achieve desirable aerodynamic characteristics.&lt;/p&gt;
&lt;p&gt;Yet, the intricacies of decision-making surpass what any single advancement in deep learning can address. While convolutional neural networks (CNNs) can facilitate object detection tasks in autonomous vehicles, recurrent neural networks (RNNs) can aid in forecasting market dynamics for systematic trading, and physics-informed NNs can assist in predicting aerodynamic effects, it remains the case that no target or quantity of interest can be entirely known or predictable (indeed, if they were, the pursuit of predictive modelling and ML would be superfluous). Instead, predictions often prove unreliable, or at best, &lt;em&gt;uncertain&lt;/em&gt;, due to the limitations of our knowledge and the complexity and variability inherent in the underlying real-world processes. The impressive power of deep learning models often overshadows their ignorance of the limits of their own knowledge and the extent of uncertainty in their predictions. When these predictions are integrated into a sequential decision-making framework, such uncertainty can amplify, compound, and lead to catastrophic consequences. In the context of aeronautical engineering, this could result in inefficient designs; in quantitative finance, it can lead to devastating capital losses; and in autonomous driving, it can even cost lives.&lt;/p&gt;
&lt;h4 id="probabilistic-machine-learning"&gt;Probabilistic Machine Learning&lt;/h4&gt;
&lt;p&gt;Grounded in the laws of probability and Bayesian statistics, &lt;em&gt;probabilistic&lt;/em&gt; ML provides a consistent framework for systematically reasoning about the unknown. The probabilistic approach to ML acknowledges that the real world is fraught with uncertainty and embraces this uncertainty as an inherent part of decision-making. Unlike traditional methods, including those of deep learning, it recognises model predictions not as absolute truths that can be represented as single &lt;em&gt;point estimates&lt;/em&gt; produced from a deterministic mapping, but as full &lt;em&gt;probability distributions&lt;/em&gt; that capture the potential outcomes of a random variable as it propagates through some underlying data-generating process. In a &lt;em&gt;probabilistic model&lt;/em&gt;, all quantities are treated as random variables governed by probability distributions – the data are treated as observed variables, which are influenced by some underlying hidden variables, e.g., the model parameters. A prior distribution is used to express reasonable values for these hidden variables and to eliminate implausible ones. The relationship between observed and hidden variables is described using the likelihood, and the process of Bayesian inference amounts to calculating, using basic laws of probability, a posterior distribution over the hidden factors conditioned on the observed data, which can be seen as a refinement of the prior beliefs in light of new evidence. While the posterior distribution can be useful in and of itself, its primary role lies in facilitating subsequent prediction and decision-making by providing full probability distributions over predicted outcomes. This capability allows the decision-maker to assess the range of possible scenarios and their associated probabilities, enabling a more nuanced understanding of uncertainty and risk, which is indispensable in complex, dynamic environments where the repercussions of incorrect decisions can be severe.
In essence, probabilistic ML equips autonomous decision-making systems with a probabilistic worldview, enabling them to navigate ambiguity and make sound decisions in the face of imperfect information.&lt;/p&gt;
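&lt;p&gt;As a minimal illustration of the prior-to-posterior workflow just described, consider a Beta-Bernoulli model (a hypothetical example, not drawn from the thesis), where conjugacy makes the posterior available in closed form:&lt;/p&gt;

```python
import numpy as np
from scipy import stats

# Hypothetical example: infer the unknown success probability of a
# Bernoulli process (e.g. a coin of unknown bias).
# Prior: Beta(a, b) encodes beliefs about plausible values before seeing data.
a, b = 2.0, 2.0  # mildly favours values near 0.5

# Observed data: 7 successes and 3 failures.
successes, failures = 7, 3

# Conjugacy: Beta prior + Bernoulli likelihood gives a Beta posterior whose
# parameters are simply the prior's, updated by the observed counts.
a_post, b_post = a + successes, b + failures
posterior = stats.beta(a_post, b_post)

# The posterior is a full distribution, not a point estimate:
print("posterior mean:", posterior.mean())            # (a + 7) / (a + b + 10) = 9/14
print("95% credible interval:", posterior.interval(0.95))
```

The credible interval makes the residual uncertainty explicit, which is exactly the quantity a downstream decision-maker would consume.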
&lt;h4 id="probabilistic-ml-vs-deep-learning"&gt;Probabilistic ML vs. Deep Learning&lt;/h4&gt;
&lt;p&gt;While deep learning has dominated recent AI advances, probabilistic ML remains as important as ever and continues to offer valuable tools for addressing AI challenges that cannot be fully resolved by deep learning alone. Although both approaches can be combined to create hybrid methods that leverage their respective strengths, some defining characteristics have traditionally set deep learning apart from probabilistic ML. Perhaps most notably, probabilistic ML approaches can achieve remarkable predictive performance even when data is scarce. In contrast, deep learning models tend to be data-intensive by nature, often demanding datasets of a scale proportional to their size (i.e., their parameter count), which has seen explosive growth in recent years. With that being said, inference in many probabilistic models poses computational problems that are difficult to scale. On the other hand, deep learning approaches have excelled in scalability, a key factor contributing to their widespread success. This scalability is bolstered by their compatibility with various speed-enhancing mechanisms such as stochastic optimisation, specialised hardware accelerators (GPUs and TPUs), as well as distributed and/or cloud-based computing infrastructure. To bridge this gap, substantial research effort has been devoted to enabling probabilistic ML to benefit from these advantages through optimisation-based approximations to Bayesian inference.&lt;/p&gt;
&lt;p&gt;Moreover, as mentioned earlier, these paradigms are by no means mutually exclusive. Indeed, it is often possible to directly extend existing models with a Bayesian treatment of their parameters, adding a layer of probabilistic reasoning to the model, and allowing it to not only make predictions but also estimate the uncertainty associated with those predictions. An excellent example is the Bayesian neural network (BNN), which treats the weights as hidden variables and leverages posterior inference to provide predictions while estimating associated uncertainties, delivering a more robust and principled approach to deep learning.&lt;/p&gt;
&lt;p&gt;The Bayesian formalism naturally gives rise to many popular methods and paradigms, often in the form of point estimates or other kinds of approximations. The quintessential example of this is found in linear regression, in particular, in ridge and lasso regression, which correspond variously to maximum &lt;em&gt;a posteriori&lt;/em&gt; (MAP) estimates in Bayesian linear regression (BLR) models with prior distributions possessing different sparsity-inducing characteristics – more broadly, mitigations against over-fitting tend to arise organically in Bayesian methods, which is why they are frequently characterised as being fundamentally more robust against over-fitting. Likewise, the once &lt;em&gt;à la mode&lt;/em&gt; support vector machines (SVMs) can be seen as MAP estimates for a class of nonparametric Bayesian models, dropout in NNs can be seen as a variational approximation to exact inference in BNNs, and unsupervised learning methods such as factor analysis (FA) and principal component analysis (PCA) are instances of a class of latent variable models (LVMs) known as linear-Gaussian factor models, to name just a few examples. Time and again, classical approaches have not only benefitted from being viewed through the Bayesian perspective but have also been enriched and redefined by the depth of insights this framework provides.&lt;/p&gt;
&lt;h3 id="thesis-goals"&gt;Thesis Goals&lt;/h3&gt;
&lt;p&gt;The over-arching goal of this thesis is to continue advancing the integration and cross-pollination between deep learning and probabilistic ML. We aim to further the interplay between these two fields, both by incorporating probabilistic interpretations and uncertainty quantification into popular deep learning frameworks, and by leveraging the representational power of deep NNs to improve established Bayesian methods. This dual-pronged approach provides fresh perspectives and taps the complementary strengths of both paradigms, advancing the foundations of AI and facilitating the development of more capable and dependable decision support frameworks. Ultimately, we strive to unlock the potential of deep learning within high-impact probabilistic ML methodologies, and to lend useful Bayesian perspectives on current deep learning techniques.&lt;/p&gt;
&lt;h4 id="gaussian-process-models"&gt;Gaussian Process Models&lt;/h4&gt;
&lt;p&gt;Arguably, no family of probabilistic models embodies the ethos of probabilistic ML and illustrates its nuances and parallels with deep learning quite like the Gaussian process (GP). Accordingly, they shall occupy a prominent place in our thesis. In particular, GPs stand out as the ideal choice when dealing with limited data, offer the flexibility to encode prior beliefs through the covariance function, and provide predictive uncertainty estimates with a fine calibration that is second to none. Conversely, they are challenging to scale to large datasets, a limitation that has spurred extensive research and development efforts. Furthermore, in contrast to deep learning models, which are often lauded for their ability to automatically uncover valuable patterns and features in data, GPs have at times been dismissed as unsophisticated smoothing mechanisms. Despite these apparent disparities, GPs are intricately connected to NNs in numerous ways. Among these, one of the most classical and well-known relationships is the convergence of single-layer NNs with randomly initialised weights toward GPs in the infinite-width limit. Similar links have also been identified between GPs and infinitely wide &lt;em&gt;deep&lt;/em&gt; NNs.&lt;/p&gt;
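&lt;p&gt;The infinite-width limit mentioned above can be checked empirically. For a single hidden layer of ReLU units with standard Gaussian weights (this particular activation and its limiting kernel, the order-1 arc-cosine kernel of Cho and Saul, are our illustrative choices, not specifics from the thesis), the limiting GP covariance is known in closed form, and a Monte Carlo estimate over random networks should match it:&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(42)

# A single-hidden-layer ReLU network with i.i.d. N(0, 1) weights:
#   f(x) = (1 / sqrt(H)) * sum_i v_i * relu(w_i . x)
# As the width H grows, f converges to a GP whose covariance is the
# order-1 arc-cosine kernel (Cho and Saul); we check this empirically.

def arccos1_kernel(x, z):
    """Analytic infinite-width covariance for ReLU features."""
    nx, nz = np.linalg.norm(x), np.linalg.norm(z)
    theta = np.arccos(np.clip(x @ z / (nx * nz), -1.0, 1.0))
    return nx * nz * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)

def empirical_cov(x, z, width, samples=4000):
    # Draw many random networks and average f(x) * f(z) across them.
    d = x.shape[0]
    fs_x, fs_z = np.empty(samples), np.empty(samples)
    for s in range(samples):
        W = rng.normal(size=(width, d))
        v = rng.normal(size=width)
        fs_x[s] = v @ np.maximum(W @ x, 0.0) / np.sqrt(width)
        fs_z[s] = v @ np.maximum(W @ z, 0.0) / np.sqrt(width)
    return np.mean(fs_x * fs_z)

x = np.array([1.0, 0.0])
z = np.array([0.6, 0.8])
emp = empirical_cov(x, z, width=64)
print("analytic :", arccos1_kernel(x, z))
print("empirical:", emp)
```

The two numbers agree up to Monte Carlo error, a small-scale glimpse of the NN-to-GP correspondence.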
&lt;p&gt;In an effort to elevate the representational capabilities of GPs to a level comparable with deep NNs, deep Gaussian processes (DGPs) stack together multiple layers of GPs. Additional efforts to construct efficient sparse GP approximations have leveraged the advantageous properties of computations on the hypersphere, which has led to DGP models in which the propagation of posterior predictive means is equivalent to a forward pass through a deep NN. Notably, as a side effect, this model effectively provides uncertainty estimates for deep NNs through its predictive variance. Among the contributions of our thesis is the further development of this framework, integrating cutting-edge techniques to address some of its practical limitations, thereby narrowing the performance gap between GPs and deep NNs.&lt;/p&gt;
&lt;p&gt;Probabilistic models, serving a crucial role as decision support tools, routinely aid scientific discovery in fields such as physics and astronomy, guiding advancements in areas of medicine and healthcare encompassing bioinformatics, epidemiology, and medical diagnosis. Beyond that, these models have wide-ranging applications in economics, econometrics, and the social sciences. Moreover, they are indispensable in various engineering disciplines, such as robotics and environmental engineering. Among the many probabilistic models, GPs stand out as a powerful driving force behind a number of important sequential decision-making frameworks, including active learning and reinforcement learning, and the broader area of probabilistic numerics at large. Notably, Bayesian optimisation (BO) is one major area that relies heavily on GPs and will feature extensively in our thesis.&lt;/p&gt;
&lt;h4 id="bayesian-optimisation"&gt;Bayesian optimisation&lt;/h4&gt;
&lt;p&gt;BO is a powerful methodology dedicated to the global optimisation of complex and resource-intensive objective functions. In contrast to classical optimisation methods, BO excels even on functions about which few structural assumptions can be made: these functions may be non-convex, lack gradients, lack a well-defined mathematical form, and be observable only indirectly through noisy measurements.&lt;/p&gt;
&lt;p&gt;At its core, BO is a sequential decision-making algorithm: it relies on observations from past function evaluations to determine the next candidate location for evaluation in pursuit of optimal solutions. BO leverages a probabilistic model, often a GP, to represent its knowledge and beliefs about the unknown function. This model is continuously updated with the acquisition of each new observation, enabling the algorithm to adapt its behaviour and make sound decisions based on the evolving information.&lt;/p&gt;
&lt;p&gt;BO effectively manages uncertainty inherent in such sequential decision-making processes by making use of the probabilistic model to the fullest, harnessing the entire predictive distribution, particularly, the predictive uncertainty, to select promising candidate solutions that bring the most value to the optimisation process. This generally consists not merely of those most likely to optimise the objective function (i.e., &lt;em&gt;exploiting&lt;/em&gt; that which is known), but also those likely to reveal the most knowledge and information about the function itself (i.e., &lt;em&gt;exploring&lt;/em&gt; that which remains unknown).&lt;/p&gt;
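&lt;p&gt;The loop just described can be sketched in a few dozen lines. The following is a minimal illustration, assuming a GP surrogate with a fixed RBF kernel, the expected improvement acquisition function, and a toy one-dimensional objective, all of which are illustrative choices rather than specifics from the thesis:&lt;/p&gt;

```python
import numpy as np
from scipy.stats import norm

def rbf(A, B, ls=0.2):
    # Squared-exponential kernel with a fixed lengthscale (illustrative choice).
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / ls**2)

def gp_posterior(X, y, Xs, jitter=1e-6):
    # Exact GP posterior mean and standard deviation at test points Xs.
    K = rbf(X, X) + jitter * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v**2, 0), 1e-12, None)  # k(x, x) = 1 here
    return Ks.T @ alpha, np.sqrt(var)

def expected_improvement(mu, sd, best):
    # Expected improvement over the incumbent, for minimisation.
    z = (best - mu) / sd
    return (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

f = lambda x: np.sin(3 * x) + x**2 - 0.7 * x   # toy objective to minimise
X = np.array([[-0.9], [0.4], [1.1]])           # initial design
y = f(X).ravel()
grid = np.linspace(-1.0, 2.0, 500)[:, None]    # candidate pool

for _ in range(15):
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, y.min()))]
    X = np.vstack([X, x_next])                 # evaluate f and refit the model
    y = np.append(y, f(x_next)[0])

print("best x:", X[np.argmin(y)].item(), "best f:", y.min())
```

Note how the acquisition function trades off the two terms, a low predicted mean (exploitation) against a large predictive standard deviation (exploration), exactly the balance described above.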
&lt;p&gt;This pronounced emphasis on well-calibrated uncertainty distinguishes BO as one of the standout “killer apps” for GPs and a jewel in the crown of probabilistic ML applications. In practice, BO has proven instrumental across science, engineering, and industry, where efficiency and cost-effectiveness are paramount. Its applications include protein engineering, material discovery, experimental physics (e.g., experiments involving ultra-cold atoms and free-electron lasers), environmental monitoring (sensor placement), and the design of aerodynamic aerofoils, integrated circuits, broadband high-efficiency power amplifiers, and fast-charging protocols for lithium-ion batteries. Notably, it has played a crucial role in automating the hyperparameter tuning of various ML models, especially deep learning models, thus representing yet another way in which probabilistic ML has contributed to the advancement of deep learning.&lt;/p&gt;
&lt;p&gt;However, GPs are not universally suitable for all BO problem scenarios. They are most effective when dealing with smooth, stationary functions with homoscedastic noise and a relatively modest input dimensionality. Additionally, GPs are easiest to work with for functions with a single output and purely continuous inputs. While a surprisingly wide array of real-world challenges satisfy these conditions, many high-impact problems clearly fall outside of this scope: gene and protein design, which involves sequential inputs; neural architecture search (NAS), which involves structured inputs with intricate conditional dependencies; and automotive safety engineering, which involves numerous constraints and multiple objectives. This is not to say that GPs cannot be extended to such challenging scenarios. However, such extensions almost always come at a cost. Consequently, it makes sense to appeal to alternative modelling paradigms more naturally suited to specific tasks, e.g., employing random forests (RFs) to handle discrete and structured inputs, or deep NNs for capturing nonstationary behaviour and dealing with multiple objectives. A major contribution of this thesis is the introduction of a new formulation of BO that seamlessly accommodates virtually any modelling paradigm, including deep learning, without any compromise.&lt;/p&gt;
&lt;h3 id="thesis-overview"&gt;Thesis Overview&lt;/h3&gt;
&lt;p&gt;The core contributions of our thesis are summarised as follows:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;span id="item:contrib-orthogonal-sparse-spherical-gp" label="item:contrib-orthogonal-sparse-spherical-gp"&gt;&lt;/span&gt; We improve upon the framework for sparse hyperspherical GP approximations that employ nonlinear activations as inter-domain inducing features. This framework serves as a bridge between GPs and NNs, with posterior predictive mean taking the form of single-layer feedforward NNs. Our thesis examines some practical issues associated with this approach and proposes an extension that takes advantage of the orthogonal decoupling of GPs to mitigate these limitations. In particular, we introduce spherical inter-domain features to construct more flexible data-dependent basis functions for both the principal and orthogonal components of the GP approximation. We demonstrate that incorporating orthogonal inducing variables under this framework not only alleviates these shortcomings but also offers superior scalability compared to alternative strategies.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;span id="item:contrib-cycle-bayes" label="item:contrib-cycle-bayes"&gt;&lt;/span&gt; We provide a probabilistic perspective on cycle-consistent adversarial networks (CYCLEGANs), a cutting-edge deep generative model for style transfer and image-to-image translation. Specifically, we frame the problem of learning cross-domain correspondences without paired data as Bayesian inference in a latent variable model (LVM), in which the goal is to uncover the hidden representations of entities from one domain as entities in another. First, we introduce implicit LVMs, which allow flexible prior specification over latent representations as implicit distributions. Next, we develop a new variational inference (VI) framework that minimises a symmetrised statistical divergence between the variational and true joint distributions. Finally, we show that CYCLEGANs emerge as a closely-related variant of our framework, providing a useful interpretation as a Bayesian approximation.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;span id="item:contrib-bore" label="item:contrib-bore"&gt;&lt;/span&gt; We introduce a model-agnostic formulation of BO based on classification. Building on the established links between class-probability estimation (CPE), density-ratio estimation (DRE), and the improvement-based acquisition functions, we reformulate the acquisition function as a binary classifier over candidate solutions. This approach eliminates the need for an explicit probabilistic model of the objective function and casts aside the limitations of tractability constraints. As a result, our model-agnostic BO approach substantially broadens its applicability across diverse problem scenarios, accommodating flexible and scalable modelling paradigms such as deep learning without necessitating approximations or sacrificing expressive and representational capacity.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
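&lt;p&gt;The third contribution can be conveyed schematically: at each step, the observations are split at the gamma-quantile of their objective values into a promising set and the rest, a probabilistic classifier is fit to this binary labelling, and its predicted class probability serves directly as the acquisition function. The sketch below is an illustrative toy only (logistic regression on a fixed cubic basis, greedy maximisation over a grid, and no exploration safeguards), whereas the actual formulation admits arbitrary classifiers, including deep networks:&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(1)

def features(x):
    # Hypothetical fixed cubic basis so a linear classifier is nonlinear in x.
    return np.stack([np.ones_like(x), x, x**2, x**3], axis=-1)

def fit_logistic(Phi, t, steps=2000, lr=0.5):
    # Plain gradient ascent on the Bernoulli log-likelihood.
    w = np.zeros(Phi.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Phi @ w))
        w += lr * Phi.T @ (t - p) / len(t)
    return w

f = lambda x: (x - 0.3)**2            # toy objective to minimise
X = rng.uniform(-1.0, 1.0, size=10)   # initial random design
y = f(X)
grid = np.linspace(-1.0, 1.0, 401)
gamma = 0.25                          # fraction of observations labelled promising

for _ in range(15):
    tau = np.quantile(y, gamma)
    t = np.less_equal(y, tau).astype(float)   # 1 if among the best gamma-fraction
    w = fit_logistic(features(X), t)
    # The classifier's probability of the promising label plays the role of
    # the acquisition function; no explicit surrogate model of f is built.
    acq = 1.0 / (1.0 + np.exp(-features(grid) @ w))
    x_next = grid[np.argmax(acq)]
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))

print("best x:", X[np.argmin(y)], "best f:", y.min())
```

Because only a classifier is ever trained, any model family with a probabilistic output can be slotted in, which is precisely what makes the formulation model-agnostic.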
&lt;p&gt;Accordingly, our thesis is organised as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Chapter 2 (Background) lays the necessary groundwork for our thesis. We begin by outlining the fundamental principles of probability and Bayesian statistics, which form the basis of probabilistic ML. Additionally, we introduce the widely-adopted method of approximate Bayesian inference known as VI. Our discussion underscores the central role played by statistical divergences, prompting us to delve into a larger family of divergences and motivating our discussion of DRE. With a solid foundation in place, we shift our focus to GPs, providing an introductory overview and highlighting the most commonly-used sparse approximations. Finally, we conclude this background chapter by introducing the basic concepts behind BO.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Chapter 3 (Orthogonally-Decoupled Sparse GPs with Spherical Inducing Features) examines orthogonally-decoupled sparse GPs with spherical NN activation features, as summarised in the corresponding item above.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Chapter 4 (Cycle-Consistent Adversarial Learning as Bayesian Inference) examines CYCLEGANs from the perspective of approximate Bayesian inference, as summarised in the corresponding item above.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Chapter 5 (Bayesian Optimization by Density-Ratio Estimation) examines our model-agnostic approach to BO based on binary classification and DRE, as summarised in the corresponding item above.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Chapter 6 (Conclusion) brings this thesis to a close by reflecting on our main contributions and situating them in the broader landscape of probabilistic methods in ML. Finally, we conclude by presenting our outlook on the avenues for future research and development in this rapidly evolving field.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="references"&gt;References&lt;/h3&gt;
&lt;div id="refs" class="references csl-bib-body hanging-indent" entry-spacing="0" line-spacing="2"&gt;
&lt;div id="ref-anil2023palm" class="csl-entry"&gt;
&lt;p&gt;Anil, R., Dai, A. M., Firat, O., Johnson, M., Lepikhin, D., Passos, A., Shakeri, S., Taropa, E., Bailey, P., Chen, Z., et al. (2023). Palm 2 technical report. &lt;em&gt;arXiv Preprint arXiv:2305.10403&lt;/em&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-attia2020closed" class="csl-entry"&gt;
&lt;p&gt;Attia, P. M., Grover, A., Jin, N., Severson, K. A., Markov, T. M., Liao, Y.-H., Chen, M. H., Cheong, B., Perkins, N., Yang, Z., et al. (2020). Closed-loop optimization of fast-charging protocols for batteries with machine learning. &lt;em&gt;Nature&lt;/em&gt;, &lt;em&gt;578&lt;/em&gt;(7795), 397–402.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-bartholomew2011latent" class="csl-entry"&gt;
&lt;p&gt;Bartholomew, D. J., Knott, M., &amp;amp; Moustaki, I. (2011). &lt;em&gt;Latent variable models and factor analysis: A unified approach&lt;/em&gt;. John Wiley &amp;amp; Sons.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-bayes1763lii" class="csl-entry"&gt;
&lt;p&gt;Bayes, T. (1763). LII. An essay towards solving a problem in the doctrine of chances. By the late rev. Mr. Bayes, FRS communicated by mr. Price, in a letter to john canton, AMFR s. &lt;em&gt;Philosophical Transactions of the Royal Society of London&lt;/em&gt;, &lt;em&gt;53&lt;/em&gt;, 370–418.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-blundell2015weight" class="csl-entry"&gt;
&lt;p&gt;Blundell, C., Cornebise, J., Kavukcuoglu, K., &amp;amp; Wierstra, D. (2015). Weight uncertainty in neural network. &lt;em&gt;International Conference on Machine Learning&lt;/em&gt;, 1613–1622.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-brochu2010tutorial" class="csl-entry"&gt;
&lt;p&gt;Brochu, E., Cora, V. M., &amp;amp; De Freitas, N. (2010). A tutorial on bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. &lt;em&gt;arXiv Preprint arXiv:1012.2599&lt;/em&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-brown2020language" class="csl-entry"&gt;
&lt;p&gt;Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. (2020). Language models are few-shot learners. &lt;em&gt;Advances in Neural Information Processing Systems&lt;/em&gt;, &lt;em&gt;33&lt;/em&gt;, 1877–1901.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-chen2015bayesian" class="csl-entry"&gt;
&lt;p&gt;Chen, P., Merrick, B. M., &amp;amp; Brazil, T. J. (2015). Bayesian optimization for broadband high-efficiency power amplifier designs. &lt;em&gt;IEEE Transactions on Microwave Theory and Techniques&lt;/em&gt;, &lt;em&gt;63&lt;/em&gt;(12), 4263–4272.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-damianou2013deep" class="csl-entry"&gt;
&lt;p&gt;Damianou, A., &amp;amp; Lawrence, N. D. (2013). Deep gaussian processes. &lt;em&gt;Artificial Intelligence and Statistics&lt;/em&gt;, 207–215.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-deisenroth2011pilco" class="csl-entry"&gt;
&lt;p&gt;Deisenroth, M., &amp;amp; Rasmussen, C. E. (2011). PILCO: A model-based and data-efficient approach to policy search. &lt;em&gt;Proceedings of the 28th International Conference on Machine Learning (ICML-11)&lt;/em&gt;, 465–472.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-duris2020bayesian" class="csl-entry"&gt;
&lt;p&gt;Duris, J., Kennedy, D., Hanuka, A., Shtalenkova, J., Edelen, A., Baxevanis, P., Egger, A., Cope, T., McIntire, M., Ermon, S., et al. (2020). Bayesian optimization of a free-electron laser. &lt;em&gt;Physical Review Letters&lt;/em&gt;, &lt;em&gt;124&lt;/em&gt;(12), 124801.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-dutordoir2020sparse" class="csl-entry"&gt;
&lt;p&gt;Dutordoir, V., Durrande, N., &amp;amp; Hensman, J. (2020). Sparse Gaussian processes with spherical harmonic features. &lt;em&gt;International Conference on Machine Learning&lt;/em&gt;, 2793–2802.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-dutordoir2021deep" class="csl-entry"&gt;
&lt;p&gt;Dutordoir, V., Hensman, J., Wilk, M. van der, Ek, C. H., Ghahramani, Z., &amp;amp; Durrande, N. (2021). Deep neural networks as point estimates for deep Gaussian processes. &lt;em&gt;Advances in Neural Information Processing Systems&lt;/em&gt;, &lt;em&gt;34&lt;/em&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-forrester2009recent" class="csl-entry"&gt;
&lt;p&gt;Forrester, A. I., &amp;amp; Keane, A. J. (2009). Recent advances in surrogate-based optimization. &lt;em&gt;Progress in Aerospace Sciences&lt;/em&gt;, &lt;em&gt;45&lt;/em&gt;(1-3), 50–79.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-gal2016dropout" class="csl-entry"&gt;
&lt;p&gt;Gal, Y., &amp;amp; Ghahramani, Z. (2016). Dropout as a bayesian approximation: Representing model uncertainty in deep learning. &lt;em&gt;International Conference on Machine Learning&lt;/em&gt;, 1050–1059.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-garnett_bayesoptbook_2023" class="csl-entry"&gt;
&lt;p&gt;Garnett, R. (2023). &lt;em&gt;&lt;span class="nocase"&gt;Bayesian Optimization&lt;/span&gt;&lt;/em&gt;. Cambridge University Press.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-garnett2010bayesian" class="csl-entry"&gt;
&lt;p&gt;Garnett, R., Osborne, M. A., &amp;amp; Roberts, S. J. (2010). Bayesian optimization for sensor set selection. &lt;em&gt;Proceedings of the 9th ACM/IEEE International Conference on Information Processing in Sensor Networks&lt;/em&gt;, 209–219.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-gelman2013bayesian" class="csl-entry"&gt;
&lt;p&gt;Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., &amp;amp; Rubin, D. B. (2013). &lt;em&gt;Bayesian data analysis&lt;/em&gt;. CRC press.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-girshick2014rich" class="csl-entry"&gt;
&lt;p&gt;Girshick, R., Donahue, J., Darrell, T., &amp;amp; Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. &lt;em&gt;Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition&lt;/em&gt;, 580–587.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-gonzalez2015bayesian" class="csl-entry"&gt;
&lt;p&gt;Gonzalez, J., Longworth, J., James, D. C., &amp;amp; Lawrence, N. D. (2015). Bayesian optimization for synthetic gene design. &lt;em&gt;arXiv Preprint arXiv:1505.01627&lt;/em&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-goodfellow2014generative" class="csl-entry"&gt;
&lt;p&gt;Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., &amp;amp; Bengio, Y. (2014). Generative adversarial networks. &lt;em&gt;arXiv Preprint arXiv:1406.2661&lt;/em&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-graves2013speech" class="csl-entry"&gt;
&lt;p&gt;Graves, A., Mohamed, A., &amp;amp; Hinton, G. (2013). Speech recognition with deep recurrent neural networks. &lt;em&gt;2013 IEEE International Conference on Acoustics, Speech and Signal Processing&lt;/em&gt;, 6645–6649.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-hennig2022probabilistic" class="csl-entry"&gt;
&lt;p&gt;Hennig, P., Osborne, M. A., &amp;amp; Kersting, H. P. (2022). &lt;em&gt;Probabilistic numerics&lt;/em&gt;. Cambridge University Press.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-hie2022adaptive" class="csl-entry"&gt;
&lt;p&gt;Hie, B. L., &amp;amp; Yang, K. K. (2022). Adaptive machine learning for protein engineering. &lt;em&gt;Current Opinion in Structural Biology&lt;/em&gt;, &lt;em&gt;72&lt;/em&gt;, 145–152.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-hinton2012deep" class="csl-entry"&gt;
&lt;p&gt;Hinton, G., Deng, L., Yu, D., Dahl, G. E., Mohamed, A., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Sainath, T. N., et al. (2012). Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. &lt;em&gt;IEEE Signal Processing Magazine&lt;/em&gt;, &lt;em&gt;29&lt;/em&gt;(6), 82–97.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-ho2020denoising" class="csl-entry"&gt;
&lt;p&gt;Ho, J., Jain, A., &amp;amp; Abbeel, P. (2020). Denoising diffusion probabilistic models. &lt;em&gt;Advances in Neural Information Processing Systems&lt;/em&gt;, &lt;em&gt;33&lt;/em&gt;, 6840–6851.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-hoffmann2022training" class="csl-entry"&gt;
&lt;p&gt;Hoffmann, J., Borgeaud, S., Mensch, A., Buchatskaya, E., Cai, T., Rutherford, E., Casas, D. de L., Hendricks, L. A., Welbl, J., Clark, A., et al. (2022). Training compute-optimal large language models. &lt;em&gt;arXiv Preprint arXiv:2203.15556&lt;/em&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-houlsby2011bayesian" class="csl-entry"&gt;
&lt;p&gt;Houlsby, N., Huszár, F., Ghahramani, Z., &amp;amp; Lengyel, M. (2011). Bayesian active learning for classification and preference learning. &lt;em&gt;arXiv Preprint arXiv:1112.5745&lt;/em&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-jordan1998introduction" class="csl-entry"&gt;
&lt;p&gt;Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., &amp;amp; Saul, L. K. (1998). An introduction to variational methods for graphical models. &lt;em&gt;Learning in Graphical Models&lt;/em&gt;, 105–161.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-jumper2021highly" class="csl-entry"&gt;
&lt;p&gt;Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., Tunyasuvunakool, K., Bates, R., Žídek, A., Potapenko, A., et al. (2021). Highly accurate protein structure prediction with AlphaFold. &lt;em&gt;Nature&lt;/em&gt;, &lt;em&gt;596&lt;/em&gt;(7873), 583–589.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-krizhevsky2012imagenet" class="csl-entry"&gt;
&lt;p&gt;Krizhevsky, A., Sutskever, I., &amp;amp; Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. &lt;em&gt;Advances in Neural Information Processing Systems&lt;/em&gt;, &lt;em&gt;25&lt;/em&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-lam2018advances" class="csl-entry"&gt;
&lt;p&gt;Lam, R., Poloczek, M., Frazier, P., &amp;amp; Willcox, K. E. (2018). Advances in bayesian optimization with applications in aerospace engineering. &lt;em&gt;2018 AIAA Non-Deterministic Approaches Conference&lt;/em&gt;, 1656.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-laplace1814theorie" class="csl-entry"&gt;
&lt;p&gt;Laplace, P. S. (1814). &lt;em&gt;Théorie analytique des probabilités&lt;/em&gt;. Courcier.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-lee2017deep" class="csl-entry"&gt;
&lt;p&gt;Lee, J., Bahri, Y., Novak, R., Schoenholz, S. S., Pennington, J., &amp;amp; Sohl-Dickstein, J. (2017). Deep neural networks as gaussian processes. &lt;em&gt;arXiv Preprint arXiv:1711.00165&lt;/em&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-lillicrap2015continuous" class="csl-entry"&gt;
&lt;p&gt;Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., &amp;amp; Wierstra, D. (2015). Continuous control with deep reinforcement learning. &lt;em&gt;arXiv Preprint arXiv:1509.02971&lt;/em&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-lyu2017efficient" class="csl-entry"&gt;
&lt;p&gt;Lyu, W., Xue, P., Yang, F., Yan, C., Hong, Z., Zeng, X., &amp;amp; Zhou, D. (2017). An efficient bayesian optimization approach for automated optimization of analog circuits. &lt;em&gt;IEEE Transactions on Circuits and Systems I: Regular Papers&lt;/em&gt;, &lt;em&gt;65&lt;/em&gt;(6), 1954–1967.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-mackay1992practical" class="csl-entry"&gt;
&lt;p&gt;MacKay, D. J. (1992). A practical bayesian framework for backpropagation networks. &lt;em&gt;Neural Computation&lt;/em&gt;, &lt;em&gt;4&lt;/em&gt;(3), 448–472.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-mackay2003information" class="csl-entry"&gt;
&lt;p&gt;MacKay, D. J. (2003). &lt;em&gt;Information theory, inference and learning algorithms&lt;/em&gt;. Cambridge university press.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-marchant2012bayesian" class="csl-entry"&gt;
&lt;p&gt;Marchant, R., &amp;amp; Ramos, F. (2012). Bayesian optimisation for intelligent environmental monitoring. &lt;em&gt;2012 IEEE/RSJ International Conference on Intelligent Robots and Systems&lt;/em&gt;, 2242–2249.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-matthews2018gaussian" class="csl-entry"&gt;
&lt;p&gt;Matthews, A. G. de G., Rowland, M., Hron, J., Turner, R. E., &amp;amp; Ghahramani, Z. (2018). Gaussian process behaviour in wide deep neural networks. &lt;em&gt;arXiv Preprint arXiv:1804.11271&lt;/em&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-mcculloch1943logical" class="csl-entry"&gt;
&lt;p&gt;McCulloch, W. S., &amp;amp; Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. &lt;em&gt;The Bulletin of Mathematical Biophysics&lt;/em&gt;, &lt;em&gt;5&lt;/em&gt;, 115–133.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-mnih2013playing" class="csl-entry"&gt;
&lt;p&gt;Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., &amp;amp; Riedmiller, M. (2013). Playing atari with deep reinforcement learning. &lt;em&gt;arXiv Preprint arXiv:1312.5602&lt;/em&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-mnih2015human" class="csl-entry"&gt;
&lt;p&gt;Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. (2015). Human-level control through deep reinforcement learning. &lt;em&gt;Nature&lt;/em&gt;, &lt;em&gt;518&lt;/em&gt;(7540), 529–533.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-moss2020boss" class="csl-entry"&gt;
&lt;p&gt;Moss, H. B., Beck, D., González, J., Leslie, D. S., &amp;amp; Rayson, P. (2020). BOSS: Bayesian optimization over string spaces. &lt;em&gt;arXiv Preprint arXiv:2010.00979&lt;/em&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-neal1995bayesian" class="csl-entry"&gt;
&lt;p&gt;Neal, R. M. (1995). &lt;em&gt;BAYESIAN LEARNING FOR NEURAL NETWORKS&lt;/em&gt; &lt;/p&gt;
\[PhD thesis\]&lt;p&gt;. University of Toronto.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-openai2023gpt" class="csl-entry"&gt;
&lt;p&gt;OpenAI, R. (2023). GPT-4 technical report. &lt;em&gt;arXiv&lt;/em&gt;, 2303–08774.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-opper2000gaussian" class="csl-entry"&gt;
&lt;p&gt;Opper, M., &amp;amp; Winther, O. (2000). &lt;em&gt;Gaussian processes and SVM: Mean field results and leave-one-out&lt;/em&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-pearson1901liii" class="csl-entry"&gt;
&lt;p&gt;Pearson, K. (1901). LIII. On lines and planes of closest fit to systems of points in space. &lt;em&gt;The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science&lt;/em&gt;, &lt;em&gt;2&lt;/em&gt;(11), 559–572.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-rae2021scaling" class="csl-entry"&gt;
&lt;p&gt;Rae, J. W., Borgeaud, S., Cai, T., Millican, K., Hoffmann, J., Song, F., Aslanides, J., Henderson, S., Ring, R., Young, S., et al. (2021). Scaling language models: Methods, analysis &amp;amp; insights from training gopher. &lt;em&gt;arXiv Preprint arXiv:2112.11446&lt;/em&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-ramesh2022hierarchical" class="csl-entry"&gt;
&lt;p&gt;Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., &amp;amp; Chen, M. (2022). Hierarchical text-conditional image generation with clip latents. &lt;em&gt;arXiv Preprint arXiv:2204.06125&lt;/em&gt;, &lt;em&gt;1&lt;/em&gt;(2), 3.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-10.7551/mitpress/3206.001.0001" class="csl-entry"&gt;
&lt;p&gt;Rasmussen, C. E., &amp;amp; Williams, C. K. I. (2005). &lt;em&gt;&lt;span class="nocase"&gt;Gaussian Processes for Machine Learning&lt;/span&gt;&lt;/em&gt;. The MIT Press.
&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-redmon2016you" class="csl-entry"&gt;
&lt;p&gt;Redmon, J., Divvala, S., Girshick, R., &amp;amp; Farhadi, A. (2016). You only look once: Unified, real-time object detection. &lt;em&gt;Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition&lt;/em&gt;, 779–788.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-rombach2022high" class="csl-entry"&gt;
&lt;p&gt;Rombach, R., Blattmann, A., Lorenz, D., Esser, P., &amp;amp; Ommer, B. (2022). High-resolution image synthesis with latent diffusion models. &lt;em&gt;Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition&lt;/em&gt;, 10684–10695.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-romero2013navigating" class="csl-entry"&gt;
&lt;p&gt;Romero, P. A., Krause, A., &amp;amp; Arnold, F. H. (2013). Navigating the protein fitness landscape with gaussian processes. &lt;em&gt;Proceedings of the National Academy of Sciences&lt;/em&gt;, &lt;em&gt;110&lt;/em&gt;(3), E193–E201.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-ronneberger2015u" class="csl-entry"&gt;
&lt;p&gt;Ronneberger, O., Fischer, P., &amp;amp; Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. &lt;em&gt;Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18&lt;/em&gt;, 234–241.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-rosenblatt1958perceptron" class="csl-entry"&gt;
&lt;p&gt;Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. &lt;em&gt;Psychological Review&lt;/em&gt;, &lt;em&gt;65&lt;/em&gt;(6), 386.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-roweis1999unifying" class="csl-entry"&gt;
&lt;p&gt;Roweis, S., &amp;amp; Ghahramani, Z. (1999). A unifying review of linear gaussian models. &lt;em&gt;Neural Computation&lt;/em&gt;, &lt;em&gt;11&lt;/em&gt;(2), 305–345.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-salimbeni2018orthogonally" class="csl-entry"&gt;
&lt;p&gt;Salimbeni, H., Cheng, C.-A., Boots, B., &amp;amp; Deisenroth, M. (2018). Orthogonally decoupled variational Gaussian processes. &lt;em&gt;Advances in Neural Information Processing Systems&lt;/em&gt;, &lt;em&gt;31&lt;/em&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-seko2015prediction" class="csl-entry"&gt;
&lt;p&gt;Seko, A., Togo, A., Hayashi, H., Tsuda, K., Chaput, L., &amp;amp; Tanaka, I. (2015). Prediction of low-thermal-conductivity compounds with first-principles anharmonic lattice-dynamics calculations and bayesian optimization. &lt;em&gt;Physical Review Letters&lt;/em&gt;, &lt;em&gt;115&lt;/em&gt;(20), 205901.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-shahriari2015taking" class="csl-entry"&gt;
&lt;p&gt;Shahriari, B., Swersky, K., Wang, Z., Adams, R. P., &amp;amp; De Freitas, N. (2015). Taking the human out of the loop: A review of bayesian optimization. &lt;em&gt;Proceedings of the IEEE&lt;/em&gt;, &lt;em&gt;104&lt;/em&gt;(1), 148–175.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-shi2020sparse" class="csl-entry"&gt;
&lt;p&gt;Shi, J., Titsias, M., &amp;amp; Mnih, A. (2020). Sparse orthogonal variational inference for Gaussian processes. &lt;em&gt;International Conference on Artificial Intelligence and Statistics&lt;/em&gt;, 1932–1942.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-shoeybi2019megatron" class="csl-entry"&gt;
&lt;p&gt;Shoeybi, M., Patwary, M., Puri, R., LeGresley, P., Casper, J., &amp;amp; Catanzaro, B. (2019). Megatron-lm: Training multi-billion parameter language models using model parallelism. &lt;em&gt;arXiv Preprint arXiv:1909.08053&lt;/em&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-silver2016mastering" class="csl-entry"&gt;
&lt;p&gt;Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al. (2016). Mastering the game of go with deep neural networks and tree search. &lt;em&gt;Nature&lt;/em&gt;, &lt;em&gt;529&lt;/em&gt;(7587), 484–489.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-snoek2012practical" class="csl-entry"&gt;
&lt;p&gt;Snoek, J., Larochelle, H., &amp;amp; Adams, R. P. (2012). Practical Bayesian optimization of machine learning algorithms. &lt;em&gt;Advances in Neural Information Processing Systems&lt;/em&gt;, &lt;em&gt;25&lt;/em&gt;, 2951–2959.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-spearman1904general" class="csl-entry"&gt;
&lt;p&gt;Spearman, C. (1904). &amp;quot; general intelligence,&amp;quot; objectively determined and measured. &lt;em&gt;The American Journal of Psychology&lt;/em&gt;, &lt;em&gt;15&lt;/em&gt;(2), 201–292.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-srivastava2014dropout" class="csl-entry"&gt;
&lt;p&gt;Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., &amp;amp; Salakhutdinov, R. (2014). Dropout: A simple way to prevent neural networks from overfitting. &lt;em&gt;The Journal of Machine Learning Research&lt;/em&gt;, &lt;em&gt;15&lt;/em&gt;(1), 1929–1958.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-sun2020neural" class="csl-entry"&gt;
&lt;p&gt;Sun, S., Shi, J., &amp;amp; Grosse, R. B. (2020). Neural networks as inter-domain inducing points. &lt;em&gt;Third Symposium on Advances in Approximate Bayesian Inference&lt;/em&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-tibshirani1996regression" class="csl-entry"&gt;
&lt;p&gt;Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. &lt;em&gt;Journal of the Royal Statistical Society Series B: Statistical Methodology&lt;/em&gt;, &lt;em&gt;58&lt;/em&gt;(1), 267–288.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-tipping1999probabilistic" class="csl-entry"&gt;
&lt;p&gt;Tipping, M. E., &amp;amp; Bishop, C. M. (1999). Probabilistic principal component analysis. &lt;em&gt;Journal of the Royal Statistical Society: Series B (Statistical Methodology)&lt;/em&gt;, &lt;em&gt;61&lt;/em&gt;(3), 611–622.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-torun2018global" class="csl-entry"&gt;
&lt;p&gt;Torun, H. M., Swaminathan, M., Davis, A. K., &amp;amp; Bellaredj, M. L. F. (2018). A global bayesian optimization algorithm and its application to integrated system design. &lt;em&gt;IEEE Transactions on Very Large Scale Integration (VLSI) Systems&lt;/em&gt;, &lt;em&gt;26&lt;/em&gt;(4), 792–802.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-touvron2023llama" class="csl-entry"&gt;
&lt;p&gt;Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., et al. (2023). Llama 2: Open foundation and fine-tuned chat models. &lt;em&gt;arXiv Preprint arXiv:2307.09288&lt;/em&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-turner2021bayesian" class="csl-entry"&gt;
&lt;p&gt;Turner, R., Eriksson, D., McCourt, M., Kiili, J., Laaksonen, E., Xu, Z., &amp;amp; Guyon, I. (2021). Bayesian optimization is superior to random search for machine learning hyperparameter tuning: Analysis of the black-box optimization challenge 2020. &lt;em&gt;NeurIPS 2020 Competition and Demonstration Track&lt;/em&gt;, 3–26.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-vaswani2017attention" class="csl-entry"&gt;
&lt;p&gt;Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., &amp;amp; Polosukhin, I. (2017). Attention is all you need. &lt;em&gt;Advances in Neural Information Processing Systems&lt;/em&gt;, &lt;em&gt;30&lt;/em&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-wigley2016fast" class="csl-entry"&gt;
&lt;p&gt;Wigley, P. B., Everitt, P. J., Hengel, A. van den, Bastian, J. W., Sooriyabandara, M. A., McDonald, G. D., Hardman, K. S., Quinlivan, C. D., Manju, P., Kuhn, C. C., et al. (2016). Fast machine-learning online optimization of ultra-cold-atom experiments. &lt;em&gt;Scientific Reports&lt;/em&gt;, &lt;em&gt;6&lt;/em&gt;(1), 25890.&lt;/p&gt;
&lt;/div&gt;
&lt;div id="ref-yang2019machine" class="csl-entry"&gt;
&lt;p&gt;Yang, K. K., Wu, Z., &amp;amp; Arnold, F. H. (2019). Machine-learning-guided directed evolution for protein engineering. &lt;em&gt;Nature Methods&lt;/em&gt;, &lt;em&gt;16&lt;/em&gt;(8), 687–694.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;</description></item><item><title>Batch Bayesian Optimisation via Density-ratio Estimation with Guarantees</title><link>https://tiao.io/publications/batch-bore-guarantees/</link><pubDate>Thu, 01 Dec 2022 00:00:00 +0000</pubDate><guid>https://tiao.io/publications/batch-bore-guarantees/</guid><description/></item><item><title>📄 One paper accepted to NeurIPS 2022</title><link>https://tiao.io/posts/one-paper-accepted-to-neurips2022/</link><pubDate>Sat, 15 Oct 2022 18:36:59 +0000</pubDate><guid>https://tiao.io/posts/one-paper-accepted-to-neurips2022/</guid><description>&lt;p&gt;Our paper
was accepted to NeurIPS 2022. Led by Rafael Oliveira, this is the batch
extension of BORE, with theoretical convergence
guarantees for parallel Bayesian optimization. Joint work with Rafael
Oliveira, Edwin Bonilla, and Fabio Ramos.&lt;/p&gt;</description></item><item><title>Long Talk: BORE — Bayesian Optimization by Density-Ratio Estimation</title><link>https://tiao.io/events/icml2021-bore/</link><pubDate>Wed, 21 Jul 2021 14:00:00 +0000</pubDate><guid>https://tiao.io/events/icml2021-bore/</guid><description/></item><item><title>BORE</title><link>https://tiao.io/projects/bore/</link><pubDate>Thu, 01 Jul 2021 00:00:00 +0000</pubDate><guid>https://tiao.io/projects/bore/</guid><description>&lt;p&gt;
BORE is the reference implementation of
&lt;em&gt;BORE: Bayesian Optimization by Density-Ratio Estimation&lt;/em&gt;
(Tiao et al., ICML 2021). It recasts the acquisition function in Bayesian
optimization as a probabilistic classification problem via density-ratio
estimation, sidestepping the analytical-tractability constraints of
conventional surrogate-based methods.&lt;/p&gt;
&lt;p&gt;Developed with Aaron Klein.&lt;/p&gt;</description></item><item><title>Invited Talk: BORE — Bayesian Optimization by Density-Ratio Estimation</title><link>https://tiao.io/events/ellis-automl-seminars-2021/</link><pubDate>Wed, 12 May 2021 16:00:00 +0000</pubDate><guid>https://tiao.io/events/ellis-automl-seminars-2021/</guid><description/></item><item><title>📄 One paper accepted to ICML 2021</title><link>https://tiao.io/posts/one-paper-accepted-to-icml2021/</link><pubDate>Sat, 08 May 2021 00:00:00 +0000</pubDate><guid>https://tiao.io/posts/one-paper-accepted-to-icml2021/</guid><description>&lt;p&gt;Our paper
was
accepted to ICML 2021 as a &lt;strong&gt;Long Talk&lt;/strong&gt; (awarded to the top 3% of submissions).
This is joint work with Aaron Klein, Cédric Archambeau, Edwin Bonilla, Matthias
Seeger, and Fabio Ramos — much of it carried out during my AWS Berlin
internship.&lt;/p&gt;</description></item><item><title>BORE: Bayesian Optimization by Density-Ratio Estimation</title><link>https://tiao.io/publications/bore-2/</link><pubDate>Sat, 08 May 2021 00:00:00 +0000</pubDate><guid>https://tiao.io/publications/bore-2/</guid><description>&lt;p&gt;&lt;strong&gt;B&lt;/strong&gt;ayesian &lt;strong&gt;O&lt;/strong&gt;ptimization (BO) by Density-&lt;strong&gt;R&lt;/strong&gt;atio &lt;strong&gt;E&lt;/strong&gt;stimation (DRE),
or &lt;strong&gt;BORE&lt;/strong&gt;, is a simple, yet effective framework for the optimization of
blackbox functions.
BORE is built upon the correspondence between &lt;em&gt;expected improvement (EI)&lt;/em&gt;&amp;mdash;arguably
the predominant &lt;em&gt;acquisition function&lt;/em&gt; used in BO&amp;mdash;and the &lt;em&gt;density-ratio&lt;/em&gt;
between two unknown distributions.&lt;/p&gt;
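&lt;p&gt;To make this correspondence concrete, here is a minimal, self-contained sketch (illustrative only&amp;mdash;it is not part of the &lt;code&gt;bore&lt;/code&gt; package, and all names are hypothetical): labelling each observation by whether it falls below a quantile of the observed objective values turns the two unknown densities into the two classes of a binary classification problem, and the predicted class-1 probability then plays the role of the acquisition function.&lt;/p&gt;

```python
# Illustrative sketch (not the `bore` package API): reduce the
# improvement-based acquisition function to probabilistic classification.
import numpy as np
from sklearn.neural_network import MLPClassifier

def make_classification_data(X, y, gamma=0.25):
    """Label observations by whether they beat the gamma-quantile of y."""
    tau = np.quantile(y, gamma)   # threshold separating the two densities
    z = (y <= tau).astype(int)    # z=1: the "good" points (minimization)
    return X, z

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 2))
y = np.sum(X**2, axis=1)          # toy blackbox objective

X_train, z_train = make_classification_data(X, y, gamma=0.25)

# Any probabilistic classifier will do; an MLP is used here for illustration.
clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
clf.fit(X_train, z_train)

# The predicted class-1 probability is a monotone transform of the
# density ratio, so it can serve directly as the acquisition function.
acq = clf.predict_proba(X_train)[:, 1]
```

&lt;p&gt;The scikit-learn classifier above is a stand-in; the point of the reduction is precisely that any well-calibrated probabilistic classifier can take its place.&lt;/p&gt;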
&lt;p&gt;One of the far-reaching consequences of this correspondence is that we can
reduce the computation of EI to a &lt;em&gt;probabilistic classification&lt;/em&gt; problem&amp;mdash;a
problem we are well-equipped to tackle, as evidenced by the broad range of
streamlined, easy-to-use and, perhaps most importantly, battle-tested
tools and frameworks at our disposal.
Notable among these are TensorFlow/Keras and PyTorch for Deep Learning,
XGBoost for Gradient Tree Boosting, not to mention scikit-learn for just about
everything else.
The BORE framework lets us take direct advantage of these tools.&lt;/p&gt;
&lt;h2 id="code-example"&gt;Code Example&lt;/h2&gt;
&lt;p&gt;We provide a simple example with Keras to give you a taste of how BORE can
be implemented using a feed-forward &lt;em&gt;neural network (NN)&lt;/em&gt; classifier.
A useful class that the &lt;code&gt;bore&lt;/code&gt; package provides is
&lt;code&gt;MaximizableSequential&lt;/code&gt;, a subclass of the &lt;code&gt;Sequential&lt;/code&gt; model from
Keras that inherits all of its existing functionality, and provides just
one additional method.
We can build and compile a feed-forward NN classifier as usual:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;bore.models&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MaximizableSequential&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;tensorflow.keras.layers&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Dense&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# build model&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;classifier&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;MaximizableSequential&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;classifier&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;relu&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;classifier&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;relu&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;classifier&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;sigmoid&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# compile model&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;classifier&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;compile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;optimizer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;adam&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;loss&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;binary_crossentropy&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;See the &lt;code&gt;Sequential&lt;/code&gt; model guide
from the Keras documentation if this seems unfamiliar to
you.&lt;/p&gt;
&lt;p&gt;The additional method provided is &lt;code&gt;argmax&lt;/code&gt;, which returns the &lt;em&gt;maximizer&lt;/em&gt; of
the network, i.e. the input $\mathbf{x}$ that maximizes its final
output:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;x_argmax&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;classifier&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;argmax&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bounds&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;bounds&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;method&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;L-BFGS-B&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;num_start_points&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Since the network is differentiable end-to-end with respect to the input $\mathbf{x}$, this
method can be implemented efficiently using a &lt;em&gt;multi-started quasi-Newton
hill-climber&lt;/em&gt; such as L-BFGS.
We will see the pivotal role this method plays in the next section.&lt;/p&gt;
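As a rough sketch of what such a method might do internally, one can restart L-BFGS-B from a few random points inside the bounds and keep the best result. The standalone `argmax` helper below is purely illustrative (it is not the actual class method shown above, whose signature it merely mimics), and it assumes SciPy is available:

```python
import numpy as np
from scipy.optimize import minimize


def argmax(f, bounds, num_start_points=3, seed=0):
    """Maximize f over a box via multi-started L-BFGS-B (illustrative only).

    f maps an array of shape (dim,) to a scalar; bounds is a list of
    (low, high) pairs, one per dimension.
    """
    rng = np.random.default_rng(seed)
    low, high = np.asarray(bounds, dtype=float).T
    best_x, best_val = None, -np.inf
    for _ in range(num_start_points):
        x0 = rng.uniform(low, high)  # random restart inside the box
        # minimize the negated objective, i.e. maximize f
        res = minimize(lambda x: -f(x), x0, method="L-BFGS-B", bounds=bounds)
        if -res.fun > best_val:
            best_x, best_val = res.x, -res.fun
    return best_x


# sanity check on a concave toy function with maximum at x = 0.3
x_opt = argmax(lambda x: -np.sum((x - 0.3) ** 2), bounds=[(0.0, 1.0)])
```

Here SciPy approximates the gradient by finite differences; with a Keras-style network one would instead supply the exact input gradient.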
&lt;hr&gt;
&lt;p&gt;Using this classifier, the BO loop in BORE looks as follows:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nn"&gt;np&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;features&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;targets&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# initialize design&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;features&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;extend&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;features_initial_design&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;targets&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;extend&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;targets_initial_design&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nb"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;num_iterations&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;# construct classification problem&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;X&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;vstack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;features&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;hstack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;targets&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;tau&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;quantile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.25&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;z&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;less&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tau&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;# update classifier&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;classifier&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;z&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;epochs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;batch_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;# suggest new candidate&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;x_next&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;classifier&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;argmax&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bounds&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;bounds&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;method&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;L-BFGS-B&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;num_start_points&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;# evaluate blackbox&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;y_next&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;blackbox&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;evaluate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x_next&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="c1"&gt;# update dataset&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;features&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x_next&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="n"&gt;targets&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y_next&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;hr&gt;
&lt;p&gt;Let&amp;rsquo;s break this down a bit:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;At the start of the loop, we construct the classification problem&amp;mdash;by labeling
instances $\mathbf{x}$ whose corresponding target value $y$ falls below the
&lt;code&gt;q=0.25&lt;/code&gt; quantile of all target values as &lt;em&gt;positive&lt;/em&gt;, and the rest as &lt;em&gt;negative&lt;/em&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Next, we train the classifier to discriminate between these instances. This
classifier should converge towards
&lt;/p&gt;
$$
\pi^{*}(\mathbf{x}) = \frac{\gamma \ell(\mathbf{x})}{\gamma \ell(\mathbf{x}) + (1-\gamma) g(\mathbf{x})},
$$&lt;p&gt;
where $\ell(\mathbf{x})$ and $g(\mathbf{x})$ are the unknown distributions of
instances belonging to the positive and negative classes, respectively, and
$\gamma$ is the class-balance rate and, by construction, simply the quantile
we specified (i.e. $\gamma=0.25$).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Once the classifier is a decent approximation to $\pi^{*}(\mathbf{x})$, we
propose the maximizer of this classifier as the next input to evaluate.
In other words, we are now using the classifier &lt;em&gt;itself&lt;/em&gt; as the acquisition
function.&lt;/p&gt;
&lt;p&gt;How is it justifiable to use this in lieu of EI, or some other acquisition
function we&amp;rsquo;re used to?
And what is so special about $\pi^{*}(\mathbf{x})$?&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Well, as it turns out, $\pi^{*}(\mathbf{x})$ is equivalent to EI, up to some
constant factors.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The remainder of the loop should now be self-explanatory. Namely, we&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;evaluate the blackbox function at the suggested point, and&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;update the dataset.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
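To make the target $\pi^{*}(\mathbf{x})$ concrete, here is a small numerical sketch. The Gaussian choices for $\ell$ and $g$ are purely illustrative (nothing here is estimated by BORE); the point is that $\pi^{*}$ peaks exactly where the density ratio $\ell(\mathbf{x})/g(\mathbf{x})$ does:

```python
import numpy as np


def normal_pdf(x, loc, scale):
    # Gaussian density, written out to keep the sketch dependency-free
    return np.exp(-0.5 * ((x - loc) / scale) ** 2) / (scale * np.sqrt(2 * np.pi))


gamma = 0.25  # the class-balance rate, i.e. the quantile we chose

x = np.linspace(-3.0, 3.0, 601)
ell = normal_pdf(x, loc=-1.0, scale=0.5)  # density of positive ("good") inputs
g = normal_pdf(x, loc=1.0, scale=1.0)     # density of negative ("bad") inputs

# the optimal classifier output, as in the equation above
pi_star = gamma * ell / (gamma * ell + (1 - gamma) * g)
```

Since $\pi^{*}$ is a monotone increasing function of the ratio $\ell/g$, maximizing the classifier output is the same as maximizing the density ratio.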
&lt;h3 id="step-by-step-illustration"&gt;Step-by-step Illustration&lt;/h3&gt;
&lt;p&gt;Here is a step-by-step animation of six iterations of this loop in action,
using the &lt;em&gt;Forrester&lt;/em&gt; synthetic function as an example.
The noise-free function is shown as the solid gray curve in the main pane.
This procedure is warm-started with four random initial designs.&lt;/p&gt;
&lt;p&gt;The right pane shows the empirical CDF (ECDF) of the observed $y$ values.
The vertical dashed black line in this pane is located at $\Phi(y) = \gamma$,
where $\gamma = 0.25$.
The horizontal dashed black line is located at $\tau$, the value of $y$ such
that $\Phi(y) = 0.25$, i.e. $\tau = \Phi^{-1}(0.25)$.&lt;/p&gt;
&lt;p&gt;The instances below this horizontal line are assigned binary label $z=1$, while
those above are assigned $z=0$. This is visualized in the bottom pane,
alongside the probabilistic classifier $\pi_{\boldsymbol{\theta}}(\mathbf{x})$
represented by the solid gray curve, which is trained to discriminate between
these instances.&lt;/p&gt;
&lt;p&gt;Finally, the maximizer of the classifier is represented by the vertical solid
green line.
This is the location that the BO procedure suggests evaluating next.&lt;/p&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="flex justify-center "&gt;
&lt;div class="w-full" &gt;
&lt;img alt="Animation"
srcset="https://tiao.io/publications/bore-2/paper_1500x5562_hu_bf54a19b8bc6fbf5.webp 205w"
sizes="(max-width: 480px) 100vw, (max-width: 768px) 90vw, (max-width: 1024px) 80vw, 760px"
src="https://tiao.io/publications/bore-2/paper_1500x5562_hu_bf54a19b8bc6fbf5.webp"
width="205"
height="760"
loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p&gt;We see that the procedure converges toward the global minimum of the blackbox
function after half a dozen iterations.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;To understand how and why this works in more detail, please read our paper!
If you only have 15 minutes to spare, please watch the video recording of our
talk!&lt;/p&gt;
&lt;h2 id="video"&gt;Video&lt;/h2&gt;
&lt;div id="presentation-embed-38942425"&gt;&lt;/div&gt;
&lt;script src='https://slideslive.com/embed_presentation.js'&gt;&lt;/script&gt;
&lt;script&gt;
embed = new SlidesLiveEmbed('presentation-embed-38942425', {
presentationId: '38942425',
autoPlay: false, // change to true to autoplay the embedded presentation
verticalEnabled: true
});
&lt;/script&gt;</description></item><item><title>Simulation-based Scoring for Model-based Asynchronous Hyperparameter and Neural Architecture Search</title><link>https://tiao.io/publications/simulation-based-scoring/</link><pubDate>Sat, 01 May 2021 00:00:00 +0000</pubDate><guid>https://tiao.io/publications/simulation-based-scoring/</guid><description/></item><item><title>An Illustrated Guide to the Knowledge Gradient Acquisition Function</title><link>https://tiao.io/posts/an-illustrated-guide-to-the-knowledge-gradient-acquisition-function/</link><pubDate>Thu, 18 Feb 2021 19:13:23 +0100</pubDate><guid>https://tiao.io/posts/an-illustrated-guide-to-the-knowledge-gradient-acquisition-function/</guid><description>
&lt;div class="callout flex px-4 py-3 mb-6 rounded-md border-l-4 bg-blue-100 dark:bg-blue-900 border-blue-500"
data-callout="note"
data-callout-metadata=""&gt;
&lt;span class="callout-icon pr-3 pt-1 text-blue-600 dark:text-blue-300"&gt;
&lt;svg height="24" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"&gt;&lt;path fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.5" d="m16.862 4.487l1.687-1.688a1.875 1.875 0 1 1 2.652 2.652L6.832 19.82a4.5 4.5 0 0 1-1.897 1.13l-2.685.8l.8-2.685a4.5 4.5 0 0 1 1.13-1.897zm0 0L19.5 7.125"/&gt;&lt;/svg&gt;
&lt;/span&gt;
&lt;div class="callout-content dark:text-neutral-300"&gt;
&lt;div class="callout-title font-semibold mb-1"&gt;Note&lt;/div&gt;
&lt;div class="callout-body"&gt;&lt;p&gt;Draft &amp;ndash; work in progress.&lt;/p&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;We provide a short guide to the knowledge-gradient (KG) acquisition
function (Frazier et al., 2009)&lt;sup id="fnref:1"&gt;&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref"&gt;1&lt;/a&gt;&lt;/sup&gt; for Bayesian
optimization (BO).
Rather than being a self-contained tutorial, this post is intended to serve as
an illustrated compendium to the paper of Frazier et al., 2009&lt;sup id="fnref1:1"&gt;&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref"&gt;1&lt;/a&gt;&lt;/sup&gt;
and the subsequent tutorial by Frazier, 2018&lt;sup id="fnref:2"&gt;&lt;a href="#fn:2" class="footnote-ref" role="doc-noteref"&gt;2&lt;/a&gt;&lt;/sup&gt;, authored
nearly a decade later.&lt;/p&gt;
&lt;p&gt;This post assumes a basic level of familiarity with BO and Gaussian processes (GPs),
to the extent provided by the literature survey of Shahriari et al.,
2015&lt;sup id="fnref:3"&gt;&lt;a href="#fn:3" class="footnote-ref" role="doc-noteref"&gt;3&lt;/a&gt;&lt;/sup&gt;, and the acclaimed textbook of Rasmussen and Williams, 2006,
respectively.&lt;/p&gt;
&lt;h2 id="knowledge-gradient"&gt;Knowledge-gradient&lt;/h2&gt;
&lt;p&gt;First, we set up the notation and terminology.
Let $f: \mathcal{X} \to \mathbb{R}$ be the blackbox function we wish to
minimize.
We denote the GP posterior predictive distribution, or &lt;em&gt;predictive&lt;/em&gt; for short,
by $p(y | \mathbf{x}, \mathcal{D})$.
The mean of the predictive, or the &lt;em&gt;predictive mean&lt;/em&gt; for short, is denoted by
&lt;/p&gt;
$$
\mu(\mathbf{x}; \mathcal{D}) = \mathbb{E}[y | \mathbf{x}, \mathcal{D}]
$$&lt;p&gt;
Let $\mathcal{D}_n$ be the set of $n$ input-output
observations $\mathcal{D}_n = \{ (\mathbf{x}_i, y_i) \}_{i=1}^n$, where
output $y_i = f(\mathbf{x}_i) + \epsilon$ is assumed to be observed with noise
$\epsilon \sim \mathcal{N}(0, \sigma^2)$.
We make the following abbreviation
&lt;/p&gt;
$$
\mu_n(\mathbf{x}) = \mu(\mathbf{x}; \mathcal{D}_n)
$$&lt;p&gt;
Next, we define the minimum of the predictive mean, or &lt;em&gt;predictive minimum&lt;/em&gt; for short,
as
&lt;/p&gt;
$$
\tau(\mathcal{D}) = \min_{\mathbf{x}' \in \mathcal{X}} \mu(\mathbf{x}'; \mathcal{D})
$$&lt;p&gt;
If we view $\mu(\mathbf{x}; \mathcal{D})$ as our fit to the underlying
function $f(\mathbf{x})$ from which the observations $\mathcal{D}$ were
generated, then $\tau(\mathcal{D})$ is our estimate of the minimum of $f(\mathbf{x})$,
given observations $\mathcal{D}$.&lt;/p&gt;
&lt;p&gt;Further, we make the following abbreviations
&lt;/p&gt;
$$
\tau_n = \tau(\mathcal{D}_n),
\qquad
\text{and}
\qquad
\tau_{n+1} = \tau(\mathcal{D}_{n+1}),
$$&lt;p&gt;
where $\mathcal{D}_{n+1} = \mathcal{D}_n \cup \{ (\mathbf{x}, y) \}$ is the
set of existing observations, augmented by some input-output pair $(\mathbf{x}, y)$.
Then, the knowledge-gradient is defined as
&lt;/p&gt;
$$
\alpha(\mathbf{x}; \mathcal{D}_n) =
\mathbb{E}_{p(y | \mathbf{x}, \mathcal{D}_n)} [ \tau_n - \tau_{n+1} ]
$$&lt;p&gt;
Crucially, note that $\tau_{n+1}$ is implicitly a function of $(\mathbf{x}, y)$,
and that this expression integrates over all possible input-output observation
pairs $(\mathbf{x}, y)$ for the given $\mathbf{x}$ under the
predictive $p(y | \mathbf{x}, \mathcal{D}_n)$.&lt;/p&gt;
&lt;h3 id="monte-carlo-estimation"&gt;Monte Carlo estimation&lt;/h3&gt;
&lt;p&gt;Not surprisingly, the knowledge-gradient function is analytically intractable.
Therefore, in practice, we compute it using Monte Carlo estimation,
&lt;/p&gt;
$$
\alpha(\mathbf{x}; \mathcal{D}_n) \approx
\frac{1}{M} \sum_{m=1}^M \left( \tau_n - \tau_{n+1}^{(m)} \right),
\qquad
y^{(m)} \sim p(y | \mathbf{x}, \mathcal{D}_n),
$$&lt;p&gt;
where $\tau_{n+1}^{(m)} = \tau(\mathcal{D}_{n+1}^{(m)})$
and $\mathcal{D}_{n+1}^{(m)} = \mathcal{D}_n \cup \{ (\mathbf{x}, y^{(m)}) \}$.&lt;/p&gt;
&lt;p&gt;We refer to $y^{(m)}$ as the $m$th simulated outcome, or the $m$th &lt;em&gt;simulation&lt;/em&gt;
for short.
Then, $\mathcal{D}_{n+1}^{(m)}$ is the $m$th simulation-augmented dataset and,
accordingly, $\tau_{n+1}^{(m)}$ is the $m$th simulation-augmented predictive minimum.&lt;/p&gt;
&lt;p&gt;We see that this approximation to the knowledge-gradient is simply the average
difference between the predictive minimum values &lt;em&gt;based on simulation-augmented
data&lt;/em&gt; $\tau_{n+1}^{(m)}$, and that &lt;em&gt;based on observed data&lt;/em&gt; $\tau_n$,
across $M$ simulations.&lt;/p&gt;
&lt;p&gt;This might take a moment to digest, as there are quite a number of moving parts
to keep track of. To help visualize these parts, we provide an illustration of
each of the steps required to compute KG on a simple one-dimensional synthetic
problem.&lt;/p&gt;
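The recipe above can also be sketched end-to-end in code. The following is a toy, numpy-only illustration under heavy simplifying assumptions: a hard-coded RBF kernel with fixed hyperparameters, and a dense grid search standing in for the L-BFGS inner minimization. None of the names come from any particular library:

```python
import numpy as np


def rbf(a, b, lengthscale=0.5, amplitude=1.0):
    # squared-exponential kernel matrix between 1-d input vectors a and b
    d = a[:, None] - b[None, :]
    return amplitude * np.exp(-0.5 * (d / lengthscale) ** 2)


def predictive(x_query, X, y, noise=0.1):
    # GP posterior mean and variance at x_query, given observations (X, y)
    K_inv = np.linalg.inv(rbf(X, X) + noise ** 2 * np.eye(len(X)))
    k = rbf(x_query, X)
    mean = k @ K_inv @ y
    var = rbf(x_query, x_query).diagonal() - np.einsum("ij,jk,ik->i", k, K_inv, k)
    return mean, np.maximum(var, 1e-12)


def knowledge_gradient(x_cand, X, y, grid, num_samples=64, noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # tau_n: minimum of the current predictive mean (grid search in lieu of L-BFGS)
    tau_n = predictive(grid, X, y, noise)[0].min()
    mean_c, var_c = predictive(np.array([x_cand]), X, y, noise)
    total = 0.0
    for _ in range(num_samples):
        # simulate an outcome y^(m) from the predictive at the candidate
        y_sim = rng.normal(mean_c[0], np.sqrt(var_c[0] + noise ** 2))
        # tau_{n+1}^(m): minimum of the simulation-augmented predictive mean
        tau_next = predictive(grid, np.append(X, x_cand),
                              np.append(y, y_sim), noise)[0].min()
        total += tau_n - tau_next
    return total / num_samples


# toy data from the running example f(x) = sin(3x) + x^2 - 0.7x
rng = np.random.default_rng(42)
X = rng.uniform(-1.0, 2.0, size=10)
y = np.sin(3 * X) + X ** 2 - 0.7 * X + 0.1 * rng.normal(size=10)
grid = np.linspace(-1.0, 2.0, 201)
kg = knowledge_gradient(0.1, X, y, grid)
```

Each MC sample refits nothing; it merely recomputes the posterior mean with one extra (simulated) observation, exactly as in the estimator above.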
&lt;h2 id="one-dimensional-example"&gt;One-dimensional example&lt;/h2&gt;
&lt;p&gt;As the running example throughout this post, we use a synthetic function
defined as
&lt;/p&gt;
$$
f(x) = \sin(3x) + x^2 - 0.7 x.
$$&lt;p&gt;
We generate $n=10$ observations at locations sampled uniformly at random.
The true function, and the set of noisy observations $\mathcal{D}_n$ are
visualized in the figure below:&lt;/p&gt;
&lt;figure&gt;&lt;img src="https://tiao.io/posts/an-illustrated-guide-to-the-knowledge-gradient-acquisition-function/figures/observations_paper_1800x1112.png"&gt;&lt;figcaption&gt;
&lt;h4&gt;Latent blackbox function and $n=10$ observations.&lt;/h4&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Using the observations $\mathcal{D}_n$ we have collected so far, we wish to
use KG to score a candidate location $x_c$ at which to evaluate next.&lt;/p&gt;
&lt;h2 id="posterior-predictive-distribution"&gt;Posterior predictive distribution&lt;/h2&gt;
&lt;p&gt;The posterior predictive $p(y | \mathbf{x}, \mathcal{D}_n)$ is visualized in
the figure below. In particular, the predictive mean $\mu_n(\mathbf{x})$ is
represented by the solid orange curve.&lt;/p&gt;
&lt;figure&gt;&lt;img src="https://tiao.io/posts/an-illustrated-guide-to-the-knowledge-gradient-acquisition-function/figures/predictive_mean_before_paper_1800x1112.png"&gt;&lt;figcaption&gt;
&lt;h4&gt;Posterior predictive distribution (*before* hyperparameter estimation).&lt;/h4&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Clearly, this is a poor fit to the data and an uncalibrated estimate of the
predictive uncertainty.&lt;/p&gt;
&lt;h3 id="step-1-hyperparameter-estimation"&gt;Step 1: Hyperparameter estimation&lt;/h3&gt;
&lt;p&gt;Therefore, the first step is to optimize the hyperparameters of the GP regression
model, i.e. the kernel lengthscale, amplitude, and the observation noise variance.
We do this using type-II maximum likelihood estimation (MLE), or &lt;em&gt;empirical Bayes&lt;/em&gt;.&lt;/p&gt;
&lt;figure&gt;&lt;img src="https://tiao.io/posts/an-illustrated-guide-to-the-knowledge-gradient-acquisition-function/figures/predictive_mean_after_paper_1800x1112.png"&gt;&lt;figcaption&gt;
&lt;h4&gt;Posterior predictive distribution (*after* hyperparameter estimation).&lt;/h4&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h3 id="step-2-determine-the-predictive-minimum"&gt;Step 2: Determine the predictive minimum&lt;/h3&gt;
&lt;p&gt;Next, we compute the predictive minimum $\tau_n = \min_{\mathbf{x}' \in \mathcal{X}} \mu_n(\mathbf{x}')$.
Since $\mu_n$ is end-to-end differentiable with respect to the input $\mathbf{x}$, we can
simply use a multi-started quasi-Newton hill-climber such as L-BFGS.
We visualize this in the figure below, where the value of the predictive
minimum is represented by the orange horizontal dashed line, and its location is
denoted by the orange star and triangle.&lt;/p&gt;
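This step can be sketched as follows, using SciPy's L-BFGS-B with a handful of random restarts. The function `mu_n` below is just a smooth stand-in for the actual GP predictive mean (here, the synthetic function itself), so the names and values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize


def mu_n(x):
    # stand-in for the predictive mean; any smooth 1-d function works here
    return np.sin(3 * x[0]) + x[0] ** 2 - 0.7 * x[0]


# multi-started L-BFGS-B: run from several random points, keep the smallest value
rng = np.random.default_rng(0)
results = [
    minimize(mu_n, x0=[x0], method="L-BFGS-B", bounds=[(-1.0, 2.0)])
    for x0 in rng.uniform(-1.0, 2.0, size=5)
]
best = min(results, key=lambda r: r.fun)
tau_n, x_min = best.fun, best.x[0]  # predictive minimum and its location
```

Multiple restarts guard against L-BFGS getting trapped in a local minimum of the predictive mean.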
&lt;figure&gt;&lt;img src="https://tiao.io/posts/an-illustrated-guide-to-the-knowledge-gradient-acquisition-function/figures/predictive_minimum_paper_1800x1112.png"&gt;&lt;figcaption&gt;
&lt;h4&gt;Predictive minimum $\tau_n$.&lt;/h4&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h3 id="step-3-compute-simulation-augmented-predictive-means"&gt;Step 3: Compute simulation-augmented predictive means&lt;/h3&gt;
&lt;p&gt;Suppose we are scoring the candidate location $x_c = 0.1$.
For illustrative purposes, let us draw just $M=1$ sample $y_c^{(1)} \sim p(y | x_c, \mathcal{D}_n)$.
In the figure below, the candidate location $x_c$ is represented by the
vertical solid gray line, and the single simulated outcome $y_c^{(1)}$ is
represented by the filled blue dot.&lt;/p&gt;
&lt;p&gt;In general, we denote the simulation-augmented predictive mean as
&lt;/p&gt;
$$
\mu_{n+1}^{(m)}(\mathbf{x}) = \mu(\mathbf{x}; \mathcal{D}_{n+1}^{(m)}),
$$&lt;p&gt;
where
$\mathcal{D}_{n+1}^{(m)} = \mathcal{D}_n \cup \{ (\mathbf{x}, y^{(m)}) \}$
as defined earlier.&lt;/p&gt;
&lt;p&gt;Here, the simulation-augmented dataset $\mathcal{D}_{n+1}^{(1)}$ is the set
of existing observations $\mathcal{D}_n$, augmented by the simulated
input-output pair $(x_c, y_c^{(1)})$,
&lt;/p&gt;
$$
\mathcal{D}_{n+1}^{(1)} = \mathcal{D}_n \cup \{ (x_c, y_c^{(1)}) \},
$$&lt;p&gt;
and the corresponding simulation-augmented predictive mean $\mu_{n+1}^{(1)}(x)$
is represented in the figure below by the solid blue curve.&lt;/p&gt;
&lt;figure&gt;&lt;img src="https://tiao.io/posts/an-illustrated-guide-to-the-knowledge-gradient-acquisition-function/figures/simulated_predictive_mean_paper_1800x1112.png"&gt;&lt;figcaption&gt;
&lt;h4&gt;Simulation-augmented predictive mean $\mu_{n&amp;#43;1}^{(1)}(x)$ at location $x_c = 0.1$&lt;/h4&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h3 id="step-4-compute-simulation-augmented-predictive-minimums"&gt;Step 4: Compute simulation-augmented predictive minimums&lt;/h3&gt;
&lt;p&gt;Next, we compute the simulation-augmented predictive minimum
&lt;/p&gt;
$$
\tau_{n+1}^{(1)} = \min_{\mathbf{x}' \in \mathcal{X}} \mu_{n+1}^{(1)}(\mathbf{x}')
$$&lt;p&gt;
It may not be immediately obvious, but $\mu_{n+1}^{(1)}$ is in fact also
end-to-end differentiable with respect to the input $\mathbf{x}$. Therefore, we can again
appeal to a method such as L-BFGS.
We visualize this in the figure below, where the value of the simulation-augmented
predictive minimum is represented by the blue horizontal dashed line, and its
location is denoted by the blue star and triangle.&lt;/p&gt;
&lt;figure&gt;&lt;img src="https://tiao.io/posts/an-illustrated-guide-to-the-knowledge-gradient-acquisition-function/figures/simulated_predictive_minimum_paper_1800x1112.png"&gt;&lt;figcaption&gt;
&lt;h4&gt;Simulation-augmented predictive minimum $\tau_{n&amp;#43;1}^{(1)}$ at location $x_c = 0.1$&lt;/h4&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Taking the difference between the orange and blue horizontal dashed lines will
give us an unbiased estimate of the knowledge-gradient.
However, this is likely to be a crude one, since it is based on just a single
MC sample.
To obtain a more accurate estimate, one needs to increase $M$, the number of
MC samples.&lt;/p&gt;
&lt;h4 id="samples"&gt;Samples $M &gt; 1$&lt;/h4&gt;
&lt;p&gt;Let us now consider $M=5$ samples. We draw $y_c^{(m)} \sim p(y | x_c, \mathcal{D}_n)$,
for $m = 1, \dotsc, 5$.
As before, the input location $x_c$ is represented by the vertical solid
gray line, and the corresponding simulated outcomes are represented by the
filled dots below, with varying hues from a perceptually uniform color palette
to distinguish between samples.&lt;/p&gt;
&lt;p&gt;Accordingly, the simulation-augmented predictive means
$\mu_{n+1}^{(m)}(x)$ at location $x_c = 0.1$, for $m = 1, \dotsc, 5$ are
represented by the colored curves, with hues set to that of the simulated
outcome on which the predictive distribution is based.&lt;/p&gt;
&lt;figure&gt;&lt;img src="https://tiao.io/posts/an-illustrated-guide-to-the-knowledge-gradient-acquisition-function/figures/bar_paper_1800x1112.png"&gt;&lt;figcaption&gt;
&lt;h4&gt;Simulation-augmented predictive mean $\mu_{n&amp;#43;1}^{(m)}(x)$ at location $x_c = 0.1$, for $m = 1, \dotsc, 5$&lt;/h4&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Next we compute the simulation-augmented predictive
minimum $\tau_{n+1}^{(m)}$, which requires minimizing
$\mu_{n+1}^{(m)}(x)$ for $m = 1, \dotsc, 5$.
These values are represented below by the horizontal dashed lines, and their
location is denoted by the stars and triangles.&lt;/p&gt;
&lt;figure&gt;&lt;img src="https://tiao.io/posts/an-illustrated-guide-to-the-knowledge-gradient-acquisition-function/figures/baz_paper_1800x1112.png"&gt;&lt;figcaption&gt;
&lt;h4&gt;Simulation-augmented predictive minimum $\tau_{n&amp;#43;1}^{(m)}$ at location $x_c = 0.1$, for $m = 1, \dotsc, 5$&lt;/h4&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Finally, taking the average difference between the orange dashed line and every
other dashed line gives us the estimate of the knowledge gradient at
input $x_c$.&lt;/p&gt;
&lt;h2 id="links-and-further-readings"&gt;Links and Further Readings&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;In this post, we only showed a (naïve) approach to calculating the KG at a
given location.
Suffice it to say, there is still quite a gap between this and being able to
efficiently maximize KG within a sequential decision-making algorithm.
For a guide on incorporating KG into a modular and fully-fledged framework for
BO, see:
&lt;/li&gt;
&lt;li&gt;Another introduction to KG:
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;p&gt;Cite as:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;@article{tiao2021knowledge,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; title = &amp;#34;{A}n {I}llustrated {G}uide to the {K}nowledge {G}radient {A}cquisition {F}unction&amp;#34;,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; author = &amp;#34;Tiao, Louis C&amp;#34;,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; journal = &amp;#34;tiao.io&amp;#34;,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; year = &amp;#34;2021&amp;#34;,
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; url = &amp;#34;https://tiao.io/post/an-illustrated-guide-to-the-knowledge-gradient-acquisition-function/&amp;#34;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;To receive updates on more posts like this, follow me!&lt;/p&gt;
&lt;div class="footnotes" role="doc-endnotes"&gt;
&lt;hr&gt;
&lt;ol&gt;
&lt;li id="fn:1"&gt;
&lt;p&gt;Frazier, P., Powell, W., &amp;amp; Dayanik, S. (2009).
The Knowledge-Gradient Policy for Correlated Normal Beliefs. INFORMS Journal on Computing, 21(4), 599-613.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&amp;#160;&lt;a href="#fnref1:1" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:2"&gt;
&lt;p&gt;Frazier, P. I. (2018).
A Tutorial on Bayesian Optimization. arXiv preprint arXiv:1807.02811.&amp;#160;&lt;a href="#fnref:2" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:3"&gt;
&lt;p&gt;Shahriari, B., Swersky, K., Wang, Z., Adams, R. P., &amp;amp; De Freitas, N. (2015).
Taking the Human Out of the Loop: A Review of Bayesian Optimization. Proceedings of the IEEE, 104(1), 148-175.&amp;#160;&lt;a href="#fnref:3" class="footnote-backref" role="doc-backlink"&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</description></item><item><title>Contributed Talk: BORE — Bayesian Optimization by Density-Ratio Estimation</title><link>https://tiao.io/events/neurips2020-meta-learning/</link><pubDate>Fri, 11 Dec 2020 15:00:00 +0000</pubDate><guid>https://tiao.io/events/neurips2020-meta-learning/</guid><description/></item><item><title>Bayesian Optimization by Density Ratio Estimation</title><link>https://tiao.io/publications/bore-1/</link><pubDate>Tue, 01 Dec 2020 00:00:00 +0000</pubDate><guid>https://tiao.io/publications/bore-1/</guid><description/></item><item><title>Model-based Asynchronous Hyperparameter and Neural Architecture Search</title><link>https://tiao.io/publications/async-multi-fidelity-hpo/</link><pubDate>Sun, 01 Mar 2020 00:00:00 +0000</pubDate><guid>https://tiao.io/publications/async-multi-fidelity-hpo/</guid><description/></item><item><title>AutoGluon</title><link>https://tiao.io/projects/autogluon/</link><pubDate>Sun, 01 Sep 2019 00:00:00 +0000</pubDate><guid>https://tiao.io/projects/autogluon/</guid><description>&lt;p&gt;
AutoGluon is an open-source
toolkit from AWS that automates ML for tabular, image, and text data. During
my AWS Berlin internship I was a core developer of its model-based
searcher module, described in &lt;em&gt;Model-based Asynchronous Hyperparameter and Neural Architecture Search&lt;/em&gt;.&lt;/p&gt;</description></item></channel></rss>