In this talk, we examine the relationship between Bayesian optimization (BO) and density-ratio estimation (DRE). BO is among the most effective and widely used black-box optimization methods; it works by proposing solutions according to an explore-exploit trade-off criterion encoded in an acquisition function, many of which are computed from the posterior predictive of a probabilistic surrogate model. Prevalent among these is the expected improvement (EI) function. However, the need to ensure analytical tractability of the predictive can often pose limitations that hinder the efficiency and applicability of BO. This talk discusses how to cast the computation of EI as a binary classification problem, by building on the link between class-probability estimation and density-ratio estimation, and the lesser-known link between density ratios and EI. We demonstrate that, by circumventing the tractability constraints, this reformulation provides numerous advantages, not least in terms of expressiveness, versatility, and scalability.
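To make the idea concrete, the sketch below illustrates the classification-based reformulation on a toy 1-D problem: observations are split into "good" and "bad" sets at a quantile threshold of the objective values, a probabilistic classifier is fit to distinguish them, and the classifier's class probability (which is a monotone transform of the implied density ratio, and hence of EI at that threshold) is maximized to propose the next query. All names, the toy objective, the simple logistic-regression classifier, and the quantile setting are illustrative assumptions, not the specific method or hyperparameters discussed in the talk.

```python
# Illustrative sketch: BO via class-probability estimation in place of EI.
# A classifier separates points with y <= tau (the gamma-quantile of observed
# values) from the rest; its predicted probability serves as the acquisition.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Toy 1-D black-box function (assumed for illustration only).
    return np.sin(3 * x) + x**2 - 0.7 * x

def features(X):
    # Simple hand-picked basis so plain logistic regression is flexible
    # enough on this toy problem (an assumption, not part of the method).
    return np.stack([np.ones_like(X), X, X**2, np.sin(3 * X)], axis=1)

def fit_classifier(X, z, steps=500, lr=0.5):
    # Logistic regression by gradient descent; returns a class-probability
    # estimator pi(x) ~ P(z = 1 | x).
    Phi = features(X)
    w = np.zeros(Phi.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(Phi @ w, -30, 30)))
        w -= lr * Phi.T @ (p - z) / len(z)
    return lambda Xq: 1.0 / (1.0 + np.exp(-np.clip(features(Xq) @ w, -30, 30)))

# Initial design and BO loop.
X = rng.uniform(-1.0, 2.0, size=10)
y = objective(X)
gamma = 0.25                          # quantile defining the improvement region
for _ in range(15):
    tau = np.quantile(y, gamma)
    z = (y <= tau).astype(float)      # binary labels: 1 = "good" points
    pi = fit_classifier(X, z)         # class-probability estimator
    cand = rng.uniform(-1.0, 2.0, size=256)
    x_next = cand[np.argmax(pi(cand))]  # maximize classifier output, not EI
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

print(round(float(y.min()), 3))
```

Note that no Gaussian-process posterior is ever computed: any classifier with calibrated probabilities (random forests, neural networks, etc.) can be dropped in, which is the source of the expressiveness and scalability advantages mentioned above.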