Soubhik Barari


Overview of Bayesian Optimization

Published 14 Sep 2016

Problem

The problem of hyperparameter optimization (i.e. configuring model parameters) in machine learning is very real and very painful.

The hyperparameter search process is essentially an “exploration-exploitation” trade-off: how long are we willing to keep trying out different parameters for our model in order to find a configuration that’s ‘good enough’?

This particular optimization problem is difficult. Our objective function - the way we decide how good a set of hyperparameters is for our model - is often a long process involving training and validating the model to get its out-of-sample accuracy (or F1-score, etc.) and deciding if it’s ‘good enough’. Imagine doing that thousands of times! For this reason, grid search - exhaustively searching through all possible hyperparameter configurations - might give you a stellar model in the end, but you’ll be waiting… for a long time.

Random search makes the wait less painful. Previous work shows that roughly 60 rounds of random search can produce results nearly as good as an exhaustive grid search for certain models. As it turns out, not all hyperparameters are equally important to tune. The graphic below illustrates this.

Grid search vs. random search (http://blog.kaggle.com/2015/07/16/scikit-learn-video-8-efficiently-searching-for-optimal-tuning-parameters/)
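To get a concrete feel for the difference, here is a minimal sketch (not from the original post) contrasting grid search and random search using scikit-learn’s GridSearchCV and RandomizedSearchCV; the RandomForestClassifier, dataset, and parameter ranges are arbitrary placeholders.

from scipy.stats import randint
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

data_X, data_y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0)

# Grid search: exhaustively tries every combination (here 4 x 4 = 16 configurations).
grid = GridSearchCV(model, {"n_estimators": [10, 50, 100, 200],
                            "max_depth": [2, 4, 8, 16]}, cv=3)
grid.fit(data_X, data_y)

# Random search: samples a fixed budget of configurations from the same space.
rand = RandomizedSearchCV(model, {"n_estimators": randint(10, 200),
                                  "max_depth": randint(2, 16)},
                          n_iter=16, cv=3, random_state=0)
rand.fit(data_X, data_y)

print(grid.best_params_, grid.best_score_)
print(rand.best_params_, rand.best_score_)

Both searches score every candidate by cross-validation; the only difference is how the candidates are chosen.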

Although this approach is nicer, it still feels like we’re stabbing in the dark for hyperparameters. More than that, as our models get more and more computationally expensive and our training/validation routines get more complex, it doesn’t feel like we’re being smart about which hyperparameter set our model should try next … given the ones it’s already tried on for size.

How can we do better?

The Bayesian way

Bayesian optimization lends itself quite perfectly to our problem of machine learning configuration for two reasons:

(1) it allows us to model the response surface of our model’s performance using a cheaper surrogate function.

(2) it can perform the minimal number of search rounds until there is no further expected improvement in the true objective function.

There are many different ways to create this “surrogate” function using various estimators (random forests, etc.).

In the standard approach to Bayesian optimization, we construct this surrogate function for the objective using a prior and with each evaluation of the true response surface, we can perform posterior predictive inference on the performance of hyperparameters we haven’t tried yet. In this manner, we can beautifully arrive at an optimum with some amount of confidence.

How it works

Our problem is defined as follows. Given some “tried” hyperparameter sets $x_{1:n} = x_1, \dots, x_n$ and their observed response values (model performance) $y_{1:n} = y_1, \dots, y_n$:

Can we conditionally predict the response value $y_{n+1}$ of a new hyperparameter set $x_{n+1}$?

Gaussian Process

The primary mechanism that allows us to find this posterior is the Gaussian Process (GP). A GP is a collection of random variables, one for each point $x$, such that any finite subset of them, $f(x_1), \dots, f(x_n)$, is drawn from a multivariate Gaussian distribution.

The clever thing about the structure of Gaussian Processes is consistency. If the GP assumes:

$$\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \sim \mathcal{N}\left( \begin{bmatrix} \mu_1 \\ \mu_2 \end{bmatrix}, \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix} \right)$$

it follows that:

$$y_1 \sim \mathcal{N}(\mu_1, \Sigma_{11})$$

where $y_1$ is sampled from the corresponding mean subvector and covariance submatrix of the larger Gaussian.

Indeed, this implies that if we assume a Gaussian Process prior for the generation of our observed $y_{1:n}$, then for some covariance matrix $\Sigma$, we have:

$$y_{1:n} \sim \mathcal{N}(0, \Sigma)$$

and consequently the joint distribution:

$$\begin{bmatrix} y_{1:n} \\ y_{n+1} \end{bmatrix} \sim \mathcal{N}(0, \Sigma')$$

where $\Sigma'$ is the covariance matrix extended to include the new point $x_{n+1}$.
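As a small numerical illustration of the consistency property (my own addition, not the post’s code), marginalizing a 3-dimensional Gaussian over its first two coordinates gives a 2-dimensional Gaussian with the corresponding mean subvector and covariance submatrix; the particular mean and covariance below are arbitrary.

import numpy as np

rng = np.random.RandomState(0)
mu    = np.array([0.0, 1.0, -1.0])
Sigma = np.array([[1.0, 0.8, 0.1],
                  [0.8, 1.0, 0.3],
                  [0.1, 0.3, 1.0]])

samples = rng.multivariate_normal(mu, Sigma, size=100000)

# Empirical mean/covariance of the first two coordinates...
print(samples[:, :2].mean(axis=0))   # ~ [0.0, 1.0]
print(np.cov(samples[:, :2].T))      # ~ [[1.0, 0.8], [0.8, 1.0]]
# ...match the corresponding subvector and submatrix of the full Gaussian.
print(mu[:2], Sigma[:2, :2])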

Ok. So how exactly do we model the covariance matrix, i.e. how our individual hyperparameter scores “relate” to each other, and how that relationship changes in the posterior after scoring new hyperparameters?

Kernel method

In Bayesian optimization, we assume the covariance of two scores, $y_i$ and $y_j$, depends on a covariance function $k(x_i, x_j)$ applied to their corresponding hyperparameter sets, $x_i$ and $x_j$.

Recall the kernel method in machine learning: instead of performing an inner product between two high-dimensional feature vectors $\phi(x_i)$ and $\phi(x_j)$ in a parameter space, we may instead define a kernel $k(x_i, x_j) = \langle \phi(x_i), \phi(x_j) \rangle$. For a simple covariance function for our GP, we can assume a squared exponential:

$$k(x_i, x_j) = \exp\left(-\tfrac{1}{2}\lVert x_i - x_j \rVert^2\right)$$

The squared exponential is a convenient way to model score covariance here: hyperparameter sets that are very different will have scores that barely co-vary, while “nearby” hyperparameter sets will have highly co-varying scores (ah, the beauty of the exponential).
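As a quick numerical illustration (my own addition, using a unit length-scale as above), evaluating the squared exponential on a few pairs shows the decay:

import numpy as np

k = lambda x, y: np.exp(-0.5 * np.linalg.norm(np.atleast_1d(x) - np.atleast_1d(y))**2)

print(k(1.0, 1.1))   # ~0.995: nearly identical hyperparameters -> strongly co-varying scores
print(k(1.0, 2.0))   # ~0.607: moderately related
print(k(1.0, 5.0))   # ~0.0003: far-apart hyperparameters -> essentially uncorrelated scores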

Building up from a covariance function, we can create a covariance matrix for our prior based on the observed values $y_{1:n}$:

$$\mathbf{K} = \begin{bmatrix} k(x_1, x_1) & \cdots & k(x_1, x_n) \\ \vdots & \ddots & \vdots \\ k(x_n, x_1) & \cdots & k(x_n, x_n) \end{bmatrix}$$

And so our full joint distribution using kernels is:

$$\begin{bmatrix} y_{1:n} \\ y_{n+1} \end{bmatrix} \sim \mathcal{N}\left(0, \begin{bmatrix} \mathbf{K} & \mathbf{k} \\ \mathbf{k}^T & k(x_{n+1}, x_{n+1}) \end{bmatrix}\right), \qquad \mathbf{k} = \begin{bmatrix} k(x_{n+1}, x_1) & \cdots & k(x_{n+1}, x_n) \end{bmatrix}^T$$

See relevant machine learning literature (perhaps even a future post) about important properties and invariants of kernels.

Posterior predictive distribution

With the beauty of the Gaussian prior, we can now derive a closed-form expression for the posterior predictive distribution of $y_{n+1}$.

In Bayesian terminology, the MAP (maximum a posteriori) estimate for $y_{n+1}$ would simply be the point value with the single highest probability in the posterior distribution. While this is nice and simple, we’d prefer to incorporate the entire posterior distribution to weight over all possible predictions for $y_{n+1}$. This is called posterior predictive inference.

Using some math, we can write out the distribution used to draw $y_{n+1}$:

$$y_{n+1} \mid y_{1:n} \sim \mathcal{N}(\mu_{n+1}, \sigma^2_{n+1})$$

where:

$$\mu_{n+1} = \mathbf{k}^T \mathbf{K}^{-1} y_{1:n}, \qquad \sigma^2_{n+1} = k(x_{n+1}, x_{n+1}) - \mathbf{k}^T \mathbf{K}^{-1} \mathbf{k}$$

To add one step, we assume that each observed $y_i$ is drawn with some amount of Gaussian noise, i.e.

$$y_i = f(x_i) + \epsilon$$

where $\sigma^2_{noise}$ is the noise variance of some noise factor $\epsilon$ such that $\epsilon \sim \mathcal{N}(0, \sigma^2_{noise})$.

Then we can adjust our predictive equations to include this noise factor:

$$\mu_{n+1} = \mathbf{k}^T (\mathbf{K} + \sigma^2_{noise} I)^{-1} y_{1:n}, \qquad \sigma^2_{n+1} = k(x_{n+1}, x_{n+1}) - \mathbf{k}^T (\mathbf{K} + \sigma^2_{noise} I)^{-1} \mathbf{k}$$

Acquisition function

Ok, so we have a way of modelling our expensive hyperparameter scoring function. How do we then “intelligently” select the next hyperparameter set to try out given some past trials?

We introduce the idea of an acquisition function. Given some choice of hyperparameter vector $x$, the acquisition function $a(x)$ computes a score based on some criterion, having observed the past trials $x_{1:n}$ and their responses $y_{1:n}$. We select the $x$ out of all possible candidates that maximizes (or, depending on the formulation, minimizes) this score.

An example of this stepwise ‘select-and-observe’ process is well illustrated in this graphic:

Stepwise acquisition of 'promising' new points.

The most intuitive acquisition function is perhaps Probability of Improvement, which selects as the next point the $x$ that has the highest probability of an improved response value over the current observed best $y_{best}$. The acquisition function for potential points looks like:

$$a_{PI}(x) = \Phi\left(\frac{\mu(x) - y_{best}}{\sigma(x)}\right)$$

where $\Phi$ is the cumulative distribution function of the standard normal. Intuitively, this makes sense to use, since $\frac{\mu(x) - y_{best}}{\sigma(x)}$ is the Z-score of the current best $y_{best}$ under the Gaussian Process’s predictive Gaussian at $x$.
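Here is a minimal sketch of Probability of Improvement written against the posterior_Mean and posterior_Cov helpers defined in the Code section below (my own addition; the post’s code only implements Expected Improvement, and the y_best argument is a hypothetical name for the current best observed score):

def acq_PI(x_try, X, sigma, y, y_best):
    # Probability of Improvement: Phi((mu(x) - y_best) / sigma(x))
    mu  = posterior_Mean(X, x_try, sigma, y)[0]
    std = math.sqrt(max(posterior_Cov(X, x_try, sigma)[0], 0))
    if std == 0:
        return 0.0   # no uncertainty left at this point
    return scipy.stats.norm.cdf((mu - y_best) / std)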

Instead of finding the point that has some high probability of improving on our best score, we could instead maximize the Expected Improvement. I personally find this to be a powerful, but simple method of picking the next hyperparameter set:

$$a_{EI}(x) = \sigma(x)\left(\gamma(x)\,\Phi(\gamma(x)) + \phi(\gamma(x))\right), \qquad \gamma(x) = \frac{\mu(x) - y_{best}}{\sigma(x)}$$

where $\phi$ is the density of the standard normal.

Other interesting acquisition functions also exist that allow the user to tune exploration vs. exploitation of the hyperparameter space; see [2].
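One well-known example is the GP upper confidence bound (GP-UCB), which exposes the exploration-exploitation trade-off through an explicit parameter. A rough sketch in the same style as the post’s helpers below (my own addition; the default kappa value is arbitrary):

def acq_UCB(x_try, X, sigma, y, kappa=2.0):
    # Larger kappa favors exploration (high-uncertainty points);
    # smaller kappa favors exploitation (points with high predicted mean).
    mu  = posterior_Mean(X, x_try, sigma, y)[0]
    std = math.sqrt(max(posterior_Cov(X, x_try, sigma)[0], 0))
    return mu + kappa * std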

Code

Now onto some coding to see this stuff in action! We can actually quite easily implement a basic version of Bayesian optimization with accompanying visualizations in Python using numpy, scipy, matplotlib. If you like pretty plots, I would recommend using the seaborn library (an extension of matplotlib).

Setup

The basic header of our script:

import random
import math
import numpy as np
import scipy.stats
import matplotlib.pyplot as plt

To ‘animate’ our Bayesian posteriors, we can turn on interactive mode in matplotlib:

plt.ion()

Some examples of basic kernel functions that we can use:

linear_kernel = lambda x,y: 1*np.dot(x,y)
sq_exp_kernel = lambda x,y: math.exp(-0.5*(np.linalg.norm(x-y))**2)

And now for the three key functions we need to make this work…

The kernel matrix builder:

def K(X0, X1, kernel=sq_exp_kernel):
    # Build the kernel (Gram) matrix between two collections of points.
    K = []
    for x0 in X0:
        k_x_x = []
        for x1 in X1:
            k_x_x.append(kernel(x0, x1))
        K.append(k_x_x)
    # If X0 holds a single point, return a flat vector instead of a 1 x n matrix.
    if len(K) == 1:
        K = K[0]
    return np.array(K)

The posterior prediction function for a new hyperparameter set:

def posterior_Mean(X, X_new, sigma, y):
    # Predictive mean: k^T (K + sigma*I)^{-1} y
    X_new = [X_new]
    K_inv = np.linalg.inv(K(X, X) + sigma*np.eye(len(X)))
    return np.dot(np.dot(K(X_new, X), K_inv), np.array([y]).T).flatten()

And the posterior covariance-updater, given a new point:

def posterior_Cov(X, X_new, sigma):
    # Predictive variance: k(x_new, x_new) - k^T (K + sigma*I)^{-1} k
    X_new = [X_new]
    K_inv = np.linalg.inv(K(X, X) + sigma*np.eye(len(X)))
    return K(X_new, X_new) - np.dot(np.dot(K(X_new, X), K_inv), K(X, X_new))
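As a quick sanity check (my own addition, not in the original post), with zero noise the posterior mean at an already-observed point should reproduce its observed value, and the posterior variance there should collapse to zero; X_chk and y_chk are made-up example points.

X_chk = [1.0, 3.0]
y_chk = [0.5, -0.2]

print(posterior_Mean(X_chk, 1.0, 0, y_chk))   # ~[0.5]
print(posterior_Cov(X_chk, 1.0, 0))           # ~[0.0]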

We can create a dummy ‘machine learning model’ that has a single hyperparameter with a range of possible values from 0 to 10 and a simulated hyperparameter scoring function f. We’ll also assume that we’ve already observed the responses of 4 points:

f       = lambda x: np.sin(x)*x
X       = np.linspace(0, 10, 100)

X_obs   = X[[0, 10, 40, 90]]
y_obs   = f(X_obs)
X_tst   = np.delete(X, [0, 10, 40, 90])

Here X_tst is the set of all hyperparameters whose scores we will subsequently predict in our Bayesian optimization process. For now, we set our noise variance to zero.

sigma = 0

Randomly select hyperparameters

The following code will randomly sample points from X_tst, evaluate their scores using f, and update the Gaussian Process posterior one point at a time, visualizing each update with 95% confidence intervals of the surrogate response function for each point.

for i, x_ in enumerate(random.sample(list(X_tst), 10)):

    # Plot previously observed points
    plt.scatter(X_obs, y_obs, marker="+", c="k", linewidth=0.3, s=50, label="obs")

    # Posterior predictive mean and standard deviation at every candidate point
    # (variance clipped at 0 to guard against small negative values from numerical error)
    sigmas  = np.array([math.sqrt(max(posterior_Cov(X_obs, pt, sigma)[0], 0)) for pt in X])
    y_preds = np.array([posterior_Mean(X_obs, pt, sigma, y_obs)[0] for pt in X])

    # Plot newly observed point
    plt.scatter(x_, f(x_), marker="o", c="b", s=50, label="new")
    
    # Update with new observed point
    X_obs = np.append(X_obs, x_)
    y_obs = np.append(y_obs, f(x_))

    plt.title("no noise")
    plt.xlabel("parameter choice")
    plt.ylabel("response value")
    plt.fill(np.concatenate([X, X[::-1]]), np.concatenate([y_preds + 1.96*sigmas, (y_preds - 1.96*sigmas)[::-1]]), 
           alpha=0.5, facecolor='steelblue', label="95% confidence interval")
    plt.legend()
    plt.pause(0.5)

    plt.clf()

This looks like:

This shows exactly how the GP models hyperparameter score as a function of the hyperparameter choice. However, we’re again just randomly selecting hyperparameters from the space of choices X without using any acquisition functions to guide us…

Randomly select hyperparameters with noise

Now, let’s say that we think our machine learning model’s performance isn’t very stable. This is a reasonable assumption since model performance usually differs depending on the data it’s trained on. For a “high-quality” training set, it might perform with 90% accuracy, while for a “lower-quality” sample, it might only yield 70% accuracy. Let’s set our noise variance to an arbitrary value and see how the confidence intervals for the objective function changes:

sigma = 0.2

We now predict hyperparameter scores with a little bit more uncertainty. Pretty cool! But we’re still just stabbing in the dark for hyperparameters. Let’s see what this looks like when we use our acquisition function to tell us which hyperparameter our model should try on next.

Fixed rounds of hyperparameter selection using EI

An implementation of the acquisition function:

def acq(x_try, X, sigma, y):
    # Expected Improvement: sigma(x) * (gamma(x)*CDF(gamma(x)) + PDF(gamma(x)))
    g   = gamma(x_try, X, sigma, y)
    std = math.sqrt(max(posterior_Cov(X, x_try, sigma)[0], 0))
    return std * (g*scipy.stats.norm.cdf(g) + scipy.stats.norm.pdf(g))

def gamma(x_try, X, sigma, y):
    # gamma(x) = (mu(x) - best observed score) / sigma(x)
    global max_score
    std = math.sqrt(max(posterior_Cov(X, x_try, sigma)[0], 0))
    if std == 0:
        return 0.0
    return (posterior_Mean(X, x_try, sigma, y)[0] - max_score) / std

Now, instead of randomly selecting points from X_tst, we’ll pick the next point to evaluate using the acquisition function like so:

    acq_scores = [acq(x, X_obs, sigma, y_obs) for x in X]
    x_next = X[np.argmax(acq_scores)]
    y_new  = f(x_next)

The full code visualizing each update as well as the acquisition score for all hyperparameter choices (not on the same y-axis):

max_score = np.max(y_obs)   # best response value observed so far
for i in range(5):

    # Visualize acquisition function
    acq_scores = [acq(x, X_obs, sigma, y_obs) for x in X]
    plt.fill(np.concatenate([X, X[::-1]]), np.concatenate([[0]*len(X), acq_scores[::-1]]), 
                facecolor='green', alpha=0.25, label="acq")

    # Plot previously observed points
    plt.scatter(X_obs, y_obs, marker="+", c="k", linewidth=0.3, s=50, label="obs.")

    # Posterior predictive mean and standard deviation for the 95% confidence band
    sigmas  = np.array([math.sqrt(max(posterior_Cov(X_obs, pt, sigma)[0], 0)) for pt in X])
    y_preds = np.array([posterior_Mean(X_obs, pt, sigma, y_obs)[0] for pt in X])

    plt.fill(np.concatenate([X, X[::-1]]), 
                            np.concatenate([y_preds + 1.96*sigmas, (y_preds - 1.96*sigmas)[::-1]]), 
                            alpha=0.5, facecolor='steelblue', label="95% confidence interval")

    # Pick next point
    x_next = X[np.argmax(acq_scores)]

    # Evaluate point
    y_new  = f(x_next)

    # Check if it's the 'best' we've seen
    max_score = max(max_score, y_new)

    # Plot newly observed point
    plt.scatter(x_next, y_new, marker="o", c="b", s=50)
    
    # Update
    X_obs = np.append(X_obs, x_next)
    y_obs = np.append(y_obs, y_new)

    plt.title("acquisition maximization (5 rounds)")
    plt.xlabel("parameter choice")
    plt.ylabel("response value")
    plt.legend()
    plt.pause(0.5)
    plt.clf()

Here’s what that looks like:

Note that each new point (blue dot) is chosen so as to maximize the acquisition function. Our acquisition function is now truly guiding us in finding better hyperparameters!

Hyperparameter selection using EI threshold

Lastly, we want to remove any remaining hyperparameters from our hyperparameter optimizer itself. So instead of running the search for some arbitrary number of rounds, let’s perform the search until the EI drops below some reasonable threshold (e.g. 1.0):

EI = float("inf")   # ensure the loop runs at least once
while EI > 1.0:

    # Visualize acquisition function
    acq_scores = [acq(x, X_obs, sigma, y_obs) for x in X]
    plt.fill(np.concatenate([X, X[::-1]]), np.concatenate([[0]*len(X), acq_scores[::-1]]), 
                facecolor='green', alpha=0.25, label="acq")

    # Plot previously observed points
    plt.scatter(X_obs, y_obs, marker="+", c="k", linewidth=0.3, s=50, label="obs.")

    # Posterior predictive mean and standard deviation for the 95% confidence band
    sigmas  = np.array([math.sqrt(max(posterior_Cov(X_obs, pt, sigma)[0], 0)) for pt in X])
    y_preds = np.array([posterior_Mean(X_obs, pt, sigma, y_obs)[0] for pt in X])

    plt.fill(np.concatenate([X, X[::-1]]), 
                            np.concatenate([y_preds+1.96*sigmas, (y_preds-1.96*sigmas)[::-1]]), 
                            alpha=0.5, facecolor='steelblue', label="95% confidence interval")

    # Pick next point
    EI     = max(acq_scores)
    x_next = X[np.argmax(acq_scores)]

    # Evaluate point
    y_new  = f(x_next)

    # Check if it's the 'best' we've seen
    max_score = max(max_score, y_new)

    # Plot newly observed point
    plt.scatter(x_next, y_new, marker="o", c="b", s=50)
    
    # Update
    X_obs = np.append(X_obs, x_next)
    y_obs = np.append(y_obs, y_new)

    plt.title("acquisition maximization (EI > 1.0)")
    plt.xlabel("parameter choice")
    plt.ylabel("response value")
    plt.legend()
    plt.pause(0.5)

    plt.clf()

Our hyperparameter selector picks the optimal hyperparameter set in just two rounds!

Conclusion

In summary, this post serves as a light, but hearty introduction to Bayesian optimization, a better way to fine-tune your machine learning models than just running grid search and going off and taking a nap!

Putting together all the above code, here’s the full Python implementation of Bayesian optimization. We used just a single hyperparameter dimension, but this scales up just as well to a grid of hyperparameters. Try it out on a dummy model or on a machine learning model that you’re actually trying to tune!
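For instance, to tune the regularization parameter C of a support-vector classifier with scikit-learn, you could swap out the dummy f above for a cross-validation score. A rough sketch (my own addition; the dataset and parameter range are placeholders):

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

data_X, data_y = load_iris(return_X_y=True)

# Response surface: mean 3-fold CV accuracy as a function of the hyperparameter C.
f = lambda c: cross_val_score(SVC(C=float(c)), data_X, data_y, cv=3).mean()

X     = np.linspace(0.01, 10, 100)   # candidate values of C
X_obs = X[[0, 25, 50, 99]]           # a few initial trials
y_obs = np.array([f(c) for c in X_obs])

The rest of the loops above then run unchanged against this new, genuinely expensive objective.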

Some things I didn’t really talk much about were kernels, the assumptions behind the Bayesian optimization response surface, and other methods of modelling the hyperparameter performance function. Check out the related papers for more details on those nuances. Cheers!


[1] Rasmussen, C. E., and Williams, C. K. I. Gaussian Processes for Machine Learning. The MIT Press, 2006. Easily the most thorough introduction to GP applications.

[2] Snoek, J., Larochelle, H., and Adams, R. P. Practical Bayesian Optimization of Machine Learning Algorithms. Advances in Neural Information Processing Systems, 2012. Connects the dots between the theory of GPs and ML optimization.