trieste.acquisition.optimizer#
This module contains functionality for optimizing AcquisitionFunctions over SearchSpaces.
Module Contents#
-
NUM_SAMPLES_MIN
: int = 5000[source]# The default minimum number of initial samples for the generate_continuous_optimizer() and generate_random_search_optimizer() functions, used for determining the number of initial samples in the multi-start acquisition function optimization.
-
NUM_SAMPLES_DIM
: int = 1000[source]# The default minimum number of initial samples per dimension of the search space for the generate_continuous_optimizer() function in automatic_optimizer_selector(), used for determining the number of initial samples in the multi-start acquisition function optimization.
-
NUM_RUNS_DIM
: int = 10[source]# The default minimum number of optimization runs per dimension of the search space for the generate_continuous_optimizer() function in automatic_optimizer_selector(), used for determining the number of acquisition function optimizations to be performed in parallel.
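For illustration, here is how these constants translate into concrete settings, following the advice given under generate_continuous_optimizer() below; the dimensionality value is a hypothetical example, not part of the module.

```python
from trieste.acquisition.optimizer import (
    NUM_RUNS_DIM,
    NUM_SAMPLES_DIM,
    NUM_SAMPLES_MIN,
)

search_space_dim = 8  # hypothetical dimensionality of the search space

# Recommended settings for generate_continuous_optimizer:
num_initial_samples = max(NUM_SAMPLES_MIN, NUM_SAMPLES_DIM * search_space_dim)  # 8000
num_optimization_runs = NUM_RUNS_DIM * search_space_dim  # 80
```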
-
exception
FailedOptimizationError
[source]# Bases:
Exception
Raised when an acquisition optimizer fails to optimize.
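A minimal sketch of handling this exception; search_space and acquisition_fn are hypothetical placeholders for objects defined elsewhere.

```python
from trieste.acquisition.optimizer import (
    FailedOptimizationError,
    generate_continuous_optimizer,
)

optimizer = generate_continuous_optimizer()
try:
    point = optimizer(search_space, acquisition_fn)  # placeholders, defined elsewhere
except FailedOptimizationError:
    # Every optimization run (including recovery runs) failed to converge;
    # fall back to a random point from the search space.
    point = search_space.sample(1)
```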
-
AcquisitionOptimizer
[source]# Type alias for a function that returns the single point that maximizes an acquisition function over a search space, or the V points that maximize a vectorized acquisition function (as represented by an acquisition-int tuple).
If this function receives a search space with points of shape [D] and an acquisition function with input shape […, 1, D] and output shape […, 1], the AcquisitionOptimizer return shape should be [1, D]. If instead it receives a search space and a tuple containing the acquisition function and its vectorization V, then the AcquisitionOptimizer return shape should be [V, D].
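Since an AcquisitionOptimizer is simply a callable mapping a search space and a target function to the maximizing point(s), a toy implementation can be sketched as follows; it handles only the non-vectorized case, and the random-sampling strategy is purely illustrative.

```python
import tensorflow as tf
from trieste.acquisition.interface import AcquisitionFunction
from trieste.space import SearchSpace
from trieste.types import TensorType

def toy_sample_optimizer(space: SearchSpace, target_func: AcquisitionFunction) -> TensorType:
    # Evaluate the acquisition function at random samples and keep the best.
    samples = space.sample(1000)               # [1000, D]
    values = target_func(samples[:, None, :])  # [1000, 1]
    best = tf.argmax(values[:, 0])             # index of the maximiser
    return tf.gather(samples, best[None])      # [1, D]
```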
-
automatic_optimizer_selector
(space: trieste.space.SearchSpace, target_func: Union[trieste.acquisition.interface.AcquisitionFunction, Tuple[trieste.acquisition.interface.AcquisitionFunction, int]]) → trieste.types.TensorType[source]# A wrapper around our AcquisitionOptimizers. This function selects and applies an AcquisitionOptimizer appropriate for the problem's SearchSpace.
- Parameters
space – The space of points over which to search, for points with shape [D].
target_func – The function to maximise, with input shape […, 1, D] and output shape […, 1].
- Returns
The batch of points in space that maximises target_func, with shape [1, D].
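A hedged usage sketch; the quadratic toy_acquisition below is a stand-in for a real acquisition function, not part of trieste.

```python
import tensorflow as tf
from trieste.acquisition.optimizer import automatic_optimizer_selector
from trieste.space import Box

def toy_acquisition(x: tf.Tensor) -> tf.Tensor:
    # Input shape [..., 1, D], output shape [..., 1].
    return -tf.reduce_sum((x - 0.5) ** 2, axis=-1)

space = Box([0.0, 0.0], [1.0, 1.0])
point = automatic_optimizer_selector(space, toy_acquisition)  # [1, 2], near [0.5, 0.5]
```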
-
optimize_discrete
(space: trieste.space.DiscreteSearchSpace, target_func: Union[trieste.acquisition.interface.AcquisitionFunction, Tuple[trieste.acquisition.interface.AcquisitionFunction, int]]) → trieste.types.TensorType[source]# An AcquisitionOptimizer for DiscreteSearchSpace spaces. When this function receives an acquisition-integer tuple as its target_func, it evaluates all the points in the search space for each of the individual V functions making up target_func.
- Parameters
space – The space of points over which to search, for points with shape [D].
target_func – The function to maximise, with input shape […, V, D] and output shape […, V].
- Returns
The V points in space that maximise target_func, with shape [V, D].
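A hedged sketch of exhaustive discrete optimization; the candidate points and toy_acquisition are illustrative, not part of trieste.

```python
import tensorflow as tf
from trieste.acquisition.optimizer import optimize_discrete
from trieste.space import DiscreteSearchSpace

space = DiscreteSearchSpace(tf.constant([[0.0], [0.25], [0.5], [0.75], [1.0]]))

def toy_acquisition(x: tf.Tensor) -> tf.Tensor:
    # Input shape [..., 1, 1], output shape [..., 1].
    return -tf.reduce_sum((x - 0.6) ** 2, axis=-1)

best = optimize_discrete(space, toy_acquisition)  # [[0.5]]: the candidate closest to 0.6
```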
-
generate_continuous_optimizer
(num_initial_samples: int = NUM_SAMPLES_MIN, num_optimization_runs: int = 10, num_recovery_runs: int = 10, optimizer_args: Optional[dict[str, Any]] = None) → AcquisitionOptimizer[Box | TaggedProductSearchSpace][source]# Generate a gradient-based optimizer for Box and TaggedProductSearchSpace spaces and batches of size 1. In the case of a TaggedProductSearchSpace, we perform gradient-based optimization across all Box subspaces, starting from the best location found across a sample of num_initial_samples random points.
We advise the user to either use the default NUM_SAMPLES_MIN for num_initial_samples, or NUM_SAMPLES_DIM times the dimensionality of the search space, whichever is greater. Similarly, for num_optimization_runs, we recommend using NUM_RUNS_DIM times the dimensionality of the search space.
This optimizer uses Scipy’s L-BFGS-B optimizer. We run num_optimization_runs separate optimizations in parallel, each starting from one of the best num_optimization_runs initial query points.
If all num_optimization_runs optimizations fail to converge, we run up to num_recovery_runs additional runs starting from random locations (also run in parallel).
- Parameters
num_initial_samples – The size of the random sample used to find the starting point(s) of the optimization.
num_optimization_runs – The number of separate optimizations to run.
num_recovery_runs – The maximum number of recovery optimization runs in case of failure.
optimizer_args – The keyword arguments to pass to the Scipy L-BFGS-B optimizer. See the documentation of scipy.optimize.minimize for details of which arguments can be passed. Note that method, jac and bounds cannot/should not be changed.
- Returns
The acquisition optimizer.
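A hedged usage sketch following the sizing advice above; acquisition_fn is a placeholder defined elsewhere, and the optimizer_args value is just one example of a valid scipy.optimize.minimize option.

```python
from trieste.acquisition.optimizer import (
    NUM_RUNS_DIM,
    NUM_SAMPLES_DIM,
    NUM_SAMPLES_MIN,
    generate_continuous_optimizer,
)
from trieste.space import Box

dim = 6  # hypothetical dimensionality
space = Box([0.0] * dim, [1.0] * dim)

optimizer = generate_continuous_optimizer(
    num_initial_samples=max(NUM_SAMPLES_MIN, NUM_SAMPLES_DIM * dim),
    num_optimization_runs=NUM_RUNS_DIM * dim,
    optimizer_args={"options": {"maxiter": 100}},  # forwarded to Scipy's minimize
)
point = optimizer(space, acquisition_fn)  # acquisition_fn defined elsewhere; shape [1, 6]
```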
-
_perform_parallel_continuous_optimization
(target_func: trieste.acquisition.interface.AcquisitionFunction, space: trieste.space.SearchSpace, starting_points: trieste.types.TensorType, optimizer_args: dict[str, Any]) → Tuple[trieste.types.TensorType, trieste.types.TensorType, trieste.types.TensorType, trieste.types.TensorType][source]# A function to perform parallel optimization of our acquisition functions using Scipy. We perform L-BFGS-B starting from each of the locations contained in starting_points, i.e. the number of individual optimization runs is given by the leading dimension of starting_points.
To provide a parallel implementation of Scipy's L-BFGS-B that can leverage batch calculations with TensorFlow, this function uses the greenlet package to run each individual optimization on micro-threads.
L-BFGS-B updates for each individual optimization are performed by independent greenlets working with NumPy arrays; however, the evaluation of our acquisition function (and its gradients) is calculated in parallel (for each optimization step) using TensorFlow.
For TaggedProductSearchSpace spaces we only apply gradient updates to the Box subspaces, fixing the discrete elements to the best values found across the initial random search. To fix these discrete elements, we optimize over a continuous Box relaxation of the discrete subspaces which has equal upper and lower bounds, i.e. we specify an equality constraint for these dimensions in the Scipy optimizer.
This function also supports the maximization of vectorized target functions (with vectorization V).
- Parameters
target_func – The function(s) to maximise, with input shape […, V, D] and output shape […, V].
space – The original search space.
starting_points – The points at which to begin our optimizations of shape [num_optimization_runs, V, D]. The leading dimension of starting_points controls the number of individual optimization runs for each of the V target functions.
optimizer_args – Keyword arguments to pass to the Scipy optimizer.
- Returns
A tuple containing the failure statuses, maximum values, maximisers and number of evaluations for each of our optimizations.
-
class
ScipyLbfgsBGreenlet
[source]# Bases:
greenlet.greenlet
Worker greenlet that runs a single Scipy L-BFGS-B optimization. Each greenlet performs all the L-BFGS-B update steps required for an individual optimization. However, the evaluation of our acquisition function (and its gradients) is delegated back to the main TensorFlow process (the parent greenlet), where evaluations can be made efficiently in parallel.
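The delegation pattern can be illustrated with a self-contained greenlet sketch; this is a toy gradient-descent loop, not trieste's actual implementation. Each worker switches back to the parent to request an evaluation, and the parent answers all pending requests in one batched step.

```python
import greenlet
import numpy as np

class ToyWorker(greenlet.greenlet):
    """Runs one optimization, delegating evaluations to the parent greenlet."""

    def run(self, x: np.ndarray) -> np.ndarray:
        for _ in range(20):
            grad = self.parent.switch(x)  # hand x to the parent, receive the gradient
            x = x - 0.1 * grad            # toy gradient-descent update
        return x

workers = [ToyWorker() for _ in range(4)]
# Prime each worker; it runs until its first switch() back to us.
pending = [w.switch(np.array([float(i + 1)])) for i, w in enumerate(workers)]

while any(not w.dead for w in workers):
    # Here trieste would make one batched TensorFlow call for all pending
    # points; this toy uses the gradient of f(x) = x**2 instead.
    grads = [2.0 * x for x in pending]
    pending = [
        w.switch(g) if not w.dead else x
        for w, g, x in zip(workers, grads, pending)
    ]
# pending now holds each worker's final point (all near 0 after 20 steps).
```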
-
get_bounds_of_box_relaxation_around_point
(space: trieste.space.TaggedProductSearchSpace, current_point: trieste.types.TensorType) → scipy.optimize.Bounds[source]# A function to return the bounds of a continuous relaxation of a TaggedProductSearchSpace, i.e. replacing discrete spaces with continuous spaces. In particular, all DiscreteSearchSpace subspaces are replaced with a new DiscreteSearchSpace fixed at their respective component of the specified current_point. Note that all Box subspaces remain the same.
- Parameters
space – The original search space.
current_point – The point at which to make the continuous relaxation.
- Returns
Bounds for the Scipy optimizer.
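A hedged sketch of the relaxation; the concrete subspaces, tags, and point (including its [1, D] shape) are illustrative assumptions.

```python
import tensorflow as tf
from trieste.acquisition.optimizer import get_bounds_of_box_relaxation_around_point
from trieste.space import Box, DiscreteSearchSpace, TaggedProductSearchSpace

space = TaggedProductSearchSpace(
    spaces=[Box([0.0], [1.0]), DiscreteSearchSpace(tf.constant([[2.0], [5.0]]))],
    tags=["continuous", "discrete"],
)
current_point = tf.constant([[0.3, 5.0]])  # shape [1, D]
bounds = get_bounds_of_box_relaxation_around_point(space, current_point)
# The Box dimension keeps its bounds, while the discrete dimension is pinned:
# bounds.lb ~ [0.0, 5.0], bounds.ub ~ [1.0, 5.0]
```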
-
batchify_joint
(batch_size_one_optimizer: AcquisitionOptimizer[trieste.space.SearchSpaceType], batch_size: int) → AcquisitionOptimizer[trieste.space.SearchSpaceType][source]# A wrapper around our
AcquisitionOptimizers. This function wraps an AcquisitionOptimizer to allow it to jointly optimize the batch elements considered by a batch acquisition function.
- Parameters
batch_size_one_optimizer – An optimizer that returns only batch size one, i.e. produces a single point with shape [1, D].
batch_size – The number of points in the batch.
- Returns
An
AcquisitionOptimizer
that will provide a batch of points with shape [B, D].
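A hedged usage sketch; batch_acquisition_fn is a placeholder for a batch acquisition function defined elsewhere.

```python
from trieste.acquisition.optimizer import batchify_joint, generate_continuous_optimizer
from trieste.space import Box

space = Box([0.0, 0.0], [1.0, 1.0])
batch_optimizer = batchify_joint(generate_continuous_optimizer(), batch_size=5)
# batch_acquisition_fn (defined elsewhere) must accept inputs of shape
# [..., 5, D]; the wrapped optimizer returns points of shape [5, 2].
batch = batch_optimizer(space, batch_acquisition_fn)
```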
-
batchify_vectorize
(batch_size_one_optimizer: AcquisitionOptimizer[trieste.space.SearchSpaceType], batch_size: int) → AcquisitionOptimizer[trieste.space.SearchSpaceType][source]# A wrapper around our
AcquisitionOptimizers. This function wraps an AcquisitionOptimizer to allow it to optimize batch acquisition functions. Unlike batchify_joint(), batchify_vectorize() is suitable for an AcquisitionFunction whose individual batch elements can be optimized independently (i.e. they can be vectorized).
- Parameters
batch_size_one_optimizer – An optimizer that returns only batch size one, i.e. produces a single point with shape [1, D].
batch_size – The number of points in the batch.
- Returns
An
AcquisitionOptimizer
that will provide a batch of points with shape [V, D].
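A hedged usage sketch; vectorized_acquisition_fn is a placeholder for a vectorized acquisition function defined elsewhere.

```python
from trieste.acquisition.optimizer import (
    batchify_vectorize,
    generate_continuous_optimizer,
)
from trieste.space import Box

space = Box([0.0, 0.0], [1.0, 1.0])
vector_optimizer = batchify_vectorize(generate_continuous_optimizer(), batch_size=3)
# vectorized_acquisition_fn (defined elsewhere) maps inputs of shape
# [..., 3, D] to values of shape [..., 3]; each of the 3 outputs is
# optimized independently, giving points of shape [3, 2].
points = vector_optimizer(space, vectorized_acquisition_fn)
```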
-
generate_random_search_optimizer
(num_samples: int = NUM_SAMPLES_MIN) → AcquisitionOptimizer[trieste.space.SearchSpace][source]# Generate an acquisition optimizer that samples num_samples random points across the space. The default is to sample at NUM_SAMPLES_MIN locations.
We advise the user to either use the default NUM_SAMPLES_MIN for num_samples, or NUM_SAMPLES_DIM times the dimensionality of the search space, whichever is greater.
- Parameters
num_samples – The number of random points to sample.
- Returns
The acquisition optimizer.
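A hedged usage sketch; the quadratic toy_acquisition is a stand-in for a real acquisition function, not part of trieste.

```python
import tensorflow as tf
from trieste.acquisition.optimizer import generate_random_search_optimizer
from trieste.space import Box

space = Box([0.0] * 3, [1.0] * 3)

def toy_acquisition(x: tf.Tensor) -> tf.Tensor:
    # Input shape [..., 1, 3], output shape [..., 1].
    return -tf.reduce_sum((x - 0.5) ** 2, axis=-1)

random_optimizer = generate_random_search_optimizer(num_samples=10000)
point = random_optimizer(space, toy_acquisition)  # shape [1, 3]
```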