gpflux.optimization#

Optimization-related modules. Currently, this contains only the NatGradModel and NatGradWrapper classes, which integrate gpflow.optimizers.NaturalGradient with Keras.

Package Contents#

class NatGradModel(*args, **kwargs)[source]#

Bases: tf.keras.Model

This is a drop-in replacement for tf.keras.Model when constructing GPflux models using the functional Keras style, to make it work with the NaturalGradient optimizers for q(u) distributions in GP layers.

You must set the natgrad_layers property before compiling the model. Set it to the list of all GPLayers you want to train using natural gradients. You can also set it to True to include all of them.

This model’s compile() method must be passed a list of optimizers: one gpflow.optimizers.NaturalGradient instance per natgrad-trained GPLayer, followed by a regular optimizer (e.g. tf.keras.optimizers.Adam) as the last element, which handles all other parameters (hyperparameters, inducing point locations).
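A minimal sketch of this workflow for a single-layer model, using GPflux’s construct_basic_kernel and construct_basic_inducing_variables helpers; the data, dimensions, gamma, and learning rate below are illustrative placeholders:

    import gpflow
    import numpy as np
    import tensorflow as tf

    import gpflux

    # Placeholder data and dimensions, for illustration only.
    num_data, input_dim, output_dim = 100, 1, 1
    X_train = np.random.randn(num_data, input_dim)
    Y_train = np.random.randn(num_data, output_dim)

    # A single GP layer built from GPflux's helper constructors.
    kernel = gpflux.helpers.construct_basic_kernel(
        gpflow.kernels.SquaredExponential(), output_dim=output_dim
    )
    inducing_variable = gpflux.helpers.construct_basic_inducing_variables(
        num_inducing=20, input_dim=input_dim, output_dim=output_dim
    )
    gp_layer = gpflux.layers.GPLayer(kernel, inducing_variable, num_data=num_data)

    # Functional Keras style, with NatGradModel in place of tf.keras.Model.
    inputs = tf.keras.Input(shape=(input_dim,))
    outputs = gp_layer(inputs)
    model = gpflux.optimization.NatGradModel(inputs=inputs, outputs=outputs)

    # Select the layers to train with natural gradients before compiling,
    # then pass one NaturalGradient optimizer per selected GPLayer, followed
    # by a regular optimizer for all other parameters.
    model.natgrad_layers = True
    model.compile(
        optimizer=[
            gpflow.optimizers.NaturalGradient(gamma=0.1),
            tf.keras.optimizers.Adam(0.01),
        ],
        loss=gpflux.losses.LikelihoodLoss(gpflow.likelihoods.Gaussian()),
    )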

property natgrad_layers: List[gpflux.layers.gp_layer.GPLayer]#

The list of layers in this model that should be optimized using gpflow.optimizers.NaturalGradient.

Getter:

Returns a list of the layers that should be trained using gpflow.optimizers.NaturalGradient.

Setter:

Sets the layers that should be trained using gpflow.optimizers.NaturalGradient. Can be an explicit list or a bool: if set to True, all GPLayer instances in the model’s layers are selected.
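For instance, continuing the sketch above:

    # Train only an explicit subset of layers with natural gradients ...
    model.natgrad_layers = [gp_layer]
    # ... or select every GPLayer instance in the model:
    model.natgrad_layers = True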

property optimizer: tensorflow.optimizers.Optimizer#

HACK to cope with Keras callbacks such as ReduceLROnPlateau and LearningRateScheduler, which are hardcoded to operate on a single optimizer.
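In practice this means learning-rate callbacks can be passed to fit() as usual, acting on the single optimizer this property exposes (presumably the regular optimizer at the end of the list). A sketch continuing from the example above, with placeholder callback settings:

    callback = tf.keras.callbacks.ReduceLROnPlateau(
        monitor="loss", factor=0.5, patience=5
    )
    model.fit(X_train, Y_train, epochs=100, callbacks=[callback])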

train_step(data: Any) → Mapping[str, Any][source]#

The logic for one training step. For more details of the implementation, see TensorFlow’s documentation of how to customize what happens in Model.fit.

class NatGradWrapper(base_model: tf.keras.Model, *args: Any, **kwargs: Any)[source]#

Bases: NatGradModel

Wraps a class-based Keras model (e.g. the return value of gpflux.models.DeepGP.as_training_model) to make it work with gpflow.optimizers.NaturalGradient optimizers. For more details, see NatGradModel.

(Note that you can also pass NatGradModel directly as the DeepGP’s default_model_class or as the model_class argument of as_training_model(); see the sketch below.)

Todo

This class will probably be removed in the future.

Parameters:

base_model – the class-based Keras model to be wrapped
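A sketch of both routes, reusing the gp_layer construction from the first example above and assuming a single-layer DeepGP:

    deep_gp = gpflux.models.DeepGP(
        [gp_layer], gpflux.layers.LikelihoodLayer(gpflow.likelihoods.Gaussian())
    )

    # Route 1: wrap the class-based training model.
    model = gpflux.optimization.NatGradWrapper(deep_gp.as_training_model())
    model.natgrad_layers = [gp_layer]
    model.compile(
        optimizer=[
            gpflow.optimizers.NaturalGradient(gamma=0.1),
            tf.keras.optimizers.Adam(0.01),
        ]
    )

    # Route 2: skip the wrapper by passing NatGradModel as the model class;
    # natgrad_layers and compile() are then set up exactly as in Route 1.
    model = deep_gp.as_training_model(model_class=gpflux.optimization.NatGradModel)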

call(data: Any, training: bool | None = None) → tf.Tensor | gpflow.models.model.MeanAndVariance[source]#

Calls the model on new inputs. Simply passes through to the base_model.