gpflux.models.deep_gp#

This module provides the base implementation for DeepGP models.

Module Contents#

class LayerWithObservations[source]#

Bases: gpflux.layers.trackable_layer.TrackableLayer

By inheriting from this class, Layers indicate that their call() method takes a second observations argument after the customary layer_inputs argument.

This is used to distinguish which layers (unlike most standard Keras layers) require the original inputs and/or targets during training. For example, it is used by the amortized variational inference in the LatentVariableLayer.

abstract call(layer_inputs: gpflow.base.TensorType, observations: gpflux.types.ObservationType | None = None, training: bool | None = None) → tf.Tensor[source]#

The call() method of LayerWithObservations subclasses should accept a second argument, observations. In training mode, this will be the [inputs, targets] of the training points; otherwise, it is None.
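
As an illustration, a minimal hypothetical subclass could look as follows; the identity pass-through is made up purely to show the expected signature:

import tensorflow as tf
from gpflux.models.deep_gp import LayerWithObservations

class PassThroughLayer(LayerWithObservations):
    """Hypothetical layer: returns its inputs unchanged, but can see the targets."""

    def call(self, layer_inputs, observations=None, training=None):
        if training and observations is not None:
            inputs, targets = observations  # the original training data
            # ... e.g. condition an encoder on targets here
        return tf.identity(layer_inputs)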

class LikelihoodLayer(likelihood: gpflow.likelihoods.Likelihood)[source]#

Bases: gpflux.layers.trackable_layer.TrackableLayer

A Keras layer that wraps a GPflow Likelihood. This layer expects a tfp.distributions.MultivariateNormalDiag as its input, describing q(f). When training, calling this class computes the negative variational expectation \(-\mathbb{E}_{q(f)}[\log p(y|f)]\) and adds it as a layer loss. When not training, it computes the mean and variance of y under q(f) using predict_mean_and_var().

Note

Use either this LikelihoodLayer (together with gpflux.models.DeepGP) or LikelihoodLoss (e.g. together with a tf.keras.Sequential model). Do not use both at once because this would add the loss twice.
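
For example (a sketch with a Gaussian observation model):

import gpflow
import gpflux

# Wrap a GPflow likelihood in a Keras-compatible layer:
likelihood_layer = gpflux.layers.LikelihoodLayer(gpflow.likelihoods.Gaussian())
# Pass likelihood_layer (or the bare GPflow likelihood, which DeepGP wraps
# for you) as the likelihood argument of gpflux.models.DeepGP.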

call(inputs: tfp.distributions.MultivariateNormalDiag, targets: gpflow.base.TensorType | None = None, training: bool | None = None) → LikelihoodOutputs[source]#

When training (training=True), this method computes variational expectations (data-fit loss) and adds this information as a layer loss. When testing (the default), it computes the posterior mean and variance of y.

Parameters:

inputs – The output distribution of the previous layer. This is currently expected to be a MultivariateNormalDiag; that is, the preceding GPLayer should have full_cov=full_output_cov=False.

Returns:

A LikelihoodOutputs tuple with the mean and variance of f and, if not training, the mean and variance of y.

Todo

Turn this layer into a DistributionLambda as well and return the correct Distribution instead of a tuple containing mean and variance only.

class Sample[source]#

Bases: abc.ABC

This class represents a sample from a GP, which you can evaluate at new locations within the support of the GP by calling it (that is, via __call__).

Importantly, the same function draw (sample) is evaluated when calling it multiple times; this property is called consistency. Achieving consistency for vanilla GPs is costly, because it scales cubically with the number of evaluation points, but it works with any kernel; this approach is implemented in _efficient_sample_conditional_gaussian(). For KernelWithFeatureDecomposition, the more efficient approach following Wilson et al. [WBT+20] is implemented in _efficient_sample_matheron_rule().

See the tutorial notebooks Efficient sampling and Weight Space Approximation with Random Fourier Features for an in-depth overview.

abstract __call__(X: gpflow.base.TensorType) → tf.Tensor[source]#

Return the evaluation of the GP sample \(f(X)\) for \(f \sim GP(0, k)\).

Parameters:

X – The inputs, a tensor with the shape [N, D], where D is the input dimensionality.

Returns:

Function values, a tensor with the shape [N, P], where P is the output dimensionality.

__add__(other: Sample | Callable[[gpflow.base.TensorType], gpflow.base.TensorType]) → Sample[source]#

Allow the addition of two instances that implement the __call__ method.
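
To illustrate consistency and __add__, here is a toy sketch; FixedFunctionSample is hypothetical and trivially consistent (real subclasses achieve consistency by memoising a function draw):

import tensorflow as tf
from gpflux.models.deep_gp import Sample

class FixedFunctionSample(Sample):
    """Hypothetical 'sample' that is a fixed function, so consistency is trivial."""

    def __call__(self, X):
        return tf.sin(X)  # the same X always yields the same values

sample = FixedFunctionSample()
X = tf.constant([[0.1], [0.2]], dtype=tf.float64)
assert bool(tf.reduce_all(sample(X) == sample(X)))  # consistency

shifted = sample + (lambda X: 0.5 * tf.ones_like(X))  # summing with a callable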

class DeepGP(f_layers: List[gpflow.keras.tf_keras.layers.Layer], likelihood: gpflux.layers.LikelihoodLayer | gpflow.likelihoods.Likelihood, *, input_dim: int | None = None, target_dim: int | None = None, default_model_class: Type[gpflow.keras.tf_keras.Model] = tf_keras.Model, num_data: int | None = None)[source]#

Bases: gpflow.base.Module

This class combines a sequential function model f(x) = fₙ(⋯ (f₂(f₁(x)))) and a likelihood p(y|f).

Layers that inherit from LayerWithObservations can depend on both the inputs x and the targets y during training; these layers are passed the additional argument observations=[inputs, targets].

Data passed to the methods of this class (e.g. predict_f()) must have the dtype corresponding to GPflow’s default dtype, as given by default_float().

Note

This class is not a tf.keras.Model subclass itself. To access Keras features, call either as_training_model() or as_prediction_model() (depending on the use-case) to create a tf.keras.Model instance. See the method docstrings for more details.

Parameters:
  • f_layers – The layers [f₁, f₂, …, fₙ] describing the latent function f(x) = fₙ(⋯ (f₂(f₁(x)))).

  • likelihood – The layer for the likelihood p(y|f). If this is a GPflow likelihood, it will be wrapped in a LikelihoodLayer. Alternatively, you can provide a LikelihoodLayer explicitly.

  • input_dim – The input dimensionality.

  • target_dim – The target dimensionality.

  • default_model_class – The default for the model_class argument of as_training_model() and as_prediction_model(); see the default_model_class attribute.

  • num_data – The number of points in the training dataset; see the num_data attribute. If you do not specify a value for this parameter explicitly, it is automatically detected from the num_data attribute in the GP layers.
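
A minimal single-layer construction sketch; the gpflux.helpers functions and the toy data below are one way to set this up, not the only one:

import numpy as np
import gpflow
import gpflux

X = np.random.rand(100, 1)  # float64, matching GPflow's default_float()
Y = np.sin(10 * X) + 0.1 * np.random.randn(100, 1)

kernel = gpflux.helpers.construct_basic_kernel(
    gpflow.kernels.SquaredExponential(), output_dim=1, share_hyperparams=True
)
inducing_variable = gpflux.helpers.construct_basic_inducing_variables(
    num_inducing=20, input_dim=1, share_variables=True, z_init=X[:20].copy()
)
gp_layer = gpflux.layers.GPLayer(kernel, inducing_variable, num_data=len(X))

deep_gp = gpflux.models.DeepGP([gp_layer], gpflow.likelihoods.Gaussian())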

f_layers: List[gpflow.keras.tf_keras.layers.Layer][source]#

A list of all layers in this DeepGP; only the likelihood_layer is kept separate.

likelihood_layer: gpflux.layers.LikelihoodLayer[source]#

The likelihood layer.

default_model_class: Type[gpflow.keras.tf_keras.Model][source]#

The default for the model_class argument of as_training_model() and as_prediction_model(). This must have the same semantics as tf.keras.Model, that is, it must accept a list of inputs and an output. This could be tf.keras.Model itself or gpflux.optimization.NatGradModel (but not, for example, tf.keras.Sequential).
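
For example, a sketch that makes natural-gradient training the default (gp_layer as in the construction sketch above):

import gpflow
import gpflux

deep_gp = gpflux.models.DeepGP(
    [gp_layer],
    gpflow.likelihoods.Gaussian(),
    default_model_class=gpflux.optimization.NatGradModel,
)
# as_training_model() and as_prediction_model() now return NatGradModel instances.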

num_data: int[source]#

The number of points in the training dataset. This information is used to obtain correct scaling between the data-fit and the KL term in the evidence lower bound (elbo()).

static _validate_num_data(f_layers: List[gpflow.keras.tf_keras.layers.Layer], num_data: int | None = None) → int[source]#

Check that the num_data attributes of all layers in f_layers are consistent with each other and with the (optional) num_data argument.

Returns:

The validated number of datapoints.

static _validate_dtype(x: gpflow.base.TensorType) → None[source]#

Check that the data x has the correct dtype, corresponding to GPflow’s default dtype as defined by default_float().

Raises:

ValueError – If x is of incorrect dtype.

_evaluate_deep_gp(inputs: gpflow.base.TensorType, targets: gpflow.base.TensorType | None, training: bool | None = None) → tf.Tensor[source]#

Evaluate f(x) = fₙ(⋯ (f₂(f₁(x)))) on the inputs argument.

Layers that inherit from LayerWithObservations are passed the additional keyword argument observations=[inputs, targets] when targets is not None, and observations=None otherwise.

_evaluate_likelihood(f_outputs: gpflow.base.TensorType, targets: gpflow.base.TensorType | None, training: bool | None = None) → tf.Tensor[source]#

Call the likelihood_layer on f_outputs, which adds the corresponding layer loss when training.

predict_f(inputs: gpflow.base.TensorType) → Tuple[tf.Tensor, tf.Tensor][source]#

Returns:

The mean and variance (not the scale!) of f, for compatibility with GPflow models.

Raises:

ValueError – If inputs has an incorrect dtype.

Note

This method does not support full_cov or full_output_cov.
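
Usage sketch (with deep_gp and X as in the construction example above):

f_mean, f_var = deep_gp.predict_f(X)  # each of shape [num_data_points, output_dim]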

elbo(data: Tuple[gpflow.base.TensorType, gpflow.base.TensorType]) → tf.Tensor[source]#

Returns:

The ELBO (not the per-datapoint loss!), for compatibility with GPflow models.
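
Usage sketch (continuing the same example):

elbo = deep_gp.elbo((X, Y))  # scalar tf.Tensor, scaled to the full dataset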

as_training_model(model_class: Type[gpflow.keras.tf_keras.Model] | None = None) → gpflow.keras.tf_keras.Model[source]#

Construct a tf.keras.Model instance that requires you to provide both inputs and targets to its call. This information is required for training the model, because the targets need to be passed to the likelihood_layer (and to LayerWithObservations instances such as LatentVariableLayers, if present).

When compiling the returned model, do not provide any additional losses (this is handled by the likelihood_layer).

Train with

model = deep_gp.as_training_model()  # deep_gp is your DeepGP instance
model.compile(optimizer)  # do NOT pass a loss here
model.fit({"inputs": X, "targets": Y}, ...)

See Keras’s Endpoint layer pattern for more details.

Note

Use as_prediction_model() if you only want to predict and do not want to pass in a dummy array for the targets.

Parameters:

model_class – The model class to use; overrides default_model_class.

as_prediction_model(model_class: Type[gpflow.keras.tf_keras.Model] | None = None) → gpflow.keras.tf_keras.Model[source]#

Construct a tf.keras.Model instance that requires only inputs, which means you do not have to provide dummy target values when predicting at test points.

Predict with

model = deep_gp.as_prediction_model()
model.predict(Xtest, ...)

Note

The returned model will not support training; for that, use as_training_model.

Parameters:

model_class – The model class to use; overrides default_model_class.