gpflux.layers.latent_variable_layer#
This module implements a latent variable layer for deep GPs.
Module Contents#
- class TrackableLayer[source]#
Bases:
gpflow.keras.tf_keras.layers.Layer
With the release of TensorFlow 2.5, our TrackableLayer workaround is no longer needed. See Prowler-io/gpflux#189. This class will be removed in GPflux version 1.0.0.
- ObservationType[source]#
Type for the [inputs, targets] list used by LayerWithObservations.
- class LayerWithObservations[source]#
Bases:
gpflux.layers.trackable_layer.TrackableLayer
By inheriting from this class, Layers indicate that their call() method takes a second observations argument after the customary layer_inputs argument.
This is used to distinguish which layers (unlike most standard Keras layers) require the original inputs and/or targets during training. For example, it is used by the amortized variational inference in the LatentVariableLayer.
- abstract call(layer_inputs: gpflow.base.TensorType, observations: gpflux.types.ObservationType | None = None, training: bool | None = None) → tf.Tensor [source]#
The call() method of LayerWithObservations subclasses should accept a second argument, observations. In training mode, this will be the [inputs, targets] of the training points; otherwise, it is None.
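As a rough illustration of this contract, the hypothetical subclass below (not part of gpflux) accepts the extra observations argument and only touches it in training mode:

```python
import tensorflow as tf
from gpflux.layers.latent_variable_layer import LayerWithObservations


class PassThroughLayer(LayerWithObservations):
    """Hypothetical example layer: ignores the observations and returns its inputs."""

    def call(self, layer_inputs, observations=None, training=None):
        if training and observations is not None:
            inputs, targets = observations  # only provided during training
        return tf.identity(layer_inputs)
```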
- class LatentVariableLayer(prior: tfp.distributions.Distribution, encoder: gpflow.keras.tf_keras.layers.Layer, compositor: gpflow.keras.tf_keras.layers.Layer | None = None, name: str | None = None)[source]#
Bases:
LayerWithObservations
A latent variable layer, with amortized mean-field variational inference.
The latent variable is distribution-agnostic, but assumes a variational posterior that is fully factorised and is of the same distribution family as the prior.
This class is used by models as described in [DSHD18, SDHD19].
- Parameters:
prior – A distribution that represents the prior over the latent variable.
encoder – A layer which is passed the concatenated observation inputs and targets, and returns the appropriate parameters for the approximate posterior distribution; see encoder.
compositor – A layer that combines layer inputs and latent variable samples into a single tensor; see compositor. If you do not specify a value for this parameter, the default is tf.keras.layers.Concatenate(axis=-1, dtype=default_float()). Note that you should set dtype of the layer to GPflow’s default dtype as in default_float().
name – The name of this layer (passed through to tf.keras.layers.Layer).
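A minimal construction sketch under some assumptions: the prior is a diagonal Gaussian over a 1-dimensional latent variable, and the encoder is gpflux’s DirectlyParameterizedNormalDiag (any layer returning the posterior parameters would do); the sizes are illustrative only.

```python
import numpy as np
import tensorflow_probability as tfp
from gpflux.encoders import DirectlyParameterizedNormalDiag
from gpflux.layers import LatentVariableLayer

num_data, latent_dim = 100, 1  # illustrative sizes

# Prior p(w): standard normal over the latent variable (float64 matches GPflow's default).
prior = tfp.distributions.MultivariateNormalDiag(
    loc=np.zeros(latent_dim), scale_diag=np.ones(latent_dim)
)

# Encoder producing per-datapoint posterior parameters (means and stddevs).
encoder = DirectlyParameterizedNormalDiag(num_data, latent_dim)

# The default compositor (concatenation along the last axis) is used when none is given.
lv_layer = LatentVariableLayer(prior=prior, encoder=encoder)
```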
- prior: tfp.distributions.Distribution[source]#
The prior distribution for the latent variables.
- encoder: gpflow.keras.tf_keras.layers.Layer[source]#
An encoder that maps from a concatenation of inputs and targets to the parameters of the approximate posterior distribution of the corresponding latent variables.
- compositor: gpflow.keras.tf_keras.layers.Layer[source]#
A layer that takes as input the two-element [layer_inputs, latent_variable_samples] list and combines the elements into a single output tensor.
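For reference, the default compositor described in the parameter list above amounts to the following (shown here only as an illustration):

```python
import tensorflow as tf
from gpflow.config import default_float

# Concatenate [layer_inputs, latent_variable_samples] along the last axis,
# keeping GPflow's default float type.
default_compositor = tf.keras.layers.Concatenate(axis=-1, dtype=default_float())
```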
- call(layer_inputs: gpflow.base.TensorType, observations: gpflux.types.ObservationType | None = None, training: bool | None = None, seed: int | None = None) → tf.Tensor [source]#
Sample the latent variables and compose them with the layer input.
When training, draw a sample of the latent variable from the posterior, whose distribution is parameterised by the encoder mapping from the data. Also add a KL divergence [posterior∥prior] to the losses.
When not training, draw a sample of the latent variable from the prior.
- Parameters:
layer_inputs – The output of the previous layer.
observations – The [inputs, targets], with the shapes [batch size, Din] and [batch size, Dout] respectively. This parameter should be passed only when in training mode.
training – The training mode indicator.
seed – A random seed for the sampling operation.
- Returns:
Samples of the latent variable composed with the layer inputs through the compositor.
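A usage sketch, continuing the hypothetical lv_layer construction shown earlier; X, Y, and layer_inputs are made-up stand-ins for the training inputs, targets, and previous layer output.

```python
import numpy as np
from gpflow.config import default_float

X = np.random.randn(num_data, 2).astype(default_float())  # hypothetical inputs [N, Din]
Y = np.random.randn(num_data, 1).astype(default_float())  # hypothetical targets [N, Dout]
layer_inputs = X  # e.g. for the first layer, the previous "layer output" is the data

# Training mode: the encoder sees [X, Y], samples come from the approximate
# posterior, and a per-datapoint KL [posterior ∥ prior] is added to the losses.
outputs = lv_layer(layer_inputs, observations=[X, Y], training=True)

# Prediction mode: no observations, samples are drawn from the prior.
outputs = lv_layer(layer_inputs, training=False)
```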
- _inference_posteriors(observations: gpflux.types.ObservationType, training: bool | None = None) → tfp.distributions.Distribution [source]#
Return the posterior distributions parameterised by the encoder, which gets called with the concatenation of the inputs and targets in the observations argument.
Todo
We might want to change encoders to have a tfp.layers.DistributionLambda final layer that directly returns the appropriately parameterised distributions object.
- Parameters:
observations – The [inputs, targets], with the shapes [batch size, Din] and [batch size, Dout] respectively.
training – The training mode indicator (passed through to the encoder’s call).
- Returns:
The posterior distributions object.
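To make the Todo above concrete, here is a rough, hypothetical sketch (not the current gpflux API) of an encoder whose final tfp.layers.DistributionLambda returns the posterior distribution object directly:

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions
latent_dim = 1  # illustrative

# Hypothetical encoder: maps concatenated [inputs, targets] to a diagonal
# Gaussian posterior over the latent variable, returned as a distribution object.
encoder_with_distribution_head = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(2 * latent_dim),
    tfp.layers.DistributionLambda(
        lambda params: tfd.MultivariateNormalDiag(
            loc=params[..., :latent_dim],
            scale_diag=tf.nn.softplus(params[..., latent_dim:]),
        )
    ),
])
```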
- _inference_latent_samples_and_loss(layer_inputs: gpflow.base.TensorType, observations: gpflux.types.ObservationType, seed: int | None = None) → Tuple[tf.Tensor, tf.Tensor] [source]#
Sample latent variables during the training forward pass, hence requiring the observations. Also return the KL loss per datapoint.
- Parameters:
layer_inputs – The output of the previous layer (unused).
observations – The [inputs, targets], with the shapes [batch size, Din] and [batch size, Dout] respectively.
seed – A random seed for the sampling operation.
- Returns:
The samples and the loss-per-datapoint.
- _prediction_latent_samples(layer_inputs: gpflow.base.TensorType, seed: int | None = None) → tf.Tensor [source]#
Sample latent variables during the prediction forward pass, only depending on the shape of this layer’s inputs.
- Parameters:
layer_inputs – The output of the previous layer (for determining batch shape).
seed – A random seed for the sampling operation.
- Returns:
The samples.
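Roughly speaking, prediction-time sampling only needs the leading batch shape of the layer inputs in order to draw that many samples from the prior. A sketch of the idea (not the actual implementation), reusing lv_layer and layer_inputs from the earlier sketches:

```python
import tensorflow as tf

# Draw one prior sample per row of the layer inputs; only the shape is used.
batch_size = tf.shape(layer_inputs)[0]
prior_samples = lv_layer.prior.sample(batch_size)  # shape [batch_size, latent_dim]
```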
- _local_kls(posteriors: tfp.distributions.Distribution) → tf.Tensor [source]#
Compute the KL divergences [posteriors∥prior].
- Parameters:
posteriors – A distribution that represents the approximate posteriors.
- Returns:
The KL divergences from the prior for each of the posteriors.
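A minimal sketch of what a per-datapoint KL of this kind looks like with TensorFlow Probability, under the assumption that the prior and the approximate posteriors are both diagonal Gaussians (the parameter values are made up):

```python
import numpy as np
import tensorflow_probability as tfp

tfd = tfp.distributions
num_data, latent_dim = 100, 1  # illustrative

prior = tfd.MultivariateNormalDiag(
    loc=np.zeros(latent_dim), scale_diag=np.ones(latent_dim)
)
# Batch of approximate posteriors, one per datapoint (hypothetical parameters).
posteriors = tfd.MultivariateNormalDiag(
    loc=np.random.randn(num_data, latent_dim),
    scale_diag=np.full((num_data, latent_dim), 0.1),
)

# KL[q(w_n) ∥ p(w)] for each datapoint n; the prior broadcasts over the batch.
local_kls = tfd.kl_divergence(posteriors, prior)  # shape [num_data]
```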