gpflux.encoders.directly_parameterized_encoder#
An “encoder” for parameterising latent variables. Does not work with mini-batching.
Module Contents#
- exception EncoderInitializationError[source]#
Bases:
Exception
This exception is raised by an encoder (e.g. `DirectlyParameterizedNormalDiag`) when parameters are not initialised correctly.
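A minimal sketch of how this exception can surface; it assumes (this is an assumption, not stated above) that a `means` array whose shape does not match `[num_data, latent_dim]` counts as an incorrect initialisation:

```python
import numpy as np

from gpflux.encoders.directly_parameterized_encoder import (
    DirectlyParameterizedNormalDiag,
    EncoderInitializationError,
)

try:
    # means is expected to have shape [num_data, latent_dim] = [10, 2];
    # here the leading dimension is deliberately wrong.
    DirectlyParameterizedNormalDiag(num_data=10, latent_dim=2, means=np.zeros((5, 2)))
except EncoderInitializationError as exc:
    print("Encoder rejected badly shaped means:", exc)
```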
- class TrackableLayer[source]#
Bases:
gpflow.keras.tf_keras.layers.Layer
With the release of TensorFlow 2.5, our TrackableLayer workaround is no longer needed. See Prowler-io/gpflux#189. Will be removed in GPflux version 1.0.0.
- class DirectlyParameterizedNormalDiag(num_data: int, latent_dim: int, means: numpy.ndarray | None = None)[source]#
Bases:
gpflux.layers.TrackableLayer
This class implements direct parameterisation of the Normally-distributed posterior of the latent variables. A mean and a standard deviation that parameterise a mean-field Normal distribution for each latent variable are created and learned during training. This type of encoder is not computationally efficient for larger datasets, but it can greatly simplify training, because no neural network is required to learn an amortized mapping.
- Note: No amortization is used; each datapoint has an associated mean and standard deviation. This is not compatible with mini-batching.
See Dutordoir et al. [DSHD18] for a more thorough explanation of latent variable models and encoders.
Directly parameterise the posterior of the latent variables associated with each datapoint using a diagonal multivariate Normal distribution. Note that across latent variables we assume a mean-field approximation. A construction sketch follows the parameter list below.
- Parameters:
  - num_data – The number of datapoints, `N`.
  - latent_dim – The dimensionality of the latent variable, `W`.
  - means – The initialisation of the mean of the latent variable posterior distribution (see `means`). If `None` (the default setting), it is set to `np.random.randn(N, W) * 0.01`; otherwise, `means` should be an array of rank two with the shape `[N, W]`.
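A minimal construction sketch based on the signature above, assuming the class is re-exported from `gpflux.encoders` (otherwise import it from this module directly); `N`, `W`, and `initial_means` are illustrative names:

```python
import numpy as np

from gpflux.encoders import DirectlyParameterizedNormalDiag

N, W = 100, 2  # number of datapoints and latent dimensionality
initial_means = 0.01 * np.random.randn(N, W)  # same scheme as the default initialisation

encoder = DirectlyParameterizedNormalDiag(num_data=N, latent_dim=W, means=initial_means)

# One [N, W] parameter each for the means and standard deviations:
print(encoder.means.shape)  # (100, 2)
print(encoder.stds.shape)   # (100, 2)
```

Because every datapoint owns its own row of variational parameters, the encoder must always see the full dataset, which is why mini-batching is not supported.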
- means: gpflow.Parameter[source]#
Each row contains the value of the mean for a latent variable in the model. `means` is a tensor of rank two with the shape `[N, W]`, because we have the same number of latent variables as datapoints, and each latent variable is `W`-dimensional. Consequently, the mean for each latent variable is also `W`-dimensional.
- stds: gpflow.Parameter[source]#
Each row contains the diagonal covariance values for a latent variable. `stds` is a tensor of rank two with the shape `[N, W]`, because we have the same number of latent variables as datapoints, and each latent variable is `W`-dimensional. Consequently, the diagonal of the square covariance matrix for each latent variable is also `W`-dimensional. Initialised to `1e-5 * np.ones((N, W))`.
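A short sketch that checks the documented `stds` initialisation (illustrative only, not an official test):

```python
import numpy as np

from gpflux.encoders import DirectlyParameterizedNormalDiag

N, W = 20, 3
encoder = DirectlyParameterizedNormalDiag(num_data=N, latent_dim=W)

stds = encoder.stds.numpy()      # gpflow.Parameter -> numpy array of shape (N, W)
assert stds.shape == (N, W)
assert np.allclose(stds, 1e-5)   # matches the documented 1e-5 * np.ones((N, W)) initialisation
```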