gpflux.encoders
Encoders are used by LatentVariableLayers to parametrize the approximate posterior distribution over latent variables.
Package Contents
- class DirectlyParameterizedNormalDiag(num_data: int, latent_dim: int, means: numpy.ndarray | None = None)
  Bases: gpflux.layers.TrackableLayer
This class implements direct parameterisation of the Normally-distributed posterior of the latent variables. For each latent variable, a mean and standard deviation parameterising a mean-field Normal distribution are created and learned during training. This type of encoder is computationally inefficient for larger datasets, but it can greatly simplify training, because no neural network is required to learn an amortized mapping.
.. note::
   No amortization is used; each datapoint has an associated mean and standard deviation. This is not compatible with mini-batching.
See Dutordoir et al. [DSHD18] for a more thorough explanation of latent variable models and encoders.
Directly parameterise the posterior of the latent variables associated with each datapoint with a diagonal multivariate Normal distribution. Note that across latent variables we assume a mean-field approximation.
- Parameters:
  - num_data – The number of datapoints, ``N``.
  - latent_dim – The dimensionality of the latent variable, ``W``.
  - means – The initialisation of the mean of the latent variable posterior distribution (see ``means``). If ``None`` (the default setting), set to ``np.random.randn(N, W) * 0.01``; otherwise, ``means`` should be an array of rank two with the shape ``[N, W]``.
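The default initialisation of ``means`` can be sketched in plain NumPy (an illustrative sketch with example values for ``N`` and ``W``, not the gpflux implementation itself):

```python
import numpy as np

N, W = 100, 2  # example num_data and latent_dim

# Default initialisation when `means` is None: small random values,
# so training starts close to the prior mean of the latent variables.
means = np.random.randn(N, W) * 0.01

print(means.shape)  # (100, 2)
```

Passing an explicit array of shape ``[N, W]`` instead overrides this default.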
- means: gpflow.Parameter
  Each row contains the value of the mean for a latent variable in the model. ``means`` is a tensor of rank two with the shape ``[N, W]``, because we have the same number of latent variables as datapoints, and each latent variable is ``W``-dimensional. Consequently, the mean for each latent variable is also ``W``-dimensional.
- stds: gpflow.Parameter
  Each row contains the values of the diagonal covariance for a latent variable. ``stds`` is a tensor of rank two with the shape ``[N, W]``, because we have the same number of latent variables as datapoints, and each latent variable is ``W``-dimensional. Consequently, the diagonal of the square covariance matrix for each latent variable is also ``W``-dimensional. Initialised to ``1e-5 * np.ones((N, W))``.
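Putting the pieces together, the direct parameterisation described above can be mimicked with a minimal NumPy sketch (illustrative only; the class name below is hypothetical, and the real gpflux class stores ``means`` and ``stds`` as trainable ``gpflow.Parameter`` objects):

```python
import numpy as np

class DirectlyParameterizedNormalDiagSketch:
    """Plain-NumPy sketch of a per-datapoint mean-field Normal posterior."""

    def __init__(self, num_data, latent_dim, means=None):
        if means is None:
            # Default initialisation: small random means
            means = np.random.randn(num_data, latent_dim) * 0.01
        self.means = means                                   # shape [N, W]
        self.stds = 1e-5 * np.ones((num_data, latent_dim))   # shape [N, W]

    def sample(self):
        # One diagonal-Normal sample per datapoint (reparameterisation form)
        return self.means + self.stds * np.random.randn(*self.means.shape)

enc = DirectlyParameterizedNormalDiagSketch(num_data=5, latent_dim=2)
print(enc.sample().shape)  # (5, 2)
```

Because every datapoint owns its own row of ``means`` and ``stds``, there is no amortized mapping from inputs to posterior parameters, which is why this encoder is incompatible with mini-batching.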