gpflux.architectures

Pre-specified architectures

Package Contents

class Config

The configuration used by build_constant_input_dim_deep_gp().

num_inducing: int

The number of inducing variables, M. The Deep GP uses the same number of inducing variables in each layer.

inner_layer_qsqrt_factor: float

A multiplicative factor used to rescale the hidden layers’ q_sqrt. Typically this value is chosen to be small (e.g., 1e-5) to reduce noise at the start of training.

likelihood_noise_variance: float

The variance of the Gaussian likelihood that is used by the Deep GP.

whiten: bool = True

Determines the parameterisation of the inducing variables. If True, p(u) = N(0, I); otherwise, p(u) = N(0, K_uu).

See also: gpflux.layers.GPLayer.whiten
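For example, a Config might be constructed as follows; the numeric values are illustrative rather than recommended defaults:

    from gpflux.architectures import Config

    config = Config(
        num_inducing=100,                # M inducing variables, shared across layers
        inner_layer_qsqrt_factor=1e-5,   # small factor to damp noise early in training
        likelihood_noise_variance=1e-2,  # variance of the Gaussian likelihood
        whiten=True,                     # p(u) = N(0, I) parameterisation
    )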

build_constant_input_dim_deep_gp(X: numpy.ndarray, num_layers: int, config: Config) → gpflux.models.DeepGP

Build a Deep GP consisting of num_layers GPLayers. Each hidden layer has the same dimension as the input data, that is, X.shape[1].

The architecture is largely based on Salimbeni and Deisenroth [SD17], with the most notable difference being that we keep the hidden dimension equal to the input dimensionality of the data.

Note

This architecture might be slow for high-dimensional data.

Note

This architecture assumes a Gaussian likelihood for regression tasks. Specify a different likelihood for performing other tasks such as classification.

Parameters:
  • X – The training input data, used to determine the number of datapoints and the input dimension, and to initialise the inducing point locations using k-means. A rank-two tensor with shape [num_data, input_dim].

  • num_layers – The number of layers in the Deep GP.

  • config – The configuration for (hyper)parameters. See Config for details.
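A minimal usage sketch on toy regression data, assuming the Keras-style training wrapper exposed by gpflux.models.DeepGP (hyperparameter values are illustrative):

    import numpy as np
    import tensorflow as tf
    from gpflux.architectures import Config, build_constant_input_dim_deep_gp

    # Toy regression data: 100 datapoints with 2 input dimensions.
    X = np.random.randn(100, 2)
    Y = np.random.randn(100, 1)

    config = Config(
        num_inducing=25,
        inner_layer_qsqrt_factor=1e-5,
        likelihood_noise_variance=1e-2,
        whiten=True,
    )
    deep_gp = build_constant_input_dim_deep_gp(X, num_layers=2, config=config)

    # Wrap as a Keras model and fit; the training model expects a
    # {"inputs": ..., "targets": ...} dictionary.
    model = deep_gp.as_training_model()
    model.compile(tf.keras.optimizers.Adam(0.01))
    model.fit({"inputs": X, "targets": Y}, epochs=100, verbose=0)

Predictions can then be obtained through deep_gp.as_prediction_model().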