gpflux.sampling.kernel_with_feature_decomposition#

The classes in this module encapsulate kernels \(k(\cdot, \cdot)\) with their features \(\phi_i(\cdot)\) and coefficients \(\lambda_i\) so that:

\[k(x, x') = \sum_{i=0}^\infty \lambda_i \phi_i(x) \phi_i(x').\]

The kernels are used for efficient sampling. See the tutorial notebooks Efficient sampling and Weight Space Approximation with Random Fourier Features for an in-depth overview.
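The sampling trick these decompositions enable can be sketched in a few lines of NumPy. This is an illustrative example, not the gpflux API: the feature map below is a hand-rolled random Fourier construction for a unit-lengthscale squared-exponential kernel, and all names (`phi`, `lam`, `omega`, `tau`) are hypothetical. A draw from the approximate prior is just a random weighted sum of features, \(f(x) = \sum_i w_i \sqrt{\lambda_i} \phi_i(x)\) with \(w_i \sim \mathcal{N}(0, 1)\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hand-rolled random Fourier features for a unit-lengthscale RBF kernel:
# phi_i(x) = sqrt(2) * cos(omega_i @ x + tau_i), with constant lambda_i = 1/L.
L, D = 1000, 1
omega = rng.standard_normal((L, D))      # spectral frequencies, N(0, I)
tau = rng.uniform(0.0, 2.0 * np.pi, L)   # random phases

def phi(X):
    # Feature map: [N, D] -> [N, L]
    return np.sqrt(2.0) * np.cos(X @ omega.T + tau)

lam = np.full(L, 1.0 / L)                # constant feature coefficients

# One draw from the (approximate) GP prior is a weighted sum of features.
X = np.linspace(-3.0, 3.0, 50)[:, None]
w = rng.standard_normal(L)               # w_i ~ N(0, 1)
f = phi(X) @ (w * np.sqrt(lam))          # shape [N]
```

Unlike drawing from a multivariate normal at a fixed set of inputs, this produces an actual function: the same weight vector `w` can be evaluated at new inputs for O(L) cost per point, which is what makes the sampling efficient.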

Module Contents#

class _ApproximateKernel(feature_functions: gpflow.keras.tf_keras.layers.Layer, feature_coefficients: gpflow.base.TensorType)[source]#

Bases: gpflow.kernels.Kernel

This class approximates a kernel by the finite feature decomposition:

\[k(x, x') = \sum_{i=0}^{L-1} \lambda_i \phi_i(x) \phi_i(x'),\]

where \(\lambda_i\) and \(\phi_i(\cdot)\) are the coefficients and features, respectively.

Parameters:
  • feature_functions – A Keras layer for which the call evaluates the L features of the kernel \(\phi_i(\cdot)\). For X with the shape [N, D], feature_functions(X) returns a tensor with the shape [N, L].

  • feature_coefficients – A tensor with the shape [L, 1] with coefficients associated with the features, \(\lambda_i\).

K(X: gpflow.base.TensorType, X2: gpflow.base.TensorType | None = None) tf.Tensor[source]#

Approximate the true kernel by an inner product between feature functions.

K_diag(X: gpflow.base.TensorType) tf.Tensor[source]#

Approximate the diagonal of the true kernel, \(k(x, x)\), by an inner product between feature functions.
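In plain NumPy, the computation behind K() and K_diag() is a weighted inner product of feature evaluations. The sketch below uses an illustrative Mercer-style decomposition (monomial features \(\phi_i(x) = x^i\) with \(\lambda_i = 1/i!\), whose series converges to \(k(x, x') = \exp(x x')\)); the feature map and names are hypothetical, not part of gpflux:

```python
import numpy as np
from math import factorial

# Illustrative decomposition: phi_i(x) = x^i, lambda_i = 1/i!, so that
# sum_i lambda_i phi_i(x) phi_i(x') converges to exp(x * x').
L = 20
lam = np.array([1.0 / factorial(i) for i in range(L)])  # shape [L]

def phi(X):
    # Feature evaluations: [N, 1] -> [N, L]
    return X ** np.arange(L)

def K(X, X2=None):
    # Full Gram matrix: Phi(X) @ diag(lam) @ Phi(X2).T, shape [N, M]
    if X2 is None:
        X2 = X
    return (phi(X) * lam) @ phi(X2).T

def K_diag(X):
    # Diagonal only: k(x, x) = sum_i lambda_i phi_i(x)^2, shape [N]
    return np.sum(lam * phi(X) ** 2, axis=1)

X = np.linspace(-1.0, 1.0, 5)[:, None]
```

For inputs in \([-1, 1]\), twenty terms of this series already match \(\exp(x x')\) to machine precision, so `K(X)` is numerically indistinguishable from the closed-form kernel; `K_diag` avoids forming the full Gram matrix when only the diagonal is needed.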

class KernelWithFeatureDecomposition(kernel: gpflow.kernels.Kernel | None, feature_functions: gpflow.keras.tf_keras.layers.Layer, feature_coefficients: gpflow.base.TensorType)[source]#

Bases: gpflow.kernels.Kernel

This class represents a kernel together with its finite feature decomposition:

\[k(x, x') = \sum_{i=0}^{L-1} \lambda_i \phi_i(x) \phi_i(x'),\]

where \(\lambda_i\) and \(\phi_i(\cdot)\) are the coefficients and features, respectively.

The decomposition can be derived from Mercer's theorem or Bochner's theorem. For example, the feature-coefficient pairs could be eigenfunction-eigenvalue pairs (Mercer) or Fourier features with constant coefficients (Bochner).

In some cases (e.g., [1] and [2]) the left-hand side (that is, the covariance function \(k(\cdot, \cdot)\)) is unknown and the kernel can only be approximated using its feature decomposition. In other cases (e.g., [3] and [4]), both the covariance function and feature decomposition are available in closed form.
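The Bochner case can be checked numerically: random Fourier features with constant coefficients converge to the closed-form squared-exponential kernel as the number of features grows. A minimal sketch with hypothetical names (unit lengthscale and variance; these are not the gpflux feature classes):

```python
import numpy as np

rng = np.random.default_rng(42)

# Bochner construction: for the unit squared-exponential kernel, sample
# spectral frequencies omega ~ N(0, I) and phases tau ~ U[0, 2*pi]; the
# features sqrt(2) * cos(omega @ x + tau) with constant coefficients 1/L
# give an unbiased Monte Carlo estimate of the kernel.
L, D = 5000, 1
omega = rng.standard_normal((L, D))
tau = rng.uniform(0.0, 2.0 * np.pi, L)
lam = np.full(L, 1.0 / L)

def phi(X):
    # [N, D] -> [N, L]
    return np.sqrt(2.0) * np.cos(X @ omega.T + tau)

X = np.linspace(-2.0, 2.0, 8)[:, None]
K_approx = (phi(X) * lam) @ phi(X).T        # finite-feature sum
K_exact = np.exp(-0.5 * (X - X.T) ** 2)     # closed-form RBF kernel
err = np.max(np.abs(K_approx - K_exact))    # shrinks as O(1/sqrt(L))
```

This mirrors the two situations above: when the closed form is known, `K_exact` can be used directly and the features serve only for sampling; when it is not, `K_approx` is the best available evaluation of the covariance.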

Parameters:
  • kernel

    The kernel corresponding to the feature decomposition. If None, there is no analytical expression associated with the infinite sum, and the kernel is approximated from its finite feature decomposition.

    Note

    Passing None is allowed precisely when no analytical expression for the kernel is available: K() and K_diag() are then computed using the approximation provided by the feature decomposition.

  • feature_functions – A Keras layer for which the call evaluates the L features of the kernel \(\phi_i(\cdot)\). For X with the shape [N, D], feature_functions(X) returns a tensor with the shape [N, L].

  • feature_coefficients – A tensor with the shape [L, 1] with coefficients associated with the features, \(\lambda_i\).

property feature_functions: gpflow.keras.tf_keras.layers.Layer[source]#

Return the kernel’s features \(\phi_i(\cdot)\).

property feature_coefficients: tf.Tensor[source]#

Return the kernel’s coefficients \(\lambda_i\).